I'm currently on the new PC...
($270) Aorus X570 Pro WiFi
($319) Ryzen 7 3700X
($104) Ballistix Elite DDR4-3600 CAS16 8GBx2
($100) 2-Pack Inland Premium 512GB Phison E12 PCIe Gen3x4 SSDs
($140) XFX Radeon RX 580 GTS XXX Edition OC+ 8GB, 1386MHz
($60) UpHere 240mm AIO Liquid Cooler
($37) DeepCool MATREXX 55 ADD-RGB Mid-Tower Case with 240mm top-rad config, modded with an independent digital thermometer ($6) in the front panel & a 2.5" SSD bay ($14)
($30) ExcelVan 1000W GPU Mining PSU
($26) Arctic P12 PWM/PST Fans 5-Pack
($1106 Total)
After a solid day of tinkering with RAM timings, both manually and with 1usmus' timing calculator, pretty much the only thing I took away from the experience was his RAM voltage recommendation for my Micron E-die Ballistix RAM.
By upping that to 1.39V, I was able not only to get the PC to POST with XMP enabled, but to OC the RAM up to 4200MHz. However, this threw the memory-to-fabric clock ratio totally out to lunch, with latencies in the high-70ns range or worse. The best I was able to get otherwise was 68.8ns with CAS manually set to 15 and all other parameters on Auto; it wanted to run in the mid-70s using every variant of 1usmus' calculated settings I could think to try.
I was able to bring that down to 67.4ns at a 4367MHz CPU clock just by leaving RAM voltage at 1.39V, enabling PBO + XMP, then setting RAM OC to "Enabled" with every single timing at Auto. FCLK auto-configures to within a few MHz either side of 1800MHz with RAM at 3800MHz 16-18-18-38 or 16-17-17-38. Now admittedly, I don't know what I'm doing in there anymore, either; the terms have changed and so has the technology since I was into "EXTREME!!!"-ness on the Personal Confuser stage. But I'll wager I'm still more knowledgeable than most, and I STILL got better, more consistent results letting the BIOS manage pretty much everything. The main variable always boiled down to what OC frequency the BIOS decided to run the CPU at; a few hundred MHz one way or the other.
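If you want to sanity-check the CAS-vs-frequency tradeoff yourself, a quick back-of-the-envelope calc does the trick. This only covers first-word CAS latency, not the full round-trip number a benchmark reports (which adds tRCD and the fabric/controller hop), and the CL18 timing at 4200 is just an assumption for illustration, not something I recorded:

# Rough first-word latency: CAS cycles divided by the memory clock (half the
# DDR transfer rate). Ignores tRCD/tRP and the Infinity Fabric hop, so it
# won't match a full round-trip benchmark figure, but it shows the tradeoff.
def cas_ns(data_rate_mts: float, cas: int) -> float:
    clock_mhz = data_rate_mts / 2      # DDR4-3800 -> 1900MHz memory clock
    return cas / clock_mhz * 1000      # cycles / MHz -> nanoseconds

for rate, cl in [(3600, 16), (3800, 16), (4200, 18)]:
    print(f"DDR4-{rate} CL{cl}: {cas_ns(rate, cl):.2f} ns")
# DDR4-3600 CL16: 8.89 ns
# DDR4-3800 CL16: 8.42 ns
# DDR4-4200 CL18: 8.57 ns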
When AMD said they weren't leaving anything on the table with this product refresh, they meant it. Even with everything on auto-OC, I'm approaching the overall latency of similarly rated Samsung B-die RAM costing 150-200% of the price of my $104 kit, so I'm still pretty pleased in general.
Issues with my Aorus X570 Elite MB not booting with QVL-recommended RAM forced me to send it back and go higher up the food chain locally (okay... I also sold myself on the upgrade after researching the differences more carefully than the first time around), replacing it with one that was actually NEW, not a repacked customer return or prerelease test unit. I paid $270 for this privilege.
While I was at the Center of Micros here in Houston picking up that MB, I also noodled around and found they had Inland-branded NVMe Gen3x4 SSDs using the Phison E12 controller, specced in the same ballpark as a Samsung 970 EVO Plus at about half the price. Plus another $10 off from price-matching the next cheaper model, which was out of stock. At first I was only going to get one, but the extra discount gave me an idea for an experiment, so I bought 2 for $99.98. I figured worst case, they could go in laptops to speed them up.
Historically, one of the biggest gripes speed-obsessed users have had with AMD is that only the first M.2 slot operates at full speed, as it is served by the CPU; the 2nd (or however many more you have) are served by the chipset and never reach similar throughput. This has resulted in the common configuration of a fast, smaller boot drive plus a second larger, cheaper drive for data/apps, so loading could be better distributed. The following experiment doesn't exactly invalidate that approach, but it does offer an inexpensive alternative for those curious about the crazy speeds being promised with this generation of SSDs.
Yes, there's a lot of technical stuff involved, but AMD and the major MB manufacturers have promised that this is not the case with this release, even touting the potential for massive throughput with 2 or more PCIe 4.0 drives in RAID0. Gigabyte recently demo'd a 4-SSD RAID expansion card yielding 15,385/15,509MBps. Current leaders are Gigabyte's own Aorus PCIe 4.0 SSD at around $260 for the 1TB and $460 for the 2TB; these spec at ~5000MBps/4400MBps top speeds. Close second is Sabrent's Rocket PCIe 4.0 lineup promising the same top speeds at $199/1TB and $399/2TB.
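For context on what the interface itself even allows, here's a quick ceiling calculation straight from the PCIe link rates (128b/130b encoding). The perfect-scaling RAID0 line is purely theoretical, not something any real array hits:

# Theoretical PCIe link ceilings per direction (128b/130b encoding), before
# NVMe protocol and controller overhead -- real drives land below these.
def pcie_gb_per_s(transfer_rate_gtps: float, lanes: int) -> float:
    return transfer_rate_gtps * (128 / 130) / 8 * lanes

gen3_x4 = pcie_gb_per_s(8, 4)     # ~3.94 GB/s: cap for one Gen3 NVMe drive
gen4_x4 = pcie_gb_per_s(16, 4)    # ~7.88 GB/s: cap for one Gen4 NVMe drive

print(f"Gen3 x4 ceiling:                {gen3_x4:.2f} GB/s")
print(f"Gen4 x4 ceiling:                {gen4_x4:.2f} GB/s")
print(f"2x Gen3 RAID0, perfect scaling: {2 * gen3_x4:.2f} GB/s")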
Now the main experiment: the cheapest possible current-tech Phison E12-based NVMe SSDs, straight up and in RAID0, to see what numbers I get.

After all this benchmarking, it appears AMD/Gigabyte's claims of the 2nd M.2 slot FINALLY being able to operate at similar speeds to slot one ARE validated, even with the cheaper PCIe 3.0 NVMe drives that ordinary mortals are likely to buy. When PCIe 4.0 SSDs drop down to sane prices, I'll see if this MB can yield similarly matched results at their higher rated speeds.
Numbers-wise, performance is VERY comparable to the 2 current leaders at 1/4-1/5 the price, with the biggest penalties being slightly slower small-file read speeds compared to these PCIe 4.0 drives and a noticeable net loss of medium-to-small-file write speeds, even compared to one of these drives running on its own. That latter is a loss you'll feel in everyday use, much more than the small-file read penalty.
But... the big BUT: POST takes longer due to having to initialize the array every time. In my case, this translated to 25-28 second boot times compared to 12-16 seconds. The solution for me will be to heavily abuse "Sleep" mode rather than allowing the PC to shut down completely.
Remember too that even these PCIe 4.0 drives are still based on the Phison E16 architecture; this is NOT 100% "next-gen" tech. It is a fast PCIe 4.0 controller designed to drive current-gen NAND. The NEXT generation after this will be the one truly designed with PCIe 4.0 in mind from the ground up; Samsung is already halfway there, I'm sure. There's a reason I wasn't too sanguine about spending THAT kind of money... while I want the faster foundation X570 provides, I'm not so excited about paying a premium price for an intermediate-step SSD. But... this is how these things progress. Now that the foundation is out there, people will build faster SSDs (and faster everything else), and before we know it, we'll be taking these speeds for granted and wanting more.
I can especially see the explosion in consumer-level VR driving this; the demand for ever-higher 3D resolutions and faster, more accurate tracking is already taxing the limits of USB 3. People are already spending megabucks to get the equivalent of HDMI 1.4 and several channels of USB 3.0 in a wireless add-on the size of a brick. Demand for bandwidth is only going to increase.
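To put rough numbers on that (the headset spec below is an assumption for illustration, roughly Vive Pro class, not anything I've measured): an uncompressed stereo feed already blows well past USB 3.0's 5Gbps raw rate, which is part of why those wireless kits lean on dedicated links plus compression:

# Back-of-the-envelope uncompressed video bandwidth for a VR headset.
# Panel spec is an assumption for illustration (~Vive Pro class):
# 2880x1600 combined resolution, 90Hz refresh, 24 bits per pixel.
width, height, refresh_hz, bits_per_pixel = 2880, 1600, 90, 24

gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Uncompressed feed:   {gbps:.1f} Gbps")       # ~10.0 Gbps
print(f"vs USB 3.0 (5 Gbps): {gbps / 5:.1f}x over")  # ~2x the raw link rate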
The TL;DR: Somewhat decreased small-file read performance and medium-to-small-file write performance, plus longer boot times in RAID, compared to a single connected drive. Otherwise, these $59 SSDs produced very comparable numbers to premium PCIe 3.0 drives costing 2x as much; in RAID, speeds comparable to (in some cases, far exceeding) PCIe 4.0 drives costing 5x as much. Definitely a worthwhile experiment.

mnem
The Three Laws of Thermodynamics:
1. You can't win.
2. You can't break even.
3. You can't even get out of the game.