Testing PCIe 4.0

It's been over a year since the first consumer CPUs and SSDs supporting PCIe 4.0 hit the market, so we're a bit overdue for a testbed upgrade. Our Skylake system was adequate for even the fastest PCIe gen3 drives, but it has finally become a serious bottleneck.

We have years of archived results from the old testbed, which are still relevant to the vast majority of SSDs and computers out there that do not yet support PCIe gen4. We're not ready to throw out all that work quite yet; we will still be adding new test results measured on the old system until PCIe gen4 support is more widespread, or my office gets too crowded with computers—whichever happens first. (Side note: some rackmount cases for all these test systems would be greatly appreciated.)

AnandTech 2017-2020 Skylake Consumer SSD Testbed
CPU: Intel Xeon E3 1240 v5
Motherboard: ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset: Intel C232
Memory: 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
Software: Windows 10 x64, version 1709
Linux kernel version 4.14, fio version 3.6
Spectre/Meltdown microcode and OS patches current as of May 2018

Since introducing the Skylake SSD testbed in 2017, we have made a few changes to our testing configurations and procedures. In December 2017, we started using a Quarch XLC programmable power module (PPM), which provides far more detailed and accurate power measurements than our old multimeter setup. In May 2019, we upgraded to a Quarch HD PPM, which can automatically compensate for voltage drop along the power cable to the drive. This allowed us to measure M.2 PCIe SSD power more directly: these drives can pull well over 2A from the 3.3V supply, which can easily produce more than the 5% supply voltage drop that drives are supposed to tolerate. At the same time, we introduced a new set of idle power measurements conducted on a newer Coffee Lake system. This is our first (and for the moment, only) SSD testbed capable of using the full range of PCIe power management features without crashes or other bugs, which allowed us to start reporting idle power levels for typical desktop and best-case laptop configurations.
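To illustrate why cable voltage drop matters for M.2 power measurement, here is a back-of-the-envelope model. All numbers (current draw, cable resistance) are illustrative assumptions, and this is not Quarch's actual compensation algorithm:

```python
# Toy model of I*R voltage drop along an M.2 power cable.
# The resistance and current figures below are assumed, not measured.

def drive_side_voltage(supply_v: float, current_a: float, cable_ohms: float) -> float:
    """Voltage actually seen at the drive after I*R loss in the cable."""
    return supply_v - current_a * cable_ohms

supply = 3.3        # nominal 3.3 V rail
current = 2.5       # a fast M.2 drive can pull well over 2 A
cable = 0.05        # assumed round-trip cable/connector resistance in ohms

v_drive = drive_side_voltage(supply, current, cable)
drop_pct = (supply - v_drive) / supply * 100

print(f"drive sees {v_drive:.3f} V ({drop_pct:.1f}% drop)")
# With these assumed numbers the drop already approaches the 5% tolerance
# floor (3.3 V - 5% = 3.135 V), which is why remote compensation at the
# drive end of the cable improves measurement accuracy.
```

With these assumptions, the drive sees about 3.175 V, a roughly 3.8% drop before any load transients are considered.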

Coffee Lake SSD Testbed for Idle Power
CPU: Intel Core i7-8700K
Motherboard: Gigabyte Aorus H370 Gaming 3 WiFi
Memory: 2x 8GB Kingston DDR4-2666

On the software side, the disclosure of the Meltdown and Spectre CPU vulnerabilities at the beginning of 2018 led to numerous mitigations that affected overall system performance. The most severe effects were to system call overhead, which has a measurable impact on high-IOPS synthetic benchmarks. In May 2018, after the dust started to settle from the first round of vulnerability disclosures, we updated the firmware, microcode and operating systems on our testbed and took the opportunity to slightly tweak some of our synthetic benchmarks. Our pre-Spectre results are archived in the SSD 2017 section of our Bench database, while the current post-Spectre results are in the SSD 2018 section. Of course, many further related CPU security vulnerabilities have been found since May 2018, along with many changes to the mitigation techniques. Our SSD testing has not tracked those software and microcode updates, to avoid invalidating previous scores yet again. However, our new gen4-capable Ryzen test system is fully up to date with the latest firmware, microcode and OS versions.
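The sensitivity of high-IOPS tests to system call overhead is easy to see with a toy model. The per-I/O costs below are purely illustrative assumptions, not measurements from either testbed:

```python
# Illustrative model: if each I/O requires a fixed amount of CPU time on
# one thread, that time directly caps the achievable IOPS.

def max_iops(per_io_us: float) -> float:
    """Single-threaded IOPS ceiling if each I/O costs per_io_us microseconds."""
    return 1_000_000 / per_io_us

base = max_iops(2.0)        # assume 2.0 us of syscall + driver work per I/O
mitigated = max_iops(2.6)   # assume mitigations add 0.6 us per I/O

print(f"{base:,.0f} -> {mitigated:,.0f} IOPS "
      f"({(1 - mitigated / base) * 100:.0f}% lower ceiling)")
# A sub-microsecond increase per call is invisible at low queue depths,
# where the drive is the bottleneck, but it directly lowers the ceiling
# of a single-threaded high-queue-depth random I/O benchmark.
```

Under these assumed numbers, an extra 0.6 µs per I/O cuts the single-threaded ceiling from 500k to roughly 385k IOPS, which is why mitigation changes show up mainly in the high-queue-depth random I/O results.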

AnandTech Ryzen PCIe 4.0 Consumer SSD Testbed
CPU: AMD Ryzen 5 3600X
Motherboard: ASRock B550 Pro
Memory: 2x 16GB Mushkin DDR4-3600
Software: Linux kernel version 5.8, fio version 3.23

Our new PCIe 4 test system uses an AMD Ryzen 5 3600X processor and an ASRock B550 motherboard. This provides PCIe 4 lanes from the CPU but not from the chipset. Whenever possible, we test NVMe SSDs with CPU-provided PCIe lanes rather than going through the chipset, so the lack of PCIe gen4 from the chipset isn't an issue. (We had a similar situation back when we were using a Haswell system that supported gen3 on the CPU lanes but only gen2 on the chipset.) Going with B550 instead of X570 also avoids the potential noise of a chipset fan. The DDR4-3600 is a big jump compared to our previous testbed, but is a fairly typical speed for current desktop builds and is a reasonable overclock. We're using the stock Wraith Spire 2 cooler; our current SSD tests are mostly single-threaded, so there's no need for a bigger heatsink.

For now, we are still using the same test scripts to generate the same workloads as on our older Skylake testbed. We haven't tried to control for all possible factors that could lead to different scores between the two testbeds. For this review, we have re-tested several drives on the new testbed to illustrate the scale of these effects. In future reviews, we will be rolling out new synthetic benchmarks that will not be directly comparable to the tests in this review and past reviews. Several of our older benchmarks do a poor job of capturing the behavior of the increasingly common QLC SSDs, but that's not important for today's review. The performance differences between new and old testbeds should be minor, except where the CPU speed is a bottleneck. This mostly happens when testing random IO at high queue depths.

More important for today is the fact that our old benchmarks only test queue depths up to 32 (the limit for SATA drives), and that's not always enough to use the full theoretical performance of a high-end NVMe drive—especially since our old tests only use one CPU core to stress the SSD. We'll be introducing a few new tests to better show these theoretical limits, but unfortunately the changes required to measure those advertised speeds also make the tests much less realistic for the context of desktop workloads, so we'll continue to emphasize the more relevant low queue depth performance.
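For reference, pushing past QD32 in fio means either raising iodepth with an asynchronous I/O engine or running several jobs in parallel. A minimal sketch of such a job file follows; the job names, device path, and parameter values are illustrative, not our actual test scripts:

```ini
; hypothetical-highqd.fio -- illustrative only, not the AnandTech test suite
[global]
ioengine=libaio     ; async engine, so iodepth above 1 is meaningful
direct=1            ; bypass the page cache
rw=randread
bs=4k
runtime=60
time_based=1

[qd128-single-core]
filename=/dev/nvme0n1
iodepth=128         ; well past the SATA-era QD32 ceiling
numjobs=1           ; one core, as in our older tests; raise to lift the CPU cap
```

Saturating a high-end gen4 drive's advertised random read ratings generally needs both the deep queue and more than one submitting core, which is exactly why such numbers say little about desktop workloads.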


Comments

  • Tomatotech - Wednesday, September 23, 2020 - link

    Updating my comment - StorageReview tested the 980 Pro with enterprise workloads. It seems a fantastic performer there, with some of the highest numbers I’ve ever seen, especially in random 4K r/w, which is an area I’ve long felt nvme was neglecting. The 980 Pro is a drive that finally performs well in this area.

    That said, that performance requires a monster 128 queue depth which is fine in enterprise but is very rarely seen in desktop computing. Oh well, it’s called Pro for a reason. That aspect of its performance justifies the price in my view.

  • Someguyperson - Tuesday, September 22, 2020 - link

    Why haven't you tested any Phison E16 drives yet? I get that power consumption was seen as an issue, but with these drives pulling 20 watts, I don't think Phison E16 drives would be all that different. That said, the only way to validate any of those claims is by actually testing the drives. Which you haven't done yet.
  • Slash3 - Tuesday, September 22, 2020 - link

    The Firecuda 520 is a Phison E16 design.
  • londedoganet - Tuesday, September 22, 2020 - link

    > Samsung Elpis

    ouk élabon pólin; álla gàr elpìs éphē kaká [lit. "they did not take the city; but hope said evil things"]
  • pogostick - Friday, October 2, 2020 - link

    I have no idea what this says, but I know exactly what it says.
  • Duncan Macdonald - Tuesday, September 22, 2020 - link

    An extreme endurance drive (all SLC) would seem to be a useful niche product for some users. It should be possible to produce such a drive with just a software modification to the controller. Obviously the cost/GB would be much higher but for some uses the extra cost would be worth it.
    (The same amount of NAND that would provide 2TB in TLC mode would only provide around 600GB in SLC mode.)
  • Tomatotech - Tuesday, September 22, 2020 - link

    You’re talking about enterprise SSDs. They’re over that way. And one was included in the testing in this very article you’re reading.
  • FunBunny2 - Wednesday, September 23, 2020 - link

    "(The same amount of NAND that would provide 2TB in TLC mode would only provide around 600GB in SLC mode.)"

    if memory serves, at least one of the AT SSD reviewers has pointed out that TLC/QLC NAND run in 'SLC mode' isn't actually SLC. and doesn't perform like it.
  • CheapSushi - Thursday, December 17, 2020 - link

    Get OPTANE. Why do so many people constantly overlook Optane? Optane has even higher endurance than SLC.
  • lilmoe - Tuesday, September 22, 2020 - link

    With the move to 128l 8nm NAND, I was hoping for MLC with higher capacity, faster performance and lower prices at the same endurance level of the 970 Pro.

    But with TLC, this is still significantly more expensive than the EVO Plus, and not worth it for the average consumer considering the competition. It's just making Hynix Gold look that much better. This isn't what the Pro series customers wanted, Samsung...

    Oh well, RIP Pro line... Really disappointed.
