AnandTech Storage Bench - The Destroyer

The Destroyer is an extremely long test replicating the access patterns of very IO-intensive desktop usage. A detailed breakdown can be found in this article. Like real-world usage, the drives do get the occasional break that allows for some background garbage collection and flushing caches, but those idle times are limited to 25ms so that it doesn't take all week to run the test. These AnandTech Storage Bench (ATSB) tests do not involve running the actual applications that generated the workloads, so the scores are relatively insensitive to changes in CPU performance and RAM from our new testbed, but the jump to a newer version of Windows and the newer storage drivers can have an impact.
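For readers curious about the mechanics, the replay works roughly like the sketch below (a minimal illustration, not our actual test harness; the trace format, field names, and raw device access are all assumptions):

```python
import time

IDLE_CAP_S = 0.025  # idle gaps longer than 25 ms are truncated

def replay_trace(trace, device):
    """Replay a recorded I/O trace against an open block device.

    `trace` is assumed to be a list of (timestamp_s, offset, nbytes, is_write)
    tuples sorted by timestamp; `device` is a file object opened on the drive
    with buffering disabled.
    """
    prev_ts = None
    for ts, offset, nbytes, is_write in trace:
        if prev_ts is not None:
            # Preserve short idle periods so the drive can do background
            # garbage collection, but cap long gaps at 25 ms so the test
            # finishes in hours rather than weeks.
            time.sleep(min(ts - prev_ts, IDLE_CAP_S))
        device.seek(offset)
        if is_write:
            device.write(bytes(nbytes))
        else:
            device.read(nbytes)
        prev_ts = ts
```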

We quantify performance on this test by reporting the drive's average data throughput, the average latency of the I/O operations, and the total energy used by the drive over the course of the test.
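For clarity, those three summary numbers fall out of the raw logs roughly as follows (a sketch under assumed field names, not the benchmark's actual code):

```python
def summarize_run(ops, elapsed_s, power_samples_w, sample_interval_s):
    """Reduce raw logs to the three reported metrics.

    `ops` is assumed to be a list of (nbytes, latency_s) pairs, one per I/O;
    `power_samples_w` is a list of drive power readings in watts taken every
    `sample_interval_s` seconds; `elapsed_s` is the wall-clock test duration.
    """
    avg_data_rate_mbs = sum(n for n, _ in ops) / elapsed_s / 1e6    # MB/s
    avg_latency_ms = 1e3 * sum(lat for _, lat in ops) / len(ops)    # ms
    # Energy is power integrated over time: joules, converted to watt-hours.
    energy_wh = sum(power_samples_w) * sample_interval_s / 3600.0   # Wh
    return avg_data_rate_mbs, avg_latency_ms, energy_wh
```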

The blue lines indicate the PM981, the OEM version of the 970 EVO; the orange lines are the 970 EVO.

ATSB - The Destroyer (Data Rate)

The average data rates from the Samsung 970 EVO on The Destroyer are a slight step backwards from the Samsung PM981 OEM drive and from the 960 EVO. All of the TLC-based drives are still performing below even Samsung's older MLC-based NVMe drives, and of course the Intel Optane SSD. This year's Western Digital WD Black offers about the same performance as the 970 EVO.

ATSB - The Destroyer (Average Latency)
ATSB - The Destroyer (99th Percentile Latency)

Average and 99th percentile latencies for the 970 EVO are again very slightly worse than the PM981, but on these metrics the 960 EVO doesn't beat its replacement. The WD Black has notably better 99th percentile latency than the other flash-based SSDs.

ATSB - The Destroyer (Average Read Latency)
ATSB - The Destroyer (Average Write Latency)

There is a clear range of average read latency scores that make up the high-end NVMe market segment. The 970 EVO doesn't stand out from the other drives in that category. For average write latency, scores vary a lot more, and the 970 EVO outperforms its predecessor slightly but fails to match the very good score the PM981 obtained.

ATSB - The Destroyer (99th Percentile Read Latency)
ATSB - The Destroyer (99th Percentile Write Latency)

The 99th percentile read and write latency scores from the 970 EVO don't break new ground and mostly fail to match the PM981, though the differences aren't large enough to be a serious concern. The WD Black's notable QoS advantage is on the read side, where it is the only flash-based SSD to almost always keep read latency below 1ms.

ATSB - The Destroyer (Power)

We didn't have the opportunity to measure the power usage of the Samsung PM981 on The Destroyer, so this is our first look at the power draw of the Samsung Phoenix controller on this test. The situation isn't good. The 970 EVO uses twice as much energy as the WD Black does, despite both drives offering about the same level of performance on The Destroyer. The power efficiency of the 970 EVO seems to be a big step backwards from the previous generation and is not at all competitive.

Comments

  • cfenton - Tuesday, April 24, 2018

    I've been meaning to ask about this for a while, but why do you order the performance charts based on the 'empty' results? In most of my systems, the SSDs are ~70% full most of the time. Does performance only degrade significantly if they are 100% full? If not, it seems to me that the 'full' results would be more representative of the performance most users will see.
  • Billy Tallis - Tuesday, April 24, 2018

    At 70% full you're generally going to get performance closer to fresh out of the box than to 100% full. Performance drops steeply as the last bits of space are used up. At 70% full, you probably still have the full dynamic SLC cache size usable, and there's plenty of room for garbage collection and wear leveling.

    When it comes to manual overprovisioning to prevent full-drive performance degradation, I don't think I've ever seen someone recommend reserving more than 25% of the drive's usable space unless you're trying to abuse a consumer drive with a very heavy enterprise workload.
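
    For what it's worth, the arithmetic behind that kind of reservation is trivial; a minimal sketch (the helper name and the default fraction are made up for illustration):

    ```python
    def overprovisioned_partition_size(usable_bytes, reserve_fraction=0.10):
        """Partition size that leaves `reserve_fraction` of the drive's usable
        capacity unallocated as extra room for garbage collection."""
        if not 0.0 <= reserve_fraction <= 0.25:
            raise ValueError("reserving more than 25% is overkill for consumer use")
        return int(usable_bytes * (1.0 - reserve_fraction))

    # e.g. partition a 1 TB drive to leave 10% unallocated:
    # overprovisioned_partition_size(1_000_000_000_000) -> 900_000_000_000
    ```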
  • cfenton - Tuesday, April 24, 2018

    Thanks for the reply. That's really helpful to know. I didn't even think about the dynamic SLC cache.
  • imaheadcase - Tuesday, April 24, 2018

    So I'm wondering: I've got a small 8TB server I use for media/backup. While I know I'm limited by network bandwidth, would replacing the drives with SSDs make any impact at all?
  • Billy Tallis - Tuesday, April 24, 2018

    It would be quieter and use less power. For media archiving over GbE, the sequential performance of mechanical drives is adequate. Incremental backups may make more random accesses, and retrieving a subset of data from your backup archive can definitely benefit from solid state performance, but it's probably not something you do often enough for it to matter.

    Even with the large pile of SSDs I have on hand, my personal machines still back up to a home server with mechanical drives in RAID.
  • gigahertz20 - Tuesday, April 24, 2018

    @Billy Tallis Just out of curiosity, what backup software are you using?
  • enzotiger - Tuesday, April 24, 2018

    With the exception of sequential write, there are some significant gaps between your numbers and Samsung's specs. Any clue?
  • anactoraaron - Tuesday, April 24, 2018

    Honest question here. Which of these tests do more than just test the SLC cache? That's a big thing to test, as some of these other drives are MLC and won't slow down when used beyond any SLC caching.
  • RamGuy239 - Tuesday, April 24, 2018

    So these are sold and marketed with IEEE 1667 / Microsoft eDrive support from the get-go, unlike the Samsung 960 EVO and Pro, which had this promised but only received it at the end of their life cycle (in the latest firmware update).

    That's good and all. But does it really work? The current implementation on the Samsung 960 EVO and Pro has a major issue: it doesn't work when the disk is used as a boot drive. Samsung keeps claiming this is due to an NVMe module bug in most UEFI firmwares, and that it will require motherboard manufacturers to provide a UEFI firmware update including a fix.

    Whether this is indeed true is hard for me to say, but that's what Samsung themselves claim on their own support forums.

    All I know is that I can't get either my Samsung 960 EVO 1TB or my Samsung 960 Pro 1TB to use hardware encryption with BitLocker on Windows 10 when used as a boot drive, on either my Asus Maximus IX Apex or my Asus Maximus X Apex, both running the latest BIOS/UEFI firmware update.

    When used as a secondary drive hardware encryption works as intended.

    With this whole mess around BitLocker/IEEE 1667/Microsoft eDrive on the Samsung 960 EVO and Pro, how does it all fare with these new ones? Is it all indeed an issue with NVMe support in most UEFI firmwares, requiring fixed UEFI firmware releases from motherboard manufacturers, or do the 970 EVO and Pro suddenly work with BitLocker as a boot drive without new UEFI firmware releases?
  • Palorim12 - Tuesday, April 24, 2018

    Seems to be an issue with the BIOS firmware vendors like American Megatrends, Phoenix, etc., and Samsung has stated they are working with them to resolve it.
