Samsung SSD 845DC PRO

Samsung SSD 845DC PRO Specifications

Capacity                   400GB           800GB
Controller                 Samsung MDX
NAND                       Samsung 128Gbit 24-layer 40nm MLC V-NAND
Sequential Read            530MB/s         530MB/s
Sequential Write           460MB/s         460MB/s
4KB Random Read            92K IOPS        92K IOPS
4KB Random Write           50K IOPS        51K IOPS
Idle Power                 1.0W            1.0W
Load Power (Read/Write)    1.7W / 3.1W     1.7W / 3.3W
Endurance (TBW)            7,300TB         14,600TB
Endurance (DWPD)           10 DWPD
Warranty                   Five years
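
As a sanity check, the TBW ratings are exactly what the 10 DWPD figure works out to over the five-year warranty. A quick illustrative calculation (the arithmetic is mine, not from Samsung's spec sheet):

```python
# Endurance cross-check: TBW = capacity x drive-writes-per-day x warranty days.
for capacity_gb, rated_tbw in [(400, 7_300), (800, 14_600)]:
    tbw = capacity_gb * 10 * 5 * 365 / 1000  # GB written -> TB written
    print(f"{capacity_gb}GB: {tbw:,.0f}TB (rated {rated_tbw:,}TB)")
# 400GB: 7,300TB (rated 7,300TB)
# 800GB: 14,600TB (rated 14,600TB)
```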

Surprisingly, the 845DC PRO opts for the older MDX controller that was used in the SSD 840 and 840 Pro. Architecturally the MDX and MEX are the same since both are based on three ARM Cortex-R4 cores; the MEX just runs at a higher clock speed (400MHz vs 300MHz). I suspect the MEX controller does not offer a major benefit for MLC NAND based SSDs because there is much less NAND management to do, but with TLC the extra processing power is certainly useful given the amount of ECC and management TLC requires.

The 845DC PRO is only available in two capacities: 400GB and 800GB. I have heard that Samsung plans to add a higher capacity version (1,600GB?) later on, but for the time being the 845DC PRO tops out at 800GB. I suspect that going above 1TiB of raw NAND requires a controller update, which would explain why higher capacities are not available yet. In the end, the 845DC PRO is using silicon that is now two years old, which brings some design limitations with it.

Similar to the 845DC EVO, the PRO has capacitors that offer data protection in case of a power loss.

The 845DC PRO uses Samsung's first generation V-NAND, which is a 24-layer design with a die capacity of 128Gbit. The part numbers of the first and second generation are almost identical, and the only way to distinguish the two is to look at the third, fourth and fifth characters, which reveal the number of dies per package as well as the total capacity of the package. Our 400GB sample has four and our 800GB sample has eight 8-die packages on the PCB, so the raw NAND capacities work out to 512GiB and 1,024GiB respectively, with over-provisioning at 28%.
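
For reference, here is how those numbers work out for the 400GB model; this is just the arithmetic from the paragraph above spelled out (a 128Gbit die is 16GiB):

```python
# Raw capacity and over-provisioning for the 400GB model:
# 4 packages x 8 dies x 16GiB (128Gbit) per die = 512GiB of raw NAND.
raw_gb = 4 * 8 * 16 * 2**30 / 10**9  # 512GiB expressed in decimal GB
op = 1 - 400 / raw_gb                # 400GB of user capacity
print(f"raw NAND: {raw_gb:.1f}GB, over-provisioning: {op:.1%}")
# -> raw NAND: 549.8GB, over-provisioning: 27.2% (the ~28% quoted above)
```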

I am not going to cover V-NAND in detail here as I did that in the 850 Pro review, and architecturally the first generation V-NAND is no different; it simply has 24 layers instead of 32. The first generation is an older, more mature process and thus more suitable for enterprise SSDs. I measured the endurance of the first generation V-NAND to be 40,000 P/E cycles, whereas the second generation V-NAND in the 850 Pro is only rated at 6,000 P/E cycles. For the record, you would need either eMLC or SLC to get 40,000 P/E cycles with 2D NAND, but V-NAND does that while being normal MLC. The benefit over eMLC is performance, as eMLC sacrifices program and erase latencies for higher endurance, and the eMLC manufacturing process is also more complicated than regular MLC (although I am pretty sure that V-NAND is still more complicated and hence more expensive).
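
Those P/E cycle figures also put the TBW rating in context. A rough sketch for the 400GB model (the write amplification headroom here is my derivation, not a Samsung specification):

```python
# NAND-level endurance budget for the 400GB model at 40,000 P/E cycles.
raw_gb = 512 * 2**30 / 10**9             # ~549.8GB of raw NAND
nand_writes_tb = raw_gb * 40_000 / 1000  # ~21,990TB of total NAND writes
implied_waf = nand_writes_tb / 7_300     # vs. the rated 7,300TB of host writes
print(f"NAND write budget: ~{nand_writes_tb:,.0f}TB, "
      f"i.e. ~{implied_waf:.1f}x write amplification headroom")
```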

Comments

  • Laststop311 - Wednesday, September 3, 2014

    Wish the consumer M.2 drives would be released already. A Samsung SM951 with a PCIe Gen 3.0 x4 controller would be nice to be able to buy.
  • tuxRoller - Wednesday, September 3, 2014

    All chart titles are the same on page five (performance consistency average iops).
  • tuxRoller - Wednesday, September 3, 2014

    Actually, all the charts carry the same title, but different data.
  • Kristian Vättö - Thursday, September 4, 2014

    The titles are basically "name of the SSD and its capacity - 4KB Random Write (QD32) Performance". The name of the SSD should change when you select a different SSD but every graph has the "4KB Random Write (QD32) Performance" attached to it.
  • CountDown_0 - Wednesday, September 3, 2014

    Hi Kristian,
    a small suggestion: when talking about worst case IOPS you write that "The blue dots in the graphs stand for average IOPS just like before, but the red dots show the worst-case IOPS for every second." Ok, but I'd write it in the graph legend instead.
  • Kristian Vättö - Thursday, September 4, 2014

    It's something I've thought about and I can certainly consider adding in the future.
  • rossjudson - Thursday, September 4, 2014

    I'd suggest the following. Use FIO to do your benchmarking. It supports generating and measuring just about every load you'd care about. You can also use it in a distributed mode, so you can run as many tests as you have hardware to support, at the same time.

    Second, don't use logarithmic axes on your charts. The drives you describe here take *huge* dropoffs in performance after their caches fill up and they have to start "working for a living". You are masking this performance drop by not using linear measures.

    Third, divide up your time axis into (say) 60-second chunks and show the min/max/95/99/99.9/99.99 latency marks. Most enterprise customers care about sustained performance and worst-case performance. A really slow IO is going to hold up a bunch of other stuff. There are two ways out of that: speculative IO (wait a little while for success, then issue another IO to another device), or manage and interleave background tasks (defrag/garbage collect) very carefully in the storage device. Better yet, don't have the problem at all. The marketing stats on these drives have nothing to do with the performance they exhibit when they are subject to non-stop, mixed loads. (A sketch of this bucketing appears after the comments.)

    Unless you are a vendor that constantly tests precisely those loads, and ensures they work, stay working, and stay tight on latency.
  • SuperVeloce - Thursday, September 4, 2014

    Great review... but dropdown menu for graphs annoys me. ugh
  • Kristian Vättö - Thursday, September 4, 2014

    What do you find annoying in them? I can certainly consider alternative options if you can suggest any.
  • grebic - Thursday, October 2, 2014

    Hi Kristian. I need to bother you with a question: do you think it is worth it to stick this SSD in a NAS? I have a ''fanless'' QNAP HS-210, a 2-bay small form factor NAS, currently without drives, and I want to go for SSDs to get complete zero noise and ''resistance'' over time. But I have not forgotten what was mentioned here (''no wear leveling, no garbage collection''), so I'm wondering if performance will decrease dramatically over time. I'm thinking that the NAS OS does not know how to perform such ''treatments'' on SSDs to maintain their performance, no? It's not my intention to run operations upon operations on the NAS, but I would like to know that my data will be ''safe'' and easily ''accessible'' over a long time. Your opinion would be much appreciated. Thanks, Cristian
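
As a footnote to rossjudson's suggestion above, here is a minimal sketch of the 60-second percentile bucketing he describes. It assumes a simple two-column CSV of (time in ms, completion latency in µs); fio's actual latency-log columns vary by version, so the parsing would need adjusting:

```python
import csv
from collections import defaultdict

import numpy as np

WINDOW_MS = 60 * 1000  # 60-second chunks

# Group latency samples by 60-second window. "latency_log.csv" is a
# hypothetical export with rows of (time_ms, latency_us).
buckets = defaultdict(list)
with open("latency_log.csv") as f:
    for time_ms, lat_us in csv.reader(f):
        buckets[int(time_ms) // WINDOW_MS].append(float(lat_us))

# Report the min/max/95/99/99.9/99.99 marks per window.
for window, lats in sorted(buckets.items()):
    lats = np.asarray(lats)
    p95, p99, p999, p9999 = np.percentile(lats, [95, 99, 99.9, 99.99])
    print(f"t={window * 60:>5}s  min={lats.min():.0f}us  p95={p95:.0f}us  "
          f"p99={p99:.0f}us  p99.9={p999:.0f}us  p99.99={p9999:.0f}us  "
          f"max={lats.max():.0f}us")
```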
