Samsung SSD 845DC PRO

Samsung SSD 845DC PRO
Capacity                     400GB             800GB
Controller                   Samsung MDX
NAND                         Samsung 128Gbit 24-layer 40nm MLC V-NAND
Sequential Read              530MB/s           530MB/s
Sequential Write             460MB/s           460MB/s
4KB Random Read              92K IOPS          92K IOPS
4KB Random Write             50K IOPS          51K IOPS
Idle Power                   1.0W              1.0W
Load Power (Read/Write)      1.7W / 3.1W       1.7W / 3.3W
Endurance (TBW)              7,300TB           14,600TB
Endurance (DWPD)             10 DWPD           10 DWPD
Warranty                     Five years
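
The TBW and DWPD ratings describe the same endurance figure in two ways: 10 full drive writes per day, sustained over the five-year warranty, works out to the quoted TBW numbers. A minimal sketch of that arithmetic (my own back-of-the-envelope check, not Samsung's published derivation):

    # Convert the 10 DWPD rating into total bytes written (TBW) over the
    # five-year warranty. Assumes one "drive write" equals the full user
    # capacity and uses decimal units throughout.
    for capacity_gb in (400, 800):
        dwpd = 10                        # drive writes per day
        warranty_days = 5 * 365          # five-year warranty
        tbw = capacity_gb * dwpd * warranty_days / 1000   # GB -> TB
        print(f"{capacity_gb}GB model: {tbw:,.0f}TB")
    # Prints 7,300TB and 14,600TB, matching the table above.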

Surprisingly, the 845DC PRO opts for the older MDX controller that was used in the SSD 840 and 840 Pro. Architecturally the MDX and MEX are the same, as both are built around three ARM Cortex-R4 cores; the MEX simply runs at a higher clock speed (400MHz vs 300MHz). I suspect the MEX controller does not offer a major benefit for MLC NAND based SSDs because there is much less NAND management to do, but with TLC the extra processing power is certainly useful given the amount of ECC and management that TLC needs.

The 845DC PRO is only available in two capacities: 400GB and 800GB. I heard Samsung has plans to add a higher capacity version (1,600GB?) later on but for the time being the 845DC PRO is limited to just 800GB. I suspect that going above 1TiB of raw NAND requires a controller update, which would explain why higher capacities are not available yet. In the end, the 845DC PRO is using silicon that is now two years old, which adds some design limitations.
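
To put that in perspective, here is a quick sketch of the raw NAND a hypothetical 1,600GB model would need (my own arithmetic, assuming the same over-provisioning ratio as the current models and the 1024GiB raw NAND of the 800GB drive detailed below):

    # The 800GB model carries 1024GiB (1TiB) of raw NAND, so a 1,600GB model
    # at the same over-provisioning ratio would need roughly twice that.
    raw_800gb_gib = 1024
    raw_1600gb_gib = raw_800gb_gib * 2
    print(f"~{raw_1600gb_gib / 1024:.0f}TiB of raw NAND for a 1,600GB model")
    # Prints ~2TiB, i.e. past the presumed 1TiB limit of the current controller.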

Similar to the 845DC EVO, the PRO has capacitors that offer data protection in case of a power loss.

The 845DC PRO uses Samsung's first generation V-NAND, which is a 24-layer design with a die capacity of 128Gbit. The part numbers of the first and second generation are almost identical, and the only way to distinguish the two is to look at the third, fourth and fifth characters, which reveal the number of dies per package as well as the total capacity of the package. Our 400GB sample has four 8-die packages on the PCB and our 800GB sample has eight, so the raw NAND capacities work out to 512GiB and 1024GiB respectively, with over-provisioning at roughly 28%.
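
The over-provisioning figure is easy to verify with a quick sketch (my own arithmetic for the 400GB model, using 16GiB per 128Gbit die):

    # Over-provisioning of the 400GB model: four 8-die packages of 128Gbit
    # (16GiB) dies give 512GiB of raw NAND, of which 400GB is user-accessible.
    die_gib = 128 / 8                   # 128Gbit die = 16GiB
    raw_gib = 4 * 8 * die_gib           # four 8-die packages = 512GiB
    raw_gb = raw_gib * 2**30 / 1e9      # ~549.8GB in decimal units
    user_gb = 400
    op = (raw_gb - user_gb) / raw_gb
    print(f"raw = {raw_gb:.1f}GB, over-provisioning = {op:.1%}")
    # Prints roughly 27%, in line with the ~28% quoted above.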

I am not going to cover V-NAND in detail here as I did that in the 850 Pro review, and architecturally the first generation V-NAND is no different: it is just 24 layers instead of 32. The first generation is an older, more mature process and thus more suitable for enterprise SSDs. I measured the endurance of the first generation V-NAND to be 40,000 P/E cycles, whereas the second generation V-NAND in the 850 Pro is only rated at 6,000 P/E cycles. For the record, you would need either eMLC or SLC to get 40,000 P/E cycles with 2D NAND, but V-NAND does that while being normal MLC. The benefit over eMLC is performance, as eMLC sacrifices program and erase latencies for higher endurance, and the eMLC manufacturing process is also more complicated than regular MLC (although I am pretty sure that V-NAND is still more complicated and hence more expensive).
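
To put the 40,000 P/E cycle figure in context, here is a rough write budget for the 400GB model; this is my own back-of-the-envelope math, assuming perfectly even wear and a write amplification of 1, neither of which holds in practice:

    # Idealized NAND write budget of the 400GB model: 512GiB of raw NAND,
    # each cell surviving 40,000 program/erase cycles, no write amplification.
    raw_tib = 512 / 1024                                     # 512GiB = 0.5TiB
    pe_cycles = 40_000
    nand_writes_tb = raw_tib * pe_cycles * 1.099511627776    # TiB -> TB
    print(f"~{nand_writes_tb:,.0f}TB of raw NAND writes vs the 7,300TB rating")
    # The roughly 3x gap leaves headroom for real-world write amplification.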

Comments

  • LiviuTM - Wednesday, September 3, 2014

    Great article, Kristian.
    I enjoyed finding more about latency, IOPS and throughput and the relationship between them.

    Keep up the good work :)
  • Chapbass - Wednesday, September 3, 2014

    I would like to echo this statement. Some of the heavy technical stuff makes my eyes glaze over at times, but this was so well written that I really got into it. Awesome article, Kristian.
  • romrunning - Wednesday, September 3, 2014

    "The only difference between the 845DC EVO and PM853T is the firmware and the PM853T is geared more towards sustained workloads, which results in slightly higher random write speed (15K IOPS vs 14K IOPS)."

    Chart for PM853T shows 14K for Random Writes, so likely needs to be corrected.
  • Kristian Vättö - Wednesday, September 3, 2014

    Good catch, I was waiting for Samsung to send me the full data sheet for the PM853T but I never got it, so I accidentally left the 845DC EVO specs there. I've now updated it with the specs I have and with some additional commentary.
  • JellyRoll - Wednesday, September 3, 2014

    The only issue with calculating performance as listed on the first page is that it assumes that the SSD works perfectly in all aspects. No ECC, no wear leveling, no garbage collection. None of these are true. Even without those factors no SSD will ever behave absolutely perfectly in every aspect at all times....anything but. That is why there is so much variation between vendors. It would be impossible to calculate performance using that method with an SSD in the real world.
  • Kristian Vättö - Wednesday, September 3, 2014

    Of course real world is always different because the transfer size and queue depth are constantly changing and no SSD behaves perfectly. I mentioned that it is a hypothetical SSD and obviously no real drive would have a constant latency of 1ms or perfect transfer size scaling. The goal was to demonstrate how the metrics are related and it is easier with concrete, albeit hypothetical, examples.
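
    As a concrete sketch of that relationship (illustrative numbers only, not measurements of any real drive): with a fixed latency per I/O, IOPS is simply queue depth divided by latency, and throughput is IOPS multiplied by the transfer size.

        # Hypothetical drive with a constant 1ms latency per I/O (an idealization).
        latency_s = 0.001                              # 1ms per I/O
        queue_depth = 1
        transfer_kib = 4                               # 4KB transfers
        iops = queue_depth / latency_s                 # 1,000 IOPS at QD1
        throughput_mib_s = iops * transfer_kib / 1024  # ~3.9MB/s
        print(f"{iops:.0f} IOPS, {throughput_mib_s:.1f}MB/s")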
  • hrrmph - Wednesday, September 3, 2014

    I think you nailed it on the theoretical stuff (the relationships between the parameters), and presented it well (easy to understand).

    This has been bugging me for a while too, although I won't pretend to have gotten it figured out - it's just that I keep getting pickier in my lab notes and shopping specs about things like queue depth for a given size transfer for the specified performance. Leave any condition or parameter out, and the specs seem kinda useless. Leave everything in, and then I wonder which ones are pertinent to my usage scenarios.

    Now, your explanation will have me questioning the manufacturer's motives for the selection of each unit of measure chosen for each listed spec. Who says this field of endeavor doesn't lead to obsession? :)

    If any of the specs *are* pertinent to my usage scenarios, then I wonder which ones are *most* pertinent for which of my usage scenarios (laptop, versus general desktop, versus high powered workstation).

    Any relationships that you discover or methods that you develop for the charts to help explain it better are most welcome. I know this is an over-simplification, but I would guess that most workstation users want to know:

    - Is my drive limiting my performance (why did that operation stutter or lag, or why does it take so long - was it the drive)?

    - Is there anything I can do about it that I can afford (mainly, what can I replace it with - new controller card, newer, better designed SSDs, better racks, cables, etc.)?

    I am pleased that you are helping us out by further dissecting performance consistency / variation. I suspect that although SSDs are an order of magnitude (or more) better than HDDs at many tasks, the "devil is in the details," and that there is a reason that many SSD equipped machines still "hiccup," fairly frequently (although not nearly as often or as bad as HDD equipped machines). I also suspect that drive sub-systems are still one of the most common weaker links that is responsible for such hiccups.

    I am particularly interested in the (usually) brutally difficult small file size tests. These tests seem to be able to bring even the best of machines to a crawl, and any device (ie: SSD) that can help performance on those tests seems to be very likely to be noticeable to the end user.

    If you do indeed find "let downs" in performance consistency (or any other drive related performance spec), then maybe the manufacturers will work to improve upon those weaknesses until we get "buttery smooth" performance...

    ...or at least until we can definitively start looking at other sub-systems (compute, memory, I/O, etc.) to solve the hiccups.

    My introductory courses were on 8086 computers running DOS. I don't remember them often stopping to think about anything... until I started "hitting" the disks heavily. The more things change, the more I suspect they stay the same ;)

    So I keep allocating more of my budget to disks and disk sub-systems than anything else. AT's articles are thus *very* helpful in "aiming" that budget and I hope you have some revelations for us soon that show which products are worth the money.
  • iwod - Wednesday, September 3, 2014

    I have been wondering for a bit, why hasn't enterprise switched to some other interconnect? Surely they could do PCI-E and have it directly connected to the CPU (assuming they don't need those lanes for GPUs or other things).

    And I have been seeing Samsung being extremely aggressive in the web hosting market, where Intel is lagging behind.
  • FunBunny2 - Wednesday, September 3, 2014

    "real" enterprise has been running SAS and FibreChannel for rather a long time. InfiniBand every now and again. To the extent that enterprise buys X,000 drives to parcel out to offices and such, then that's where the SATA drives go. But that's not really enterprise storage. Real enterprise SSD/flash/etc. doesn't have a list price (well, only in the sense that your car did) and I'd wager that not one of the enterprise SSD/flash companies (and no, Intel doesn't count) has ever offered up a sample to AnandTech.
  • rossjudson - Thursday, September 4, 2014

    Fusion-io (and others) have been doing precisely this (PCIe connected flash storage) for a number of years. They are currently producing modules with up to 6TB of storage.
