In our last SandForce SSD roundup I talked about how undesirable the consumer SSD market is, at least for companies that don't produce their own controllers and/or NAND. There's a downward trend in NAND and SSD pricing, which unfortunately squeezes drive manufacturers as they compete for marketshare. The shrinking margins in the consumer SSD space will ultimately drive companies out of the business, consolidating power in those that are content to operate on slim margins. The price crunch has other side effects as well, such as insufficient validation testing.

Public companies are under even greater pressure to maintain high profit margins. Investors don't care about a good product; they care about good returns. What is a public SSD manufacturer like OCZ to do? Go after the enterprise market, of course.

OCZ has quietly addressed the enterprise SSD space for a while now. Today you can custom order an OCZ Deneva 2 SSD, an enterprise-focused SF-2000 based solution. OCZ's enterprise drives are fully customizable down to the controller, firmware and NAND type on-board. If you want an SF-2000 drive with SAS support and SLC NAND, OCZ will build it for you.

In the enterprise segment where 1U and 2U servers are common, PCI Express SSDs are very attractive. You may not always have a ton of 2.5" drive bays, but there's usually at least one high-bandwidth PCIe slot free. The RevoDrive family of PCIe SSDs was targeted at the high-end desktop or workstation market, but for an enterprise-specific solution OCZ has its Z-Drive line.

We introduced the Z-Drive R4 in our Computex coverage earlier this year - it's a beast. The Z-Drive R4 is a multi-controller PCIe solution that uses either 4 or 8 SF-2000 controllers behind a SAS-to-PCIe 2.0 x8 bridge. The breakdown is as follows:

OCZ Z-Drive R4 Lineup

                          CM84                         CM88                         RM84                         RM88
Capacities                300/600/1200GB               800/1600/3200GB              300/600/1200GB               800/1600/3200GB
NAND                      MLC                          MLC                          MLC                          MLC
Interface                 PCIe Gen 2 x8                PCIe Gen 2 x8                PCIe Gen 2 x8                PCIe Gen 2 x8
Form Factor               Half Height PCIe             Full Height PCIe             Half Height PCIe             Full Height PCIe
Dimensions (L x W x H)    168.55 x 68.91 x 17.14 mm    242 x 98.4 x 17.14 mm        168.55 x 68.91 x 17.14 mm    242 x 98.4 x 17.14 mm
SSD Controllers           4 x SF-2282                  8 x SF-2282                  4 x SF-2582                  8 x SF-2582
Power Failure Protection  N                            N                            Y                            Y
Max Read                  2000MB/s                     2800MB/s                     2000MB/s                     2800MB/s
Max Write                 2000MB/s                     2800MB/s                     2000MB/s                     2800MB/s
Max Random Read           250K IOPS                    410K IOPS                    250K IOPS                    410K IOPS
Max Random Write          160K IOPS                    275K IOPS                    160K IOPS                    275K IOPS

The xM84s are half-height solutions with four controllers, while the xM88s are full-height with eight controllers. The C-series use SF-2282 controllers while the R-series use SF-2582; the main difference is support for power failure protection. The R-series boards have an array of capacitors that can store enough charge to flush all pending writes to the NAND arrays in the event of a power failure. The C-series boards do not have this feature.

As I mentioned earlier, OCZ also offers customized solutions; the table above simply highlights the standard configurations OCZ builds.

For today's review OCZ sent us a 1.6TB Z-Drive R4 CM88. We have a preproduction board with a number of stability and compatibility issues; OCZ tells us these problems will be addressed in the final version of the drives, due to ship in the coming weeks. OCZ expects pricing on this board to be somewhere in the $6 - $7/GB range depending on configuration. Doing the math, that works out to anywhere between $9,600 and $11,200 for this single SSD. OCZ typically sells SF-2281 based SSDs at around $2/GB, so even accounting for the extra controllers on-board there should be a hefty amount of profit for OCZ in the selling price of these Z-Drives.

As with the RevoDrive X2 models, the Z-Drive R4 CM88 uses two PCBs to accommodate all of its controllers. Each PCB is home to four SF-2282 controllers and 64 Intel 25nm MLC NAND devices (8 controllers, 128 devices total). Each NAND device has two 8GB die inside, which works out to 2048GB of NAND on-board. This is an absolutely insane amount of NAND for a single drive. Remember, each 8GB MLC NAND die (25nm) is 167mm2, which means this Z-Drive R4 has 42,752mm2 of 25nm silicon on-board. A single 300mm wafer only has a surface area of 70,685mm2 (even less is usable), which means it takes more than half of a 300mm 25nm MLC NAND wafer to supply the flash for just one of these drives. Roughly 27% of the NAND is set aside as spare area, exposing 1490GiB to the OS.
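For the curious, here's a quick back-of-the-envelope sketch in Python that reproduces the capacity, silicon, and pricing arithmetic above. All inputs are figures quoted in this review; the script itself is just illustrative.

```python
import math

# Figures from the review: 8 controllers, 128 NAND packages,
# two 8GB (GiB) 25nm MLC die per package, 167mm^2 per die
packages = 128
die_per_package = 2
die_capacity_gib = 8
die_area_mm2 = 167.0

total_nand_gib = packages * die_per_package * die_capacity_gib    # 2048
total_silicon_mm2 = packages * die_per_package * die_area_mm2     # 42752

wafer_area_mm2 = math.pi * (300 / 2) ** 2                         # ~70686 for a 300mm wafer

exposed_gib = 1490
spare_fraction = 1 - exposed_gib / total_nand_gib                 # ~27% spare area

price_low, price_high = 6 * 1600, 7 * 1600                        # $6-7/GB x 1600GB

print(f"{total_nand_gib} GiB of NAND, {total_silicon_mm2:.0f} mm^2 of silicon")
print(f"{total_silicon_mm2 / wafer_area_mm2:.0%} of a 300mm wafer per drive")
print(f"Spare area: {spare_fraction:.0%}; price: ${price_low} - ${price_high}")
```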

Thanks to the eight SF-2282 controllers and the huge amount of NAND in close proximity, OCZ specifies 100 CFM of airflow to properly cool the Z-Drive R4. This is clearly a solution for a rack-mounted server.


It's OCZ branded but this is a Marvell SAS controller - the same driver works on the RevoDrive 3 X2 and the Z-Drive R4

OCZ continues to use its VCA 2.0 architecture on the Z-Drive R4. Details are still vague, but OCZ claims to have written its own driver and firmware for the Marvell SAS controller on the Z-Drive R4, allowing it to redirect IOs based on current controller queue depths rather than using a dumb RAID stripe. The driver accumulates IOs and redistributes them across the drive's controller array, to some degree, dynamically. OCZ's VCA also allows TRIM to be passed to the array, although Microsoft's OSes won't pass TRIM to SCSI/SAS drives. You can use OCZ's Toolbox to secure erase the drive, but there's no real-time TRIM available; this is a Microsoft limitation that impacts all SAS based drives.
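OCZ hasn't published the internals of VCA 2.0, so as a purely conceptual illustration, the Python sketch below contrasts a dumb LBA-based RAID stripe with the kind of queue-depth-aware redirection OCZ describes. The class and function names are hypothetical and don't reflect OCZ's actual driver.

```python
import random

class Controller:
    """Stand-in for one SF-2282 sitting behind the SAS bridge."""
    def __init__(self, cid):
        self.cid = cid
        self.queue_depth = 0        # outstanding IOs on this controller

    def submit(self, io):
        self.queue_depth += 1       # a real completion path would decrement this

def stripe_dispatch(io, controllers, stripe_size=64 * 1024):
    # Dumb RAID-0: the target is fixed by the LBA, however busy that controller is
    target = controllers[(io["lba"] // stripe_size) % len(controllers)]
    target.submit(io)

def queue_aware_dispatch(io, controllers):
    # VCA-style (conceptually): route each IO to the shallowest queue
    target = min(controllers, key=lambda c: c.queue_depth)
    target.submit(io)

# Example: eight controllers, a burst of random 4KB reads
controllers = [Controller(i) for i in range(8)]
for _ in range(1000):
    io = {"lba": random.randrange(0, 1 << 32), "size": 4096}
    queue_aware_dispatch(io, controllers)
print([c.queue_depth for c in controllers])   # evenly leveled: 125 each
```

With no completions modeled the queue-aware dispatcher simply levels the load; the point is only that the target controller is chosen by current queue depth rather than being fixed by the LBA.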

The Test

This is going to be a somewhat disappointing set of performance graphs, as this is our first real foray into the world of enterprise SSD testing. You will see results from the RevoDrive 3 X2 as well as a single Vertex 3 and Intel's X25-E; however, we have no other high-end PCIe SSDs at our disposal for comparison. We put in a request to FusionIO for a competing solution, but it appears to have fallen on deaf ears. We will use this review to begin assembling our enterprise SSD data, and hopefully in the coming weeks and months we'll build a set of results equivalent to what we have in the consumer space.

We also had to run the Z-Drive R4 on our old X58 testbed instead of our H67 system, as the drive wouldn't complete any tests on the H67 platform. OCZ attributes this to an issue with the preproduction Z-Drive boards, which it says will be corrected by the time mass production begins.

CPU: Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled) - for Enterprise Bench
     Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard: Intel DX58SO (Intel X58) / Intel H67 motherboard
Chipset: Intel X58 / Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64


Comments

  • caliche - Wednesday, September 28, 2011

    I am sure he is referring to the previous versions of the Z-Drive, which is all you can use as an indicator.

    I am an enterprise customer. Dell R710s and two Z-Drive R2 M84 512GB models, one in each. I have had to RMA one of them once, and the other is on its second RMA. They are super fast when they work, but three failures across two devices in less than a year is not production ready. We are using them in benchmarking servers running Red Hat Enterprise 5 for database stores, mostly read only to break other pieces of software talking to it. Very low writes.

    But here is the thing. When they power on, one or more of the four RAID pieces is "gone". This is just the on-board software on the SSD board itself, no OS, no I/O on it at all besides the power-up RAID confidence check. Power on the server and it works one day; the next day the controller on the card says a piece is missing. That's not acceptable when you are trying to get things done.

    In a perfect world, you have redundant and distributed everything with spare capacity and this is not a factor. But then you start looking at dealing with these failures and you have to ask yourself: is your time better spent screwing around with an RMA process and rebuilds, or optimizing your environment?
  • ypsylon - Thursday, September 29, 2011

    Nobody in the right frame of mind is using SSDs in the enterprise segment (I'm not even interested in them as consumer drives, but that is not the issue here). SSDs are just as unreliable as normal HDDs at a ridiculous price point. You can lose all of your data much quicker than from a normal HDD. RAID arrays built from standard HDDs are just as fast as 1 or 2 "uber" SSDs and cost a fraction of an SSD setup (often even including the cost of the RAID controller itself). Also, nobody runs large arrays in RAID0 (except maybe video processing); RAID0 is pretty much non-existent in serious storage applications. As a backup I much prefer another HDD array to an unreliable, impossible-to-test, super-duper expensive SSD.

    You can't test NAND reliability. That is the biggest problem with SSDs in a business-class environment. Because of that, SSDs will wither and die in the next 5-10 years. SSDs are not good enough for industry; if you can't hold on to the big storage market then no matter how good something is, it will die. Huge corporate customers are the key to staying alive in the storage market.
  • Zan Lynx - Thursday, September 29, 2011

    You are so, so wrong.

    Enterprises are loving SSDs and are buying piles of them.

    SSDs are the best thing since sliced bread if you run a database server.

    For one thing, the minimum latency of a PCIe SSD 4K read is almost 1,000 times less than a 4K read off a 15K SAS drive. The drive arrays don't even start to close the performance gap until well over 100 drives, and even then the drive array cannot match the minimum latency. It can only match the performance in parallel operations.

    If you have a lot of operations that work at queue depth of 1, the SSD will win every time, no matter how large the disk array.
  • leonzio666 - Wednesday, November 2, 2011

    Bear in mind though, that enterprises (real heavyweights) probably prefer something like Fusion-io ioDrives, which btw are the only SSDs running in IBM driven blade servers. With speeds up to 3 Gb/s and over 320k IOPS it's not surprising they cost ca. $20k per unit :D So it's not true that SSDs in general are not good for the enterprise segment. Also, and this is hot - these SSDs use SLC NAND...
  • MCS7 - Thursday, September 29, 2011

    I remember Anand doing a VOODOO 2 card review (VIDEO) way way way back at the turn of the millennium! Oh boy..we are getting OLD....lol take care all
  • Googer - Thursday, September 29, 2011

    Statistics for CPU usage would have been handy as some storage devices have greater demands for the CPU than others. Even between various HDD makes, CPU use varies.
  • alpha754293 - Thursday, September 29, 2011

    Were you able to reproduce the SF-2xxxx BSOD issue with this? or is it limited to just the SF-2281?
