Introduction

One of the main concerns about buying an SSD has always been the limited capacity compared to traditional hard drives. Nowadays we have hard drives as big as 4TB, whereas most SATA SSDs top out at 512GB. Of course, SATA SSDs are usually 2.5” and standard 2.5” hard drives don’t offer capacities like 4TB either; the biggest we have at the moment is 1TB. The real issue, however, has been price. If we go a year back in time, you had to fork out around $1000 for a 512GB SSD. Given that a 500GB hard drive could be bought for a fraction of that, there weren’t many who were willing to pay such prices for a spacious SSD. In only a year, NAND prices have dropped dramatically and it’s not unusual to see 512GB SSDs selling for less than $400. While $400 is still a lot of money for one component, it’s somewhat justifiable if you need one big drive instead of a multi-drive setup (e.g. in a laptop).

The decrease in NAND prices has not only boosted SSD sales but also opened the market to bigger capacities. Given the current price per GB, a 1TB SSD should cost around $1000, the same price a 512GB SSD cost a year ago. But there is one problem: the 128Gb 20nm IMFT die isn’t due until 2013. Most consumer-grade controllers don’t support more than a total of 64 NAND dies, i.e. eight NAND dies per channel on a typical eight-channel design. With a 64Gb die, the maximum capacity for most controllers thus works out to 512GB. There are exceptions, such as Intel’s SSD 320: it uses Intel’s in-house 10-channel controller, which allows greater capacities of up to 640GB (of which 600GB is user accessible). Samsung's SSD 830 also supports greater capacities (Apple is offering 768GB in the Retina MacBook Pro), but at least for now that model is OEM only.
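
The die-count limit is easy to see as plain arithmetic. Below is a minimal sketch of that math, assuming the eight-channel layout described above; the function and parameter names are ours, purely for illustration:

```python
# Illustrative capacity arithmetic only -- names and layout are assumptions.
GBIT_PER_GBYTE = 8

def max_capacity_gb(channels: int, dies_per_channel: int, die_density_gbit: int) -> int:
    """Maximum raw NAND capacity (GB) a given controller layout can address."""
    total_dies = channels * dies_per_channel
    return total_dies * die_density_gbit // GBIT_PER_GBYTE

print(max_capacity_gb(8, 8, 64))    # 512  -- typical 8-channel controller, 64Gb dies
print(max_capacity_gb(10, 8, 64))   # 640  -- Intel's 10-channel SSD 320 design
print(max_capacity_gb(8, 8, 128))   # 1024 -- same 8-channel controller, 128Gb dies
```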

The reason behind this limitation is simple: with twice as much NAND, you have twice as many pages to track. A 512GB SSD with 25nm NAND already has 64 million pages. That is a lot of data for the controller to sort through, and you need a fast controller to address that many pages without a performance hit. 128Gb 20nm IMFT NAND will double the page size from 8KB to 16KB, which allows 1024GB of NAND to be installed while keeping the page count the same as before. The increase in page size is also why it takes a bit longer for SSD manufacturers to adopt 128Gb NAND dies. You need to tweak the firmware to comply with the new page and block sizes, as well as with changed program and erase times. Remember that a page is the smallest amount of data you can write. When you increase the page size, the handling of small IOs (those smaller than the page size) in particular needs to be reworked: if you’re writing a 4KB transfer, for example, you don’t want to end up programming a full 16KB page or write amplification will go through the roof. Block size will also double, which can have an impact on garbage collection and wear leveling (you now need to erase twice as much data as before).
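
The page-count and write amplification figures can be sanity checked with quick arithmetic. The sketch below uses a deliberately naive model (helper names are ours) in which any IO smaller than a page still programs a full page:

```python
KB, GB = 1024, 1024**3

def page_count(capacity_bytes: int, page_size_bytes: int) -> int:
    return capacity_bytes // page_size_bytes

print(page_count(512 * GB, 8 * KB))    # 67108864 -- the "64 million" (64Mi) pages above
print(page_count(1024 * GB, 16 * KB))  # 67108864 -- double the NAND, same page count

def naive_write_amplification(io_size: int, page_size: int) -> float:
    """WA if an IO smaller than a page still programs a whole page."""
    return max(page_size, io_size) / io_size

print(naive_write_amplification(4 * KB, 8 * KB))   # 2.0 -- 4KB write, 8KB page
print(naive_write_amplification(4 * KB, 16 * KB))  # 4.0 -- 4KB write, 16KB page
```

Real controllers buffer and coalesce small writes precisely to avoid this worst case, which is part of the firmware rework described above.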

Since the 128Gb 20nm IMFT die and 1TB SSDs may be over half a year away, something else has to be done. There are plenty of PCIe SSDs that offer more than 512GB, but outside of enterprise SSDs they are based on the same controllers as regular 2.5” SATA SSDs. What is the trick here, then? It’s nothing more complicated than RAID 0. You put two or more controllers on one PCB, give each controller its own NAND, and tie the package together with a hardware RAID controller. PCIe SSDs additionally have a SATA/SAS to PCIe bridge to provide PCIe connectivity. Basically, that’s two or more SSDs in one package, but only one SATA port or PCIe slot is taken and the drive appears as a single volume.
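
For illustration, here is a minimal sketch of the address math behind RAID 0 striping; the stripe size and helper names are hypothetical, not anything OWC has documented:

```python
def locate(lba: int, num_drives: int, stripe_sectors: int):
    """Map an array LBA to (member drive, LBA on that drive) under RAID 0."""
    stripe = lba // stripe_sectors        # which stripe the LBA falls into
    drive = stripe % num_drives           # stripes alternate across the drives
    local_stripe = stripe // num_drives   # stripe index within the chosen drive
    return drive, local_stripe * stripe_sectors + lba % stripe_sectors

# Two drives, 128-sector (64KB) stripes: consecutive stripes alternate drives.
for lba in (0, 128, 256, 384):
    print(lba, "->", locate(lba, num_drives=2, stripe_sectors=128))
# 0 -> (0, 0), 128 -> (1, 0), 256 -> (0, 128), 384 -> (1, 128)
```

Because the host only ever sees the array LBAs, the OS treats the package as one drive, while large transfers naturally engage both controllers at once.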

PCIe SSDs with several SSD controllers in RAID 0 have existed for a few years now, but we haven’t seen many similar SSDs in SATA form factors. PCIe SSDs are easier in the sense that there are fewer space limitations to worry about. You can have a big PCB, or even multiple PCBs, and there won’t be any issues because PCIe cards are fairly big to begin with. SATA drives, however, have strict dimensional limits. You can’t go any bigger than standard 2.5” or 3.5” because people won’t buy a drive that doesn’t fit in their computer. The 3.5” SSD market is more or less non-existent, so if you want to make a product that will actually sell, you have to go 2.5”. PCBs in 2.5” SSDs are not big; if you want to fit the components of two SSDs inside a 2.5” chassis, you need to be very careful.

SATA is also handicapped in terms of bandwidth. Even with SATA 6Gbps, you are limited to around 560-570MB/s, which can be achieved with a single fast SSD controller. PCIe doesn’t have such limitations: you can go all the way up to 16GB/s with a PCIe 3.0 x16 slot. Typically PCIe SSDs are either PCIe 2.0 x4 or x8, but that is still 2-4GB/s of raw bandwidth, over three times more than what SATA can currently provide. Hence there’s barely any performance benefit from putting two SSDs inside a 2.5” chassis; you’ll still be bottlenecked by the SATA interface.
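
Those figures follow from simple line-rate arithmetic. The sketch below assumes the 8b/10b encoding that both SATA 6Gbps and PCIe 2.0 use (and 128b/130b for PCIe 3.0); the helper is ours, for illustration only:

```python
def usable_mb_per_s(line_rate_gbps: float, lanes: int = 1) -> float:
    """Theoretical payload bandwidth after 8b/10b encoding (10 bits per byte)."""
    return line_rate_gbps * lanes * 1000 / 10

print(usable_mb_per_s(6.0))           # 600.0  -- SATA 6Gbps ceiling (~560-570 in practice)
print(usable_mb_per_s(5.0, lanes=4))  # 2000.0 -- PCIe 2.0 x4
print(usable_mb_per_s(5.0, lanes=8))  # 4000.0 -- PCIe 2.0 x8

# PCIe 3.0 runs at 8GT/s with lighter 128b/130b encoding, so x16 is ~16GB/s:
print(16 * 8.0 * (128 / 130) / 8)     # ~15.75 GB/s
```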

But what you do get from a RAID 0 2.5" SSD is the possibility of increased capacity. The enclosures are big enough to house two regular size PCBs, which in theory allows up to two 512GB SSDs to be installed in the same enclosure. We haven't seen many solutions like this before, but a few months ago OWC released a 960GB Mercury Electra MAX that has two SF-2181 controllers in RAID 0 and 512GiB of NAND per controller. Let's take a deeper look inside, shall we?
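
As a quick sanity check on those numbers, the raw-versus-usable arithmetic works out as follows (a sketch; the spare-area percentage is inferred from the figures, not stated by OWC):

```python
GIB = 1024**3

raw_bytes = 2 * 512 * GIB        # 512GiB of NAND behind each of the two controllers
raw_gb = raw_bytes / 1e9         # ~1099.5 decimal GB of raw NAND
usable_gb = 960                  # advertised capacity
spare = 1 - usable_gb / raw_gb   # NAND reserved for the controllers' own use

print(f"raw: {raw_gb:.1f}GB, usable: {usable_gb}GB, spare area: {spare:.1%}")
# raw: 1099.5GB, usable: 960GB, spare area: 12.7%
```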

Comments

  • cyrusfox - Thursday, October 18, 2012

    I sort of get it, this thing is kind of affordable, but in a year, with next-gen NAND available (128Gb/16GB), all prices will continue to crash ($/GB). And high-capacity NAND in a 2.5" form factor is not all that unique. OCZ has had a 1TB drive out since at least May (OCT1-25SAT3-1T), which can be found on Newegg or Amazon. When you are already spending more than a grand for a drive, you might as well grab one that is at least 6Gbps compatible.

    Either drive will depreciate faster than is acceptable for me. I am still waiting for a $130 fire sale on a Vertex 4 256GB though; right now I have a poor man's RAID 0 of a 128GB Vertex 4 and a 120GB Agility 3, both of which I paid $140-plus for a while ago. It's just as bad as when I spent $100 for 4GB of DDR3 three years ago. Buyer's remorse, the cost of adopting tech early.
  • SpaceRanger - Thursday, October 18, 2012

    If this thing is for audio and video professionals, then it's more than likely targeted at Mac users. Mac users are well known for overpaying for their hardware, so the price for this steaming pile is fitting.
  • ajp_anton - Thursday, October 18, 2012

    You really need to fix your chart generation when performance is very low.
    If the performance number doesn't fit inside the bar, move it next to the bar instead of overlapping it with the product name.
    I commented on this years ago but nothing's happened. We don't often see the bars go so low, but sometimes they do.
  • Kristian Vättö - Thursday, October 18, 2012

    Our CMS generates the graphs automatically, so I can't play with small details like where the actual number is placed. I'll pass word to Anand and see if there is a way to fix it, because I find that irritating as well.
  • JarredWalton - Thursday, October 18, 2012

    Actually, there is a way to do this, Kristian: check the "outer labels" box at the top-right of the graph. I've fixed the two random write charts and regenerated them.
  • ajp_anton - Thursday, October 18, 2012

    Thanks, hopefully all of you remember to do this when necessary (looks like it requires manual work).

    For consistency between different charts, you should either make this the default, or change it so labels are only placed outside the bar for values that are too small. Maybe even for all values below 50% of the largest.
  • Juddog - Thursday, October 18, 2012

    I don't buy that they couldn't afford to put a SATA 6G connection on there. The price is already through the roof, and newer SSDs go way past the normal limits of SATA 3G (some even bump up against the SATA 6G limit).
  • Kristian Vättö - Thursday, October 18, 2012

    OWC didn't exactly specify why they had to stick with SATA 3Gbps; they only said it was a combination of things including price, thermals, and space. I wouldn't be surprised if there simply is no suitable SATA 6Gbps RAID controller, as the market for such controllers is fairly small. I know Silicon Image doesn't have one, at least.
  • dave_the_nerd - Thursday, October 18, 2012

    I have a MacBook with an HDD + SSD in an optical bay adapter, but I'd sooner duct-tape an external drive to the back of the lid than overspend on something like this.

    Hell, I'd rather install a pair of WD Blacks and software RAID them.

    Audio doesn't need as much sequential I/O as video, though, I guess.

    Somebody will buy it though.
  • JonBendtsen - Thursday, October 18, 2012

    What if they used JBOD or linear RAID rather than RAID 0?
