Intel on Tuesday introduced its new form-factor for server-class SSDs. The new "ruler" design is based on the in-development Enterprise & Datacenter Storage Form Factor (EDSFF) and is intended to enable server makers to install up to 1 PB of storage into a 1U machine while supporting all enterprise-grade features. The first SSDs in the ruler form-factor will be available “in the near future,” and the form-factor itself is designed for the long run: it is expandable in terms of interface performance, power, density, and even dimensions.
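
As a back-of-envelope sanity check of the "1 PB in 1U" figure, the sketch below assumes 32 front-loaded ruler slots at 32 TB per drive; Intel has not broken the number down this way here, so both figures are illustrative assumptions.

```python
# Rough check of the "1 PB per 1U server" claim.
# Both constants are assumptions for illustration, not Intel's published breakdown.
RULERS_PER_1U = 32   # assumed front-loaded ruler slots across a 1U chassis
TB_PER_RULER = 32    # assumed per-drive capacity needed to reach the target

total_tb = RULERS_PER_1U * TB_PER_RULER
print(f"{total_tb} TB ≈ {total_tb / 1000:.2f} PB per 1U")  # 1024 TB ≈ 1.02 PB
```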

For many years, SSDs relied on form-factors originally designed for HDDs to ensure compatibility between different types of storage devices in PCs and servers. However, the 2.5” and 3.5” form-factors are not always optimal for SSDs in terms of storage density, cooling, and other aspects. To better address client computers and some types of servers, Intel helped develop the M.2 form-factor for modular SSDs several years ago. While such drives have a lot of advantages when it comes to storage density, they were not designed to support functionality such as hot-plugging, and their cooling is yet another concern. By contrast, the ruler form-factor was developed specifically for server drives and is tailored to the requirements of datacenters. As Intel puts it, the ruler form-factor “delivers the most storage capacity for a server, with the lowest required cooling and power needs.”

From a technical point of view, each ruler SSD is a long, hot-swappable module that can accommodate dozens of NAND flash or 3D XPoint chips, and can thus offer capacities and performance levels that easily exceed those of M.2 modules.

The initial ruler SSDs will use the SFF-TA-1002 "Gen-Z" connector, supporting PCIe 3.1 x4 and x8 interfaces with maximum theoretical bandwidths of around 3.94 GB/s and 7.88 GB/s, respectively, in each direction. Eventually, the modules could gain an x16 interface running at 8 GT/s, 16 GT/s (PCIe Gen 4), or even 25 - 32 GT/s (PCIe Gen 5), should the industry need SSDs with ~50 - 63 GB/s of throughput. In fact, the connectors are ready for PCIe Gen 5 speeds even now, but no hosts yet support the interface.
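
The bandwidth figures above follow directly from the PCIe link math: lanes × line rate × encoding efficiency. A minimal sketch (assuming the 128b/130b encoding that PCIe Gen 3 and later use):

```python
# PCIe peak bandwidth per direction: lanes × line rate × encoding efficiency.
# Gen 3+ uses 128b/130b encoding (~98.5% efficient); divide by 8 bits per byte.
def pcie_gbps(lanes: int, gt_per_s: float, enc: float = 128 / 130) -> float:
    """Theoretical peak bandwidth in GB/s, one direction."""
    return lanes * gt_per_s * enc / 8

print(f"x4  Gen3: {pcie_gbps(4, 8.0):.2f} GB/s")    # ~3.94 GB/s
print(f"x8  Gen3: {pcie_gbps(8, 8.0):.2f} GB/s")    # ~7.88 GB/s
print(f"x16 Gen5: {pcie_gbps(16, 32.0):.1f} GB/s")  # ~63.0 GB/s
```

The same function reproduces the article's ~50 GB/s floor at 25 GT/s and ~63 GB/s ceiling at 32 GT/s for an x16 link.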

One of the key things about the ruler form-factor is that it was designed specifically for server-grade SSDs, and it therefore offers a lot more than standards aimed at client systems. For example, compared to the consumer-grade M.2, a PCIe 3.1 x4-based EDSFF ruler SSD has extra SMBus pins for NVMe management, as well as additional pins to charge power-loss-protection capacitors separately from the drive itself, enabling passive backplanes and lowering their costs. The standard uses a +12 V rail to power the ruler SSDs, and Intel expects the most powerful drives to consume 50 W or more.
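
To put the +12 V / 50 W figures in perspective, here is a rough power-delivery sketch; the 32-slot chassis count is an assumption for illustration, not a figure from Intel:

```python
# Rough power-delivery math for a fully populated ruler backplane.
DRIVE_W = 50   # per-drive power Intel cites for the most powerful SSDs
RAIL_V = 12.0  # EDSFF powers drives from a +12 V rail
SLOTS = 32     # assumed slot count for a fully populated 1U chassis

amps_per_drive = DRIVE_W / RAIL_V
total_w = DRIVE_W * SLOTS
print(f"{amps_per_drive:.2f} A per drive, {total_w} W for a full backplane")
```

Roughly 4.2 A per connector is far easier to deliver at 12 V than it would be at M.2's 3.3 V, which is part of the rationale for the higher rail voltage.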

Servers and backplanes compatible with the rulers will be incompatible with traditional disk form-factor (DFF) SSDs and HDDs, as well as with other proprietary form-factors (so, think of flash-only machines). EDSFF itself has yet to be formalized as a standard, but the working group behind it already counts Dell, Lenovo, HPE, and Samsung among its promoters, and Western Digital as one of several contributors.

It is also noteworthy that Intel has been shipping ruler SSDs based on planar MLC NAND to select partners (think of the usual suspects: large server makers as well as owners of huge datacenters) for about eight months now. While the drives did not take full advantage of the proposed standard – and I'd be surprised if they were even compliant with the final spec – they helped the EDSFF working group members prepare for the future. Moreover, some of Intel's partners have contributed their own features to the upcoming EDSFF standard, and still others are looking at using the form-factor for GPU and FPGA accelerator devices. So it's clear that there is already a lot of industry interest in, and growing support for, the ruler/EDSFF concept.

Finally, among the first drives to be offered in the ruler form-factor will be Intel’s DC P4500-series SSDs, which feature Intel’s enterprise-grade 3D NAND memory and a proprietary controller. Intel does not disclose the maximum capacities of the DC P4500 rulers, but expect them to be significant. Over time, Intel also plans to introduce 3D XPoint-based Optane SSDs in the ruler form-factor.

Source: Intel


  • ZeDestructor - Thursday, August 10, 2017 - link

    "WHAT about Optane makes it appropriate for the "TB in a rack" mass storage market?"

    Big, big data processing. Think 100s of TBs of data in your working set with 10s of PBs worth of total data. Demand is obviously there, which is why they're talking.

    Besides, even if there weren't, you generally want your server to have only one form factor, so if your mainstream NAND is in ruler, may as well have your optane caching layer as ruler too.
  • extide - Wednesday, August 16, 2017 - link

    Intel said it (it's in one of their slides; the slide isn't in the article, but if you view the gallery it is the last one). I could see some people using a couple of rulers' worth of Optane and the rest as flash, then using the few Optane rulers as a cache for the rest of the flash. Filling the whole thing with Optane would be insane.
  • Deicidium369 - Tuesday, June 23, 2020 - link

    Endurance, transfer speeds, latency ... They are the #1 data center deployed drive.

    I have over 100 Optane 2.5" deployed ... so far for a specialty customer.

    You don't know what you're talking about. Optane SSDs are designed for and used in data centers - don't mix up the 2.5" & Ruler drives with the Optane DIMMs.
  • twotwotwo - Thursday, August 10, 2017 - link

    Huh. Samsung's "NGSFF" form factor looks more incremental: a 30.5 mm wide PCB vs 38.6 mm for the whole "ruler". For comparison, 1U is ~44.5 mm high, but you can't use all of that for an SSD, of course. Curious to see which, if either, wins. The height and depth of the "ruler" look kind of constraining for server designers, but also potentially useful if Intel wants to build really large individual SSDs, like large early XPoint devices might be. Guess we'll see.
  • Billy Tallis - Thursday, August 10, 2017 - link

    The depth of the Ruler is constraining if you plan to fill the entire width of the server with Rulers. If you only put a bank of 16 on one half of the server, you still have plenty of room for as much motherboard area as you could need in a 2S server.
  • extide - Wednesday, August 16, 2017 - link

    Seems like you could fill the whole width and still do 2P if you had no PCIe slots in the back and relied only on built-in controllers for Ethernet and such.
  • Comdrpopnfresh - Sunday, August 13, 2017 - link

    Seems like an odd direction. One might think the evolution of hot-swappable NAND would take place between a central controller and the NAND itself, similar to HDDs as the storage behind an IC controller. How much power can be run over those contacts? The dimensions of this thing are huge in areal terms, and given the densities of current SSDs, wouldn't it be limited by heat dissipation or power consumption/requirements?
    I would think the failure risk of a single module containing a controller and high-capacity storage compounds, and is worse than an array configuration of individual conventional M.2 and 3.5" SSDs.
    Bonus points for the size comparison photo of it alongside an Eneloop Pro AA though. Eneloop cells are great.
  • Comdrpopnfresh - Sunday, August 13, 2017 - link

    Correction: I meant to say 2.5"
  • Billy Tallis - Monday, August 14, 2017 - link

    You'll probably never see hot-swap capability on the interface between the controller and NAND. Expecting the SSD to reconfigure the flash translation layer on the fly while preserving data integrity is unreasonable.

    Delivering 50W or more over a Ruler connector is not unreasonable, given that M.2 does 8-12.5W at 3.3V, while the Ruler uses 12V. Heat dissipation is improved in two ways: the higher surface area to volume ratio of the Ruler form factor compared to 2.5" 15mm U.2, and the right-angle connectors used on the backplane mean that there's no PCB obstructing airflow behind the drives.
  • robert kao - Thursday, October 26, 2017 - link

    How much does it weigh?
