Users looking to build their own dual EPYC workstation or system, using completely off-the-shelf components, do not have a lot of options. Most of the CPUs can be bought at retail or through OEM channels, as can memory, a chassis, power supplies, coolers, and add-in cards. But the one item where there isn't a lot of competition for these sorts of builds is the motherboard. Unless you go down the route of buying a server on rails with a motherboard already fitted, there are very limited dual EPYC motherboard options to simply purchase. So few, in fact, that there are only two, both from Supermicro, and both called the H11DSi: one variant has gigabit Ethernet, the other has 10GBase-T.

Looking For a Forest, Only Seeing a Tree

Non-proprietary motherboard options for building a single socket EPYC system are fairly numerous – there's the Supermicro H11SSL, the ASRock EPYCD8-2T (read our review here), the GIGABYTE MZ31-AR0 (read our review here), or the ASUS KNPA-U16, all varying in feature set and starting from $380. For the dual socket space, however, there is only one option. The Supermicro H11DSi, H11DSi-NT, and other minor variants can be found at standard retailers from around $560-$670 and up, depending on source and additional features. All other solutions that we found were part of a pre-built server or system, often using non-standard form factors due to the requests of the customer those systems were built for. As the only 'consumer'-focused dual socket motherboard, the H11DSi has a lot to live up to.

As with other EPYC boards in this space, users have to know which revision of the board they are getting – it is the second revision that supports both Rome and Naples processors. One of the early issues with the single socket models was that some of them were not capable of Rome support, even with an updated BIOS. It should be noted that because the H11DSi was built with Naples in mind, we are limited to PCIe 3.0 here, not the PCIe 4.0 that Rome supports. As a result, we suspect that this motherboard is more suited to users looking to extract the compute from the Rome platform rather than its expanded PCIe functionality. Unfortunately, this also means that there are no commercial dual socket EPYC motherboards with PCIe 4.0 support at the time of writing.

The H11DSi is part E-ATX and part SSI-CEB, so suitable cases should support both standards in order to provide the required mounting holes. With its dual socket layout, the board is a lot longer than what most regular PC users are used to: physically it is one square foot. The board supports all eight memory channels per socket in a 1 DIMM per channel configuration, with up to DDR4-3200 on the Revision 2 models. We successfully placed 2 TB of LRDIMMs (16 × 128 GB) in the system without issue.

As with almost all server motherboards, there is a baseboard management controller in play here – the ASPEED AST2500, which has become the de facto standard in recent years. This allows users to log in to Supermicro's IPMI interface over the dedicated management Ethernet connection, and also provides a 2D video output. We'll cover the interface on the next page.
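
As a rough illustration, and not something covered in this review: recent Supermicro BMC firmware typically exposes the DMTF Redfish REST API alongside IPMI and the web interface, so basic out-of-band inventory can be scripted. The Python sketch below is a minimal example under that assumption; the BMC address and credentials are placeholders.

```python
# Minimal sketch: query a BMC over the standard DMTF Redfish REST API.
# Assumes the BMC firmware exposes Redfish; address/credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()            # BMCs usually ship self-signed certificates

BMC = "https://192.0.2.10"            # BMC IP on the dedicated management port
AUTH = ("ADMIN", "your-password")     # placeholder credentials

def get(path: str) -> dict:
    # verify=False only because of the self-signed certificate noted above
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

if __name__ == "__main__":
    systems = get("/redfish/v1/Systems")
    for member in systems.get("Members", []):
        system = get(member["@odata.id"])
        print(system.get("Model"),
              system.get("PowerState"),
              system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"), "GiB")
```

The same API tree also exposes sensor and power readings, which makes it possible to monitor a dual socket box without any OS-side agents.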

Ethernet connectivity depends on which variant of the H11DSi you buy: the base model has two gigabit ports driven by an Intel i350-AM2 controller, while the H11DSi-NT has two 10GBase-T ports from an onboard Intel X550-AT2. Because this controller has a higher TDP than the gigabit controller, it gets an additional heatsink next to the PCIe slots.

The board has a total of ten SATA ports: two SATA-DOM ports, plus four SATA ports from each CPU through two Mini-SAS connectors. It's worth noting that the two sets of four ports come from different CPUs, so any software RAID spanning both sets is going to incur a performance penalty. In a similar vein, the PCIe slots also come from different CPUs: the top slot is a PCIe 3.0 x8 from CPU 2, whereas the other slots (PCIe 3.0 x16/x8/x16/x8) all come from CPU 1. This means that most of CPU 2's PCIe lanes go unused.
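
On the question of which socket owns which device, a quick way to verify this on a running Linux system (again, not something from the review itself) is to read the numa_node attribute the kernel exposes in sysfs. The short Python sketch below assumes a standard sysfs layout and walks up each block device's device path until it finds that attribute.

```python
# Minimal sketch: report which NUMA node (i.e. which socket's root complex)
# each block device hangs off, using the kernel's sysfs numa_node attribute.
# Assumes a Linux host with a standard sysfs layout.
from pathlib import Path

def numa_node_of(dev: str) -> int:
    """Walk up the resolved sysfs path of a block device until a
    numa_node attribute is found (it lives on the parent PCI device)."""
    path = Path(f"/sys/block/{dev}").resolve()
    for parent in path.parents:
        node_file = parent / "numa_node"
        if node_file.exists():
            return int(node_file.read_text().strip())
    return -1  # no NUMA information exposed for this device

if __name__ == "__main__":
    for dev in sorted(p.name for p in Path("/sys/block").iterdir()):
        print(f"{dev}: NUMA node {numa_node_of(dev)}")
```

Drives that report different node numbers are exactly the ones where a software RAID spanning them would have to cross the socket interconnect.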

Also on the storage front is an M.2 x2 slot, which supports both PCIe and SATA drives with Naples, but only PCIe with Rome. The power cabling is all in the top right of the motherboard: the 24-pin main motherboard power as well as two 12V 8-pin connectors, one for each CPU. Each socket is backed by a 5-phase server-grade VRM, and the motherboard has eight 4-pin fan headers for plenty of cooling. The VRM sits under a central heatsink designed to take advantage of cross-board airflow, which will be a critical consideration in any system built around this board.

We tested the motherboard with a pair of EPYC 7642 (Rome, 48-core) processors and a pair of the latest EPYC 7F52 (Rome, 16-core high-frequency) processors without issue.


36 Comments

  • 1_rick - Wednesday, May 13, 2020 - link

    Yeah, "numerous" was the correct word here.
  • peevee - Thursday, May 14, 2020 - link

    Nope. 1 is not numerous.
  • heavysoil - Friday, May 15, 2020 - link

    He's talking about the options for single socket, and lists three - numerous compared to the single available option for dual socket.
  • Guspaz - Wednesday, May 13, 2020 - link

    $600 enterprise board supporting up to 256 threads, and it's still just using 1-gigabit NICs?
  • Sivar - Wednesday, May 13, 2020 - link

    "Don't worry, widespread 10-gigabit is just around the corner." --2006
  • Holliday75 - Wednesday, May 13, 2020 - link

    1gb is pennies. 10gb costs a bit more. If you plan on using a different solution you have the option to get the cheaper board and install it. Save the 1gb for management duties or not at all.
  • DigitalFreak - Wednesday, May 13, 2020 - link

    Why waste the money on onboard 10 gig NICs when most buyers are going to throw in their own NIC anyway?
  • AdditionalPylons - Thursday, May 14, 2020 - link

    Exactly. This way the user is free to choose from 10/25/100 GbE or even Infiniband or something more exotic if they wish. I would personally go for a 25 GbE card (about $100 used).
  • heavysoil - Friday, May 15, 2020 - link

    There's one model with gigabit NICs, and one with 10 gigabit NICs. That covers what most people would want, and PCIe NICs for SFP+ and/or 25/40/100 gigabit covers most everyone else.

    I can see this with the 1 gigabit NICs for monitoring/management and a 25 gigabit PCIe card for the VMs to use, for example.
  • eek2121 - Wednesday, May 13, 2020 - link

    I wish AMD would restructure their lineup a bit next gen.

    - Their HEDT offerings are decently priced, but the boards are not.
    - All of the HEDT boards I’ve seen are gimmicky, not supporting features like ECC, and are focused on gaming and the like.
    - HEDT does not support a dual socket config, so you would naturally want to step up to EPYC. However, EPYC is honestly complete overkill, and the boards are typically cut down server variants.
    - For those that don’t need HEDT, but need more IO, they don’t have an offering at all.

    I would love to see future iterations of Zen support an optional quad channel mode or higher, ECC standardized across the board (though if people realized how little ECC matters in modern systems...), and more PCIE lanes for everything.
