One motherboard upgrade that has been a long time coming is the integration of 10 Gigabit Ethernet on consumer-level motherboards, specifically copper-based 10GBase-T, which is backward compatible with the RJ-45 cabling used in the majority of home networks. While 10G adoption is scaling in business and enterprise, cost remains a significant barrier for home, prosumer, and consumer implementations. We recently posted a news update covering the current 10GBase-T motherboards on the market, and this is the second review from that list: today we are testing ASUS' new high-end LGA2011-3 workstation refresh model, the ASUS X99-E-10G WS. The motherboard uses Intel's latest 10GBase-T controller, the X550, which runs as a PCIe 3.0 x4 implementation.

Other AnandTech Reviews for Intel’s LGA2011-3 Platform

The Intel Core i7-6950X, i7-6900K, i7-6850K and i7-6800K Broadwell-E Review
The Intel Core i7-5960X, i7-5930K and i7-5820K Haswell-E Review
The Intel Xeon E5 v3 Fourteen-Core Review (E5-2695 v3, E5-2697 v3)
The Intel Xeon E5 v3 Twelve-Core Review (E5-2650L v3, E5-2690 v3)
The Intel Xeon E5 v3 Ten-Core Review (E5-2650 v3, E5-2687W v3)

X99 Series Motherboard Reviews:
Prices Correct at time of each review

$750: The ASRock X99 WS-E 10G Review [link]
$600: The ASUS X99-E-10G WS Review (this review)
$600: The ASRock X99 Extreme11 Review [link]
$500: The ASUS Rampage V Extreme Review [link]
$400: The ASUS X99-Deluxe Review [link]
$340: The GIGABYTE X99-Gaming G1 WiFi Review [link]
$330: The ASRock X99 OC Formula Review [link]
$323: The ASRock X99 WS Review [link]
$310: The GIGABYTE X99-UD7 WiFi Review [link]
$310: The ASUS X99 Sabertooth Review [link]
$300: The GIGABYTE X99-SOC Champion Review [link]
$300: The ASRock X99E-ITX Review [link]
$300: The MSI X99S MPower Review [link]
$275: The ASUS X99-A Review [link]
$241: The MSI X99S SLI PLUS Review [link]

The State of the 10GBase-T Market

Integrating 10GBase-T onto a motherboard is currently an expensive process. To get full bandwidth, at a bare minimum either a PCIe 2.0 x4 or a PCIe 3.0 x2 connection per port is needed, depending on the controller used. This controller traditionally interfaces with the CPU, reducing the PCIe lanes available for other large PCIe devices and co-processors, such as GPUs, storage cards or professional compute cards. On the consumer side, running the controller directly from the larger PCIe allocation of the 100-series chipsets is a potential future play, although the 100-series chipset connects to the CPU via a PCIe 3.0 x4 equivalent link, which may itself become a bottleneck.
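As a quick sanity check on those lane counts, here is a sketch using nominal per-lane PCIe figures (encoding overhead included; these are theoretical numbers, not measured throughput):

```python
# Usable per-lane bandwidth after encoding overhead (nominal figures):
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 4.0 Gbit/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~7.88 Gbit/s per lane
PCIE2_LANE_GBPS = 5.0 * 8 / 10
PCIE3_LANE_GBPS = 8.0 * 128 / 130

PORT_NEED_GBPS = 10.0  # one 10GBase-T port, per direction

for name, bw in [("PCIe 2.0 x4", 4 * PCIE2_LANE_GBPS),
                 ("PCIe 3.0 x2", 2 * PCIE3_LANE_GBPS)]:
    print(f"{name}: {bw:.1f} Gbit/s -> {'OK' if bw >= PORT_NEED_GBPS else 'too slow'}")
```

Both configurations clear 10 Gbit/s with headroom, while a single PCIe 3.0 lane (~7.9 Gbit/s) would not, which is why x2 is the floor per port.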

There are three main commercial controllers currently on offer that are used in both PCIe cards and motherboard integration. First, and what we've seen so far, is the Intel X540 family of controllers, which requires eight lanes and runs at PCIe 2.0 speeds (i.e. even in a PCIe 3.0 environment it still needs x8, as the controller is only PCIe 2.0). Its upgrade, the Intel X550 family, makes the leap to PCIe 3.0 and requires only an x4 link, which makes it easier to integrate into a modern platform but might be a touch more expensive by virtue of being new. Third is an Aquantia / Tehuti Networks solution, which we've seen on 10GBase-T PCIe cards bundled with certain motherboard configurations or sold separately by third parties. The Intel X540/X550 parts are families of controllers offering single and dual port designs, and to our knowledge they are better supported and use less motherboard area (but are more expensive) than the Tehuti solution. All of these chips dissipate up to 15W on their own, requiring a motherboard built to disperse the extra heat generated.

As a result, any user looking at an integrated 10GBase-T solution has only a few options, and will have to find a way to justify the cost (which is easier from a business perspective). Aside from the 10GBase-T switch cost (the cheapest options being a 2-port unmanaged switch for $250 from ASUS, an 8-port previous-generation Netgear XS708 for $700, or a 16-port Netgear for ~$1400), the previous motherboard we reviewed with an integrated X540-T2 controller still runs at $700, over a year after its release. The controller cost is around $100-$200, depending on the motherboard manufacturer's deal with Intel, which leads to a direct bill-of-materials (BOM) increase in the base cost. PCIe cards with single or dual ports can be purchased for around $250-$400, depending on sales, support, and whether they are new. (For those looking beyond copper, there are also fiber solutions available, but they are less likely to be integrated into a home or current SMB setup without prior planning.)
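Using the ballpark prices above, the outlay for a small copper 10G network adds up quickly. A rough sketch (the switch and NIC prices are illustrative midpoints from the ranges quoted, not vendor quotes):

```python
# Illustrative cost of wiring N machines for 10GBase-T, using the
# ballpark figures quoted above (assumed prices, not current quotes).
SWITCH_8PORT = 700    # previous-generation 8-port Netgear class
NIC_ADDIN = 300       # midpoint of the $250-$400 PCIe card range

def network_cost(machines: int) -> int:
    """Total for one 8-port switch plus one add-in NIC per machine."""
    return SWITCH_8PORT + machines * NIC_ADDIN

for n in (2, 4, 8):
    print(f"{n} machines: ${network_cost(n)}")
```

Even a modest four-machine setup lands near $1900 before cabling, which goes some way to explaining why adoption has been slow outside of business use.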

Anyone looking to migrate a home network to 10GBase-T has to be aware of this outlay, and a number of users (myself included) are waiting patiently until the cost of such an ecosystem comes down. I do wonder exactly what the tipping point would be for a number of enthusiasts to make the jump, especially with several networking technologies in the works (such as 2.5G/5G, or the 802.11ad wireless routers now coming to market offering consumers gigabit line-of-sight connectivity). Some companies have asked me what that tipping point is, and to be honest I still think it's the switch: a 4x10G + 4x1G port managed switch for $250 would sell like hot cakes, regardless of the cost of controllers.

The ASUS X99-E-10G WS Overview

The feature that's hard to ignore is the pair of 10G ports, and to be honest, buying this motherboard only makes sense if you need them (or are trying to 'futureproof' a system built for 3-5 years). Adding the capability to also support x16/x16/x16/x16 across the main PCIe ports means extra, expensive hardware is needed for full bandwidth support.

This ability comes through PCIe switches, namely a pair of Avago PLX8747 switches. Each of these add-ons (~$50 each in final cost) takes sixteen lanes of PCIe 3.0 from the processor and multiplexes them out as thirty-two lanes, arranged as x16/x16. As the main processors for this motherboard, such as the Intel Core i7-6950X, offer 40 lanes of PCIe, taking 32 away leaves eight. These final eight lanes are split into four for the 10G controller and four for the U.2/M.2 PCIe 3.0 x4 slot at the bottom of the board. ASUS intends this motherboard to be the single port of call for all your PCIe needs.
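The lane budget works out as a simple partition of the CPU's 40 lanes. A sketch of the routing as described above (a simplified view, not ASUS's exact topology diagram):

```python
# PCIe 3.0 lane budget for a 40-lane CPU (e.g. Core i7-6950X),
# as routed on this board per the description above.
CPU_LANES = 40

to_plx_switches = 2 * 16   # two PLX8747s, x16 upstream each
to_x550 = 4                # Intel X550 10GBase-T controller
to_u2_m2 = 4               # U.2/M.2 PCIe 3.0 x4 slot

assert to_plx_switches + to_x550 + to_u2_m2 == CPU_LANES

# Each PLX8747 fans its x16 uplink out to x16/x16 downstream,
# so the four main GPU slots see 64 lanes in total.
downstream_gpu_lanes = 2 * 32
print(f"{downstream_gpu_lanes} downstream GPU lanes from "
      f"{to_plx_switches} upstream lanes")
```

The trade-off is clear from the numbers: the GPU slots collectively present twice the lanes the CPU actually provides, which is why switch overhead matters only when all slots demand peak bandwidth simultaneously.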

One of the benefits of this PCIe configuration is that the board can support a full complement of GPUs for 4-way SLI or 4-way Crossfire (or even more for compute tasks, depending on GPU size or riser cables). One of the main criticisms of PCIe switches is that they add a small amount of overhead which could reduce peak performance, but in gaming, as we've tested before, it is sub-1%. In fact, this is the only way to support 4-way x16, and it allows for faster GPU-to-GPU communication between adjacent GPUs on the same switch, which can be required for compute tasks.

As this is a premium motherboard, ASUS didn't skimp on the 'regular' features either. Starting with the OC Socket used on its premium LGA2011-3 platforms, the power delivery is reinforced with ASUS' high-end chokes as well as an extended heatsink arrangement for the high-powered ICs present. The X99-E-10G WS supports 128GB of DDR4-2133, including ECC registered memory with an appropriate Xeon E5 v4 processor, and has profiles up to DDR4-3333 for non-ECC gaming memory. Aside from the ten SATA ports, U.2 and M.2, ASUS' WS line is designed to be validated against a longer list of workstation-class hardware, such as RAID cards and FPGAs, to ensure compatibility. Given seven 16-way RAID cards, the motherboard makes an interesting storage proposition. Or add in more 10G ports.

Due to the 10G ports, ASUS does not include any 1G controllers; however, the 10G ports do negotiate down to 1G speeds. For audio, ASUS uses its upgraded Realtek ALC1150 solution with filter caps, PCB separation and additional audio software. On the rear panel, ASUS has removed all USB 2.0 ports, leaving a pair of USB 3.1 ports (one Type-A, one Type-C) and a set of four USB 3.0 ports.

The PCIe slots also get an upgrade here, with the four main GPU slots featuring semi-transparent latches that the user can light up via a DIP switch to indicate which slots should be populated to maximize bandwidth in 2-way, 3-way or 4-way GPU configurations. Each of the seven slots also has metallic reinforcement embedded into the slot itself, designed to maintain rigidity when heavy PCIe devices are used or when devices are installed during bumpy transit.

Performance-wise, it is sufficient to say that the idle power of this WS board is higher than that of standard X99 motherboards; however, for consumer CPUs, Multi-Core Turbo is enabled by default, giving a little extra speed (at the expense of a bit of power). Metrics such as DPC latency and audio quality both land in the better half of our results tables, although as with most WS boards with extra features, POST time is a little longer than normal. We tested the board with up to 3-way SLI (I didn't have a fourth GTX 980, sorry), seeing game-dependent gains at 4K.

Quick Links to Other Pages

In The Box and Visual Inspection
Test Bed and Setup
Benchmark Overview
BIOS
Software
System Performance (Audio, USB, Power, POST Times on Windows 7, Latency)
CPU Performance, Short Form (Office Tests and Transcoding)
Single GPU Gaming Performance (R7 240, GTX 770, GTX 980)
Testing up to 3xGTX 980 and 10G
Conclusions

Board Features, Visual Inspection
Comments

  • kgardas - Tuesday, November 8, 2016 - link

Looks really nice, ~6W for 10Gbit is good and very low by today's standards. The only drawback in comparison with Intel is PCIe 2.0-only support, so for 10Gbit you need 4 PCIe lanes. Otherwise I'm looking forward to seeing this card here...
  • Notmyusualid - Friday, December 2, 2016 - link

    @ kgardas: You should have seen our 10G DWDM telecom equipment, back in late 1998... more than 6W I can tell you :) , in fact we couldn't get it to work without forced air, each transceiver taking up a whole rack shelf, and we could only fit three shelves / rack space. The electrical complexity / number of boards to make it work was astounding.

    Incredible to see it done on a single card now, and more often now, even multiples of, on a single card.

    So yes, tech moves on...
  • Lolimaster - Tuesday, November 8, 2016 - link

    I think you should dive the PSU's used.

    Only a high wattage for multigpu test (850w+)
    500-650w Titanium for any cpu + single gpu / APU-intel IGP powered systems
  • ads295 - Wednesday, November 9, 2016 - link

    You know how those clickbait websites show cleavage or a$$?
    The thumbnail for this article led me to open it in the same vein. :O
  • Breit - Thursday, November 10, 2016 - link

    Thanks for this review Ian, very informative.

    While reading the comments here, the single feature that seems to attract the most attention is the inclusion of 10G Ethernet. As it seems rather hard to implement a good performing 10G network compared to 1G, maybe an AnandTech-style in-depth article about 10G networking in general would be appreciated by the readers of this site. Just a suggestion.
    At least I would appreciate it... ;)
  • JlHADJOE - Friday, November 11, 2016 - link

Didn't think I'd see the day when an ASUS motherboard is both cheaper and has more features than its ASRock counterpart.
  • Notmyusualid - Friday, December 2, 2016 - link

    More features?

    I don't see a SATA DOM port.

It is missing 2x 1Gb Ethernet ports.

    It is missing 2 SATA ports (12 vs 10)

It has only a 10-phase power solution, vs 12-phase.

    It has no USB 2.0 ports did I read correctly?

It has no fan on the 10G heatsink either, which allows the case temp to equalize with outside temps for some time after shutdown, to avoid condensation building up in the case.

    Can you mount the same range of M.2 SSDs in this? I see only two mounting holes, mine has four...

    Board-mounted USB port, for DRM-related stick, or whatever you need connecting / secured on the INSIDE of a case.

    I also believe I have LAN LED headers to put network activity on the front panel, as one does with their hard disks.

    So tell me if I'm wrong, please.

    One thing I'll say, I do find the 6-pin board power connector much more elegant than my 4-pin Molex connector. And I cannot STAND my anodized blue... the black on the ASUS is also more elegant.

    Anybody who needs their pcie slots lit, to choose the right combo shouldn't be allowed to buy it..
  • Notmyusualid - Friday, December 2, 2016 - link

    also @ Jihadjoe

    Mine has TB header too. Almost forgot about that...
  • Hixbot - Tuesday, November 22, 2016 - link

Don't understand the move to 10G copper. We should be transitioning towards 10G fiber. Copper can't carry 10G a practical distance: 55 meters for unshielded Cat 6 cable. That's not very far. 100 meters for shielded Cat 6, that's more reasonable, but has anyone priced shielded Cat 6 cable? It's very expensive, and good luck terminating the shielded RJ45 yourself to Cat 6 standards. In my workplace, we've had to order pre-terminated lengths of shielded Cat 6. Whenever we use fiber it's easier to terminate, costs are much cheaper, and distance is practically unlimited.

    So what is with the move to 10G copper?
  • Notmyusualid - Friday, December 2, 2016 - link

    As an owner of the asrock, I too would have preferred SFP sockets.

    But SMBs CAN afford $700 for a switch, and many of them have little fiber. My 2c.
