For a special occasion, and with what looked like a pricing error, I decided to splash out on a 10GBase-T switch for my testing lab. Coming in at almost £800, reduced from £1700, this beast was not cheap, but it was surprisingly below my personal cost-per-port threshold to get into the 10-gigabit game. Rather than review the switch (how do you review a switch anyway?), I just want to go through what this thing is and what I can do with it, plus some rough point-to-point bandwidth numbers.

The Quest for 10G on Copper

One of my personal crusades in recent years has been to push 10-gigabit networking – specifically Ethernet over copper (10GBase-T) – into a price range that is more amenable to home users. For a long time, this technology has been priced for commercial and enterprise use: upwards of $100 per port for the switch and $100-$200 per port for the add-in cards. This is partly because the technology comes with a lot of enterprise bells and whistles, such as QoS, but also because there has never been a big drive for anything more than gigabit Ethernet in the home.

Recently this has changed somewhat. After a decade of Intel’s 10G silicon sitting on the shelves, Aquantia came in and started offering add-in cards below $100 – not only for 10G but also for the new 2.5G and 5G standards. Their idea is to expand the market for this technology, given that they have been in the backhaul and networking backbone markets for a while. They had a two-year lead over others on the 2.5G/5G silicon, but the key issue (as I explained to them over a year ago) was that making it happen in the home would require switches. These switches could be either managed or unmanaged, but really there needs to be a $50/port or $30/port series of switches for multi-gigabit to take off. So I ran an online poll just for this.

Out of 137 voters at the time, about 10% said they would jump on the technology at $80 per port. Around a third said $50 per port, and 60% or so said $30 per port. To be honest, these results were around what I expected. Personally, I think a $250 5-port switch would be a great point to enter the market.

All that being said, and as much as the good folks at Aquantia agree with me, they don’t make the switches – it’s up to the Netgears, the D-Links, the TP-Links, and such to actually build them. I don’t have contacts at any of them to ask what their thoughts are, but they haven’t been as quick as I had hoped. My guess is that they don’t want to build cheap 10G switches that might pull business away from their high-margin enterprise hardware.

The State of 10GBase-T

A while back, before Aquantia burst onto the scene, we did a piece covering every consumer motherboard with 10GBase-T built in. The article saw insane traffic for such a short piece, but it also showed that every one of those motherboards was using Intel’s X540-T2 controller chip. That chip was expensive (adding ~$250 to a board's retail price), power hungry, and required a good number of PCIe lanes. The upside was that most of these boards were dual port.

Since then, we’ve seen boards with Aquantia AQC107 (and AQC108) chips on board, which raise the price of a board by $70-$100 for a single port, but this is still a far more accessible way of enabling anything better than gigabit Ethernet on a PC. Then there's the range of 10GbE PCIe cards available, running at around $100.

As for the switches, the only options were a number of managed 8-port models from the likes of Netgear, such as the Netgear XS708E, which sat around the $750 mark. Shelling out $80-$100 per port (after taxes), as we saw in the poll above, is a little insane for a home network and doesn’t appeal to very many users.

In the last year or so, a number of switches have hit the market offering two 10GBase-T ports and eight 1G ports. These include the ASUS XG-U2008, which has been on sale for $250-$300 or so, the gaming-focused Netgear GS810EMX at $250-$300, and the Netgear GS110EMX, a non-gaming version for slightly less. The problem with these switches is that they only have two 10G ports – there’s no way to build a ‘tree’ out of them, so given how cheap gigabit switches are, they essentially become an expensive point-to-point connection.


The Netgear Gaming 2x10G + 8x1G managed switch

So, as of this week, that was the state of play for 10G offerings.

The offerings are still pretty abysmal for anyone looking for a ‘quick fix’ to enable 10GBase-T in the home.

It Was A Misprice or Something

So this week, when a family member asked me what I wanted for my birthday, I idly flicked through some switch listings. Thinking I might just splurge on a 2-port model, I was hoping that an 8-port had come down in price. What I found, without too much trouble, was the Netgear XS724EM, a 24-port 10GBase-T switch. My search hadn’t been for that many ports – I had assumed anything that size would automatically be too expensive.

The XS724EM had an RRP of £1700. The price in front of me was £782. After a quick rant on Twitter, it was a no-brainer (ed: I still think you're insane). At £782 / $858, this was a 55% discount, and it comes in at just under $36 per port. I expected the cost of these switches to come down at some point, although I didn’t anticipate that the first one to do so would be a super-large one. Not only that, but it also supports 2.5G and 5G, so it is still beneficial on existing Cat 5e runs.

If you go to the page today, you will see that this might have been a misprice.

The unit is currently up for £1280, almost £500 more than what I paid for it. Bargain. Prime delivery too.

Unboxing the XS724EM

After showing the box to the resident feline population, it was time to see what we had. The side of the box gives a lot of pertinent information: the unit weighs 3.72 kg / 8.21 lbs, which will be a key point for some users.

In the box, the unit is well packaged with foam blocks, although there is little space above and below it should the box be punctured.

Aside from the manual, the box came with two power cords (one UK, one EU), rubber feet for users putting the switch on a desk somewhere, and brackets for mounting the unit in a standard 19-inch rack. Some comments online state that when rack-mounted using just these brackets, the unit ends up very rear-heavy, putting a lot of torque on the screws if it isn’t sitting directly above a server. In that case, it might be worth investing in rails.

The manual gives examples of how to connect the switch to multiple devices. Interestingly it thinks that gaming laptops with 2.5G connections are somewhat ubiquitous – I think someone should tell Netgear this is not the case.

There is also a smartphone app to help with additional management.

The cables for the switch are designed to go in at the front, where we get 24x 1G/2.5G/5G/10GBase-T ports for RJ45 cables. There are also two 10G SFP+ ports on the right, muxed with the final two 10GBase-T ports, so only one of each shared pair can be used at a time.

On a normal gigabit Ethernet link, both port LEDs light up orange and flicker with data. To distinguish 2.5G, 5G, and 10G, the LEDs go green and use different patterns depending on the link speed.

There’s a Kensington lock slot on the rear for physical security.

Airflow through the unit is provided by three fans at the exhaust side, with the intake on the opposite side.

Opening the chassis requires removing two screws on either side and three on the rear, after which the cover slides off like a standard server chassis, leaving the front panel in place.

At the rear of the chassis, covered by a shroud, is the built-in power supply. The main PCB has several big heatsinks on it, which we’ll get to in a bit.

The fans in the chassis are Delta AFB0412SHB brushless fans, and these can kick up quite a noise at full blast. Luckily the only time I’ve heard them on full is when turning the unit on.

On the PCB are the controllers covered in aluminium heatsinks. These heatsinks are big and heavy, and there’s even a metal plate on top of the main switching fabric.

I actually tried to take this plate off to see the controllers underneath, but that was a no-go. The heatsinks use additional thermal pads to keep the plate attached and to conduct heat through the unit. As I bought this unit with my own money for personal use, rather than with AnandTech’s money or as a review sample, I wasn’t willing to risk breaking anything. Sorry.

After fitting it all back together, and putting the rubber feet on, it was time to hook it up to my home network.

This switch is going to sit at the crossroads of my five main test beds, along with a Steam cache server (to enable quicker downloads), a local NAS, and a few other devices. I’ll certainly be doing some office rearrangement soon to make the most of the switch.

Using the Switch

This is a managed switch, which means there is the opportunity to go in and organise all of the settings. However, for users who just want to use it as a switch, it is almost as easy as plug and play. In fact, it was plug and play to begin with – the only change I made was to go into the switch's web interface and disable its DHCP, since DHCP on my network is handled by my router.

Logging in was straightforward (the IP and password are on the bottom of the switch, and the default password is 'password'), and the management interface seems suitable for what it was designed for. Users can see which ports are connected and at what speeds, limit connectivity per port, and set up VLANs. In my case I’m not going to be using much of this, but the VLAN and QoS options will be key for office users.

Performance

As it turns out, testing networking hardware is difficult. If you really want a detailed overview of a switch, it requires the best part of 12-16 systems hitting it hard, aggregating the results for latency and bandwidth, and also keeping track of power, temperature, and noise. Unfortunately I have neither the time nor the facilities to do that, so a quick blast of iperf for point-to-point speeds is what we have at hand.

For our testing systems, on one end I have an AMD Ryzen Pro 2400GE (35W) APU system equipped with an Intel X540-T2 PCIe card, and on the other end an X170 motherboard with a Core i3-7100T (35W) and an Aquantia AQC107 PCIe card. Both systems were running Windows 10 x64 Enterprise 1803. I installed the cards and drivers, made no other settings changes, and ran iperf while varying the number of parallel connections.
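A sweep along those lines can be scripted in a few lines. The sketch below is an illustration only: it assumes iperf3 is installed on both machines, that the far end is already running a server (iperf3 -s), and that the address and stream counts shown are placeholders rather than the exact settings used in this testing.

import subprocess

SERVER = "192.168.1.50"  # hypothetical address of the machine running 'iperf3 -s'

for streams in (1, 2, 4, 8):
    # -c: client mode, -P: number of parallel streams, -t: run length in seconds
    result = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(streams), "-t", "30"],
        capture_output=True, text=True,
    )
    print(f"--- {streams} parallel stream(s) ---")
    print(result.stdout)

With multiple streams, iperf3 prints per-stream results plus a summary line for the aggregate, and that aggregate is the headline figure discussed below.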

The default settings in iperf and on the two systems showed that we could, in theory, reach transfer rates of around 9.3 Gbps. The cards could also be the limiting factor here – one of the dangers of testing networking is that typically a 10G card is connected to a 10G switch, which is connected to another 10G card, and any one of those three parts could be the bottleneck. I did note that iperf easily used 85% of one thread on each system, so it could be that a faster CPU is needed for better performance as well.

But a more immediate concern when buying a switch like this is noise. This is a switch designed for the hubbub of a small office or a server rack – not necessarily a home office where I might be recording audio. However, in my use so far, the only time the fans have come on is when the machine is turned on (much like some motherboards run all fans at full speed until the startup sequence finishes). After that initial 15-second startup, the fans go silent. When testing point-to-point peak speeds over several minutes, the unit is still silent. It naturally gets warm to the touch, but in my setup it is out of the way on a desk. I’m sure I can find a place for the cats to sit on it and enjoy.

The Final Word

As mentioned, I forked over my own cash for this hardware. At $36 a port, I’m still amazed that the first switch to cross my $50/port line was a massive 24-port unit, so now I have overkill for whatever I have planned (ed: it may involve CPUs and motherboards). The key thing here, for me, will be my testing – every new testbed requires 100GB of CPU tests and 800GB+ of gaming tests, so copying these over takes time. On a gigabit network, using my new Steam cache at a speed of 70MB/s, a big game like GTA5 can still take 13 minutes. I’m hoping that with 10G, if I can push that transfer speed to SATA limits, the total time will come down to around two minutes. There's also the possibility of doing some network card testing in the future now.
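As a rough sanity check on those numbers – assuming a game install of around 55 GB, which is a ballpark figure for illustration rather than a measured size – the back-of-the-envelope maths looks like this:

# Back-of-the-envelope transfer times for an assumed ~55 GB game install.
game_gb = 55
scenarios = {
    "Gigabit via Steam cache (~70 MB/s)": 70,
    "10G at SATA SSD limits (~500 MB/s)": 500,
}
for label, mb_per_s in scenarios.items():
    minutes = (game_gb * 1000) / mb_per_s / 60
    print(f"{label}: ~{minutes:.1f} minutes")
# Prints roughly 13 minutes and just under 2 minutes respectively.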

 


Comments

  • abufrejoval - Sunday, September 30, 2018 - link

    Beat you to it a couple of weeks ago using a 12-port Buffalo Technology BS-MP2012 at €600, or €50/port including taxes; the initial report is somewhere here on this site.

    The Aquantia NICs were down to €80/piece for a week or so, so I upgraded all my home-lab’s core servers.

    Been on that very same mission for 10 years and only stumbled across that 12-port NBase-T switch in summer. I had been using direct connect cables with Intel and Broadcom 10Gbase-T adapters before, but removed them from my home-lab, because those NICs required too much cooling at 10Watts/port: Those were dual port NICs targeting rack-mount servers with serious air-flow, and they kept dying in my desktops.

    With Aquantia this is down to 3Watts/port (1xx series on the NIC, three 4xx series chips on the switch for a total of around 40Watts TDP), which works just fine with my noise-optimized home-lab desktop-technology servers.

    And noise was the major challenge with the Buffalo switch, too, as the original fans are just not “desktop-compatible”, but need to remove 40 Watts of heat. I installed Noctua 40x40x20mm fans with constant air-flow, voiding all warranties and putting the life of my family at risk, but I can no longer hear it, while it just gets a little warm, not hot.

    Incidentally last week I also went the next step to 100Gbit/s in the corporate lab!

    Mellanox offers hybrid NICs, ConnectX-5 adapters that will support both Ethernet and Infiniband semantics, even NVMe over fabric so you get “memory”, “network” and “storage” semantics across a single fabric at close to PCIe 3.0 x16 limits.

    Since the NIC and the switch silicon is essentially the same, only a different size, the Mellanox engineers decided to include a “host-chaining” mode, which allows you to daisy-chain NICs using cheap direct-connect cables (€100/piece) without a switch, similar to ARC-Net, Token-Ring or Fibre-Channel/Arbitrated Loop (FC-AL). Of course it means a shared medium, so it doesn’t scale, but at 100Gbit it takes 10 ports to surpass 10Gbit in star formation. And then you can just create meshes etc. adding more NICs to your servers: Composable, hyper-converged hardware, a CIO’s wet dream!

    Obviously Mellanox management wasn’t too happy about that, so currently it only works with the Ethernet personality of the VPI NICs, and I only managed to massage 30Gbit/s out of these links, even though the boxes are beefy Scalable Gold Xeons.

    I find this daisy chaining mode extremely intriguing because you can build all sorts of interconnect topologies, while you save on the jump costs of central switches.
  • oRAirwolf - Monday, October 1, 2018 - link

    Nice catch. I would have happily paid that. I would be very interested to see some comparisons done between an aqc107, x540, x550, and a mellanox connectx-3. I use the aqc107 and connectx-3 in my home network and would love to see some data about CPU usage and latency.
  • piroroadkill - Monday, October 1, 2018 - link

    The cheapest I know of (that I also bought) is the Netgear MS510TX. It has a 10Gbit SFP port, a 10Gbit copper port, and then 2× 5Gbit ports, and then 2×2.5Gbit ports, alongside 4× 1Gbit ports.
    I feel like that's actually a pretty decent setup for the price ($270).

    However, I have nothing but issues with the Aquantia cards and this switch, regardless of cables. I've tried newer drivers, older drivers, different operating systems, forget it. It's so flaky that I go back to using the onboard Intel NICs, which never have issues.
  • cm2187 - Monday, October 1, 2018 - link

    What I am really waiting for to fully adopt 10GBE is the ability to have longer thunderbolt 3 cables. Almost all laptops have no ethernet ports anymore, let alone 10 gigabit, but a laptop connected by a single cable to a dock that would do power + 10GBE would work for me. But it's not lapable with a 50cm cable. I kind of need 2m-ish.
  • doggface - Tuesday, October 2, 2018 - link

    I would imagine my use case would be common in that I have a NAS that has a peak transfer speed of 1Gbit (~100MB/s). According to WD, my NAS drives can hit ~200-250 MB/s, so 5G would shift my bottleneck to the HDDs rather than the network interface, and make things like SSD caching a valid exercise. SSD-based storage would also be an interesting proposition, especially since the $ per GB is dropping. But this would require at minimum a 5-port 5G switch, which I am probably not going to buy above $100-150.

    I don't do pro-sumer or pro-grade stuff so I don't have the necessity that others have here and I need to get this past the WAF.
  • alpha754293 - Thursday, October 4, 2018 - link

    @Ian
    Thank you for this preview/"review".

    In regards to your question in your article about "how do you review a switch anyways?", what I would be looking for would be the full round trip point-to-point latency in a variety of loading conditions as well as raw throughput.

    Sadly though, there aren't very many (if ANY at all) "consumer grade" apps that would ever measure or know anything about that. On the flip side of the coin, only a handful of commercial/enterprise apps would care (mostly database apps), while pretty much ALL HPC and distributed processing/distributed computing apps would care about round trip point-to-point latencies and raw throughput.

    To give you an example, I have a home office where I perform engineering analysis and simulations, and at any given point a single simulation can be pushing ~2 TB of data over the network (where I currently only have a 1 GbE network as my interconnect fabric).

    (There are MPI testing tools to help measure point-to-point network latency and bandwidth; basically you thrash the switch under a number of different loading conditions (i.e. n x n computers talking to each other, or somehow simulating that if you don't actually have n x n computers to play with).)

    I've been looking to upgrade my interconnect fabric for what I do and need/use something like this for at home, and ~$30/port is significantly more affordable than IB EDR (where a 36-port IB EDR switch is $11,525 (~9000 GBP), or approximately $320/port, and the adapter cards can range from almost $400 per port to $660 per port). In other words, VERY expensive. (Granted, IB EDR has a peak theoretical throughput rate that's 10 TIMES that of 10GBase-T, but it's still very expensive.)

    So it would be interesting to read in-depth tests in regards to these topics when reviewing the switch.

    It would also be interesting to see if the switch is managed such that you can aggregate multiple links together for higher performance, and to see how much of an increase in throughput that can deliver vs. just using multiple un-aggregated ports.
  • Fratslop - Monday, June 17, 2019 - link

    Or, you can get yourself a Brocade 6610 for ~$300 on eBay: 48 1-gig PoE ports, 8 10-gig SFP+ ports, and 4 40-gig ports in the back.
