... In the Cloud

In the early days of ARM server hype, the term "micro server" was used a lot. Then the term became associated with "wimpy cores" and marketing people avoided it at almost any cost. But it might make a comeback now that developers are writing more and more micro services, a way of breaking complex software down into small components that each perform a distinct task.

One of the cool things micro services make possible is software that scales out horizontally very quickly when necessary, but runs on instances/virtual machines/servers with modest resources when that is not the case. This helps a lot to keep costs down and performance high when you are running on top of a public cloud. In other words, public clouds encourage this kind of development.

Granted, at the moment micro services mostly run inside virtual machines on top of the brawnier Xeon E5s. But it is not too far-fetched to say that some of the I/O intensive micro services could find a home in a cheaper and "right sized" physical server. All these micro services need low latency network connections, as one of the disadvantages of micro services is that software components talk to each other over the network instead of exchanging data and messages in RAM.
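To make that last point concrete, here is a minimal sketch of our own (plain Python standard library, nothing specific to any of the platforms discussed here): the same lookup is performed once as an in-process function call and once as an HTTP request to a tiny, hypothetical "price" service on the same machine. The service, port and data are purely illustrative; the gap between the two timings is exactly what a low latency fabric tries to keep small.

    # Illustrative sketch: in-process call vs. the same call over the network.
    import json
    import threading
    import time
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def lookup_price(sku: str) -> dict:
        # In-process version: a plain function call, data stays in RAM.
        return {"sku": sku, "price_cents": 1999}

    class PriceHandler(BaseHTTPRequestHandler):
        # Micro service version: the same lookup exposed over HTTP.
        def do_GET(self):
            body = json.dumps(lookup_price(self.path.strip("/"))).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the demo quiet

    # Port 8081 is arbitrary; the "service" runs in a background thread.
    server = HTTPServer(("127.0.0.1", 8081), PriceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    start = time.perf_counter()
    lookup_price("ABC-123")
    print(f"in-process call:         {(time.perf_counter() - start) * 1e6:.0f} us")

    start = time.perf_counter()
    with urllib.request.urlopen("http://127.0.0.1:8081/ABC-123") as resp:
        json.load(resp)
    print(f"HTTP micro service call: {(time.perf_counter() - start) * 1e6:.0f} us")

    server.shutdown()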

And of course, webfarms moved to this kind of architecture well before the rise of micro services. Caching servers, static and dynamic webservers, and databases all run on separate machines. The distributed architecture of these webfarms craves fast, low latency networking.

The Silver Lining

Remember our coverage of the first ARM based server, the Calxeda based server? The single threaded performance was pretty bad, the pricing was too high, and the 32 bit A9 cores limited each server node to a paltry 4 GB. But the low power, high performance network fabric and the server node technology delivered a pretty amazing performance/watt ratio and very low network latency. Calxeda's fabric simply came too early, as the ARM SoCs were not good enough at that time. An A15 based ECX-2000 was developed as a stopgap measure, but Calxeda ran out of money. That was not the end of the story, however.

Yes, Silver Lining has bought up the IP of Calxeda. The current offering is still based upon the ECX-2000 (A15 cores). Once they adopt the Opteron A1100, the "Calxeda fabric" will finally be freed from its old 32 bit ARM shackles.

And we don't have to wait for a "fully clustered server": Silver Lining also has the FIA-2100 fabric switch available as a PCIe card. Basically, you can now have a Calxeda-style cluster, but at rack level.

You buy one top of rack (ToR) switch (the light blue bar above) and 12 FIA fabric switches to make a cluster of 12 servers. You connect only one out of three servers to the ToR switch and interconnect the other servers via the FIA-2100 NICs. The (programmable) intelligence of the FIA-2100 fabric then takes over and gives you a compute cluster with very low latency, redundancy, and failover at a much lower cost than traditional networking, just like the good old Calxeda based clusters. At least, that is what Silver Lining claims, and we will give them the benefit of the doubt: it is definitely an elegant way to solve the networking problem.
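To put some rough numbers on that claim, here is a back-of-the-envelope sketch of our own (the counts come only from the 12-server example above, not from anything Silver Lining has published):

    # Illustrative only: rough port count for the 12-server rack described above.
    servers = 12
    uplink_ratio = 3                       # one out of three servers gets a ToR uplink
    tor_ports_traditional = servers        # classic design: every server hangs off the ToR
    tor_ports_with_fabric = servers // uplink_ratio
    fabric_nics_needed = servers           # one FIA-2100 card per server

    print(f"ToR ports, traditional design: {tor_ports_traditional}")
    print(f"ToR ports, FIA-2100 fabric:    {tor_ports_with_fabric}")
    print(f"FIA-2100 NICs needed:          {fabric_nics_needed}")

In other words, the ToR switch shrinks (or serves three times as many racks), while the east-west traffic stays inside the fabric.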

The FIA-2100 NIC is supported on the new A1100 platform. However, it is not all good news for AMD and ARM. This technology used to be limited to ARM SoCs, but now that the Calxeda fabric is PCIe technology, it will also work with Intel x86 servers. There is a good chance that the first "Calxeda fabric based cluster in a rack" will be powered by Xeon Ds.

We may assume, though, that the "non-rack" or "inside one server" product of Silver Lining will most likely be A1100 based, as their current product is also ARM based.

So there is a chance that the AMD A1100 will find a home in its own "Moonshot-like" chassis: a silver lining in the dark clouds of delays.

Conclusion

So how do we feel about the A1100? It is late, very late. The expected performance and power consumption are most likely not competitive with what Intel has available, let alone with what Intel will launch in a few months. But at last, AMD has managed to launch a 64 bit ARM server SoC that has the support of all major Linux distributions and can benefit from all the progress the Linux community makes, instead of relying on a specially adapted distribution.

The most important things, such as ACPI and PCI Express support, seem to be working. AMD has paid a high "time to market" price for being a 64 bit ARM server pioneer: the A1100's schedule suffered from the teething problems of the ARM server ecosystem. Still, the A1100 might be a good way to finally kickstart the ARM server market. Thanks to the Linaro "96Boards Enterprise Edition", a $300-400 SoC + board should be available soon, making it much cheaper to build software for the 64 bit ARM ecosystem. And thanks to Silver Lining, complete clusters of A1100 servers might get the attention of the cloud providers.

This may pay off in the near future, on the condition that the K12 core is delivered in a timely manner (2017). Because at the end of the day there are no excuses left for AMD or for ARM: if ARM servers are to be successful, they will finally have to deliver, instead of promising dreamed-up server market share percentages at an ever more distant future date.

Comments

  • eldakka - Friday, January 15, 2016 - link

    With 14 SATA ports, I wonder how this would perform as a ZFS-based NAS server?
  • beginner99 - Friday, January 15, 2016 - link

    True, but you could just stick an expansion card into a Xeon-D server if you need the CPU speed.
  • beginner99 - Friday, January 15, 2016 - link

    This thing probably sucks, but that isn't surprising. I never got the micro server hype. It also doesn't make sense when you can run stuff in VMs on beefier CPUs and get better performance/watt and $.
  • The_Assimilator - Friday, January 15, 2016 - link

    "But there is more than meets the eye or we would not bother to write this article."

    And then at the end:

    "So the new AMD SoC has no performance/watt advantage and no price/performance advantage over Intel's offerings."

    AMD has failed, again.
  • Minion4Hire - Friday, January 15, 2016 - link

    That's not the end. You have failed to read two entire pages if you think that is the end of the article.
  • duploxxx - Friday, January 15, 2016 - link

    No, it does not suck. It is based on ARM, so it is not fully comparable to an x86 socket.

    AMD had to release this socket with the default A57 to pave the road for their next gen K12 ARM core.
    With this platform they are able to get the initial go-ahead for drivers/support, etc.
  • bill.rookard - Friday, January 15, 2016 - link

    I would disagree with that comment about performance per watt. In certain fairly common use cases, such as a storage server, compared to the C2000 series you would not only have to add a RAID card to the Intel setup, but also provide dual 10Gb NIC ports. Both of those cards will add cost and wattage to the overall system...
  • Xeus32 - Friday, January 15, 2016 - link

    Dear Anandtech,
    I'm very disappointed by this review.
    I have had an Atom server for a long time, and the approach to power consumption used in a standard review is not right when you are evaluating a system below 50W.
    A single hard disk can consume from 5W (if you use a low performance, high density disk, e.g. a 2.5" Western Digital Red) to 10W (if you use a high performance hard disk).
    We also need RAM, which consumes roughly 3W per bank.
    In my system (Atom with 2GB DDR2 RAM, an LSI controller and three 3.5" hard disks in RAID 5) the motherboard alone consumes around 35W and the hard disks consume about the same. The power consumption of the CPU is reported as 10W.
    I don't want to single out this processor, but 40W or 20W is the same thing to me, because if we add 4 hard disks to our hypothetical system, the storage consumes more power than the CPU.
  • JohanAnandtech - Saturday, January 16, 2016 - link

    20W more or less per server node is a lot in a system like the HP Moonshot, where you have 40+ nodes in a high density system. It means that your 1 kW cluster now needs 1.8 kW.

    And I do make the point that it is less of an issue in a storage rich system and that AMD might have a chance there.
  • mosu - Friday, January 15, 2016 - link

    So an upgrade to A72, 14nm and USB 3.0 or 3.1, with actual SATA and PCI, will make a great chip someday, now that the road has been opened. Maybe a 16 core A72 will do even better...
