Stage 2: Datacenter

Intel’s Data Center Group SVP, Navin Shenoy, also took to the stage at CES to discuss some new products in Intel’s portfolio, as well as to deliver updates on ones disclosed last year. Back in August 2018, Intel held its Datacenter Summit, where it lifted the lid on Cascade Lake, Cooper Lake, and 10nm Ice Lake. Alongside those disclosures, new instruction support for AI and security were the top two areas of discussion.

Cascade Lake: Get Yours Today

Intel’s first generation of Xeon Scalable processors, Skylake-SP, launched over 18 months ago. We’ve been hearing about the update to that family, Cascade Lake-SP, for a while now, along with its sibling Cascade Lake-AP and how the pair will tackle the market. Today’s announcement from Intel is that the company is now shipping Cascade Lake for revenue.

This means, to be crystal clear, that select customers are now purchasing production-quality processors. What this doesn’t mean is retail availability. These select customers are part of Intel’s early sampling program, and have likely been working with engineering samples for several months. These customers are likely the big cloud providers, the AWS / Google / Azure / Baidus of the world.

It’s worth pointing out that at Intel’s Datacenter Summit, the company said that half of all the Xeons it sold were ‘custom’ processor configurations that were not sold through its distributors – these parts are often described as ‘off roadmap’. It is likely that when Intel says Cascade Lake-SP is shipping for revenue to select customers, those customers are purchasing these off-roadmap processors. They might run at a higher TDP than Intel expects for the commercial parts, or have different core/cache/frequency/memory configurations as and when they are needed.

The big draws for Cascade Lake are Intel’s Optane DC Persistent Memory support, which will enable several terabytes of memory per socket, and the in-hardware mitigations for Spectre v2. Businesses that want to be sure their hardware is protected have stronger guarantees when the fix is baked into the hardware, rather than relying on a firmware/software stack. This might be part of why Intel’s demand for 14nm CPUs is at an all-time high and outstripping supply – if a company wants to be 100% sure it is protected, it needs the hardware with baked-in security.
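
For context, a Linux administrator can already see which Spectre v2 mitigation is actually in play on a given box – hardware, microcode, or a retpoline-style software workaround – by reading the kernel’s report. Here is a minimal sketch (assuming a reasonably recent Linux kernel exposing the standard sysfs vulnerabilities interface; illustrative only, not an Intel-provided tool):

# Minimal sketch: query the kernel's reported Spectre v2 mitigation status.
# Assumes a Linux host exposing /sys/devices/system/cpu/vulnerabilities/.
from pathlib import Path

VULNS_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status(name: str = "spectre_v2") -> str:
    """Return the kernel's one-line status string for the named vulnerability."""
    path = VULNS_DIR / name
    if not path.exists():
        return "unknown (older kernel or non-Linux system)"
    return path.read_text().strip()

if __name__ == "__main__":
    # Typical output mentions IBRS/IBPB, retpolines, or 'Enhanced IBRS',
    # hinting whether protection comes from hardware, microcode, or software.
    print("spectre_v2:", mitigation_status())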

The full retail launch of Cascade Lake is expected in 2019. Based on what we saw at Supercomputing in November, given by a rolling slide deck at the booth of one of Intel’s OEM partners, that time frame looks to be somewhere from March to May.

Nervana for Inference: NNP-I coming in 2019

To date, when Intel has discussed the Nervana family of processors, we have only known about them in the context of large-scale neural network acceleration. The idea is that these big pieces of silicon are designed to accelerate the types of compute commonly found in neural network training, at performance and power efficiency levels above and beyond what CPUs and GPUs can do. Intel has disclosed that it has been working on that family of parts, the NNP-L, for a while now, and we are still waiting on a formal launch. In the meantime, Intel is announcing today that it is also working on a part optimized for inference.

There are two parts to implementing machine learning with neural networks: making the network learn (training), and then using the trained network on new information to do its job (inference). The algorithms are often designed such that the more you train a network, the more accurate it becomes, and sometimes the less computationally intensive it is to apply to a new problem. The more resources you put into training, the better. But the compute required for training and inference differs by several orders of magnitude: you need a big processor for training, but you don’t need one for inference. This is where Intel’s announcement comes in.
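
To make that gap concrete, here is a toy sketch – a hand-rolled two-layer network in NumPy, purely illustrative and unrelated to any Nervana hardware – where training loops over the whole dataset many times doing a forward and a backward pass, while inference is a single forward pass on one new sample:

# Toy illustration of the training-vs-inference compute gap.
# A tiny two-layer regression network in NumPy; numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))        # training inputs
y = rng.standard_normal((10_000, 1))         # regression targets
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((32, 1))

def forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)              # ReLU hidden layer
    return h, h @ w2

# Training: many epochs, each doing a forward AND a backward pass over all data.
lr, epochs = 1e-4, 50
for _ in range(epochs):
    h, pred = forward(X, W1, W2)
    err = pred - y
    dW2 = (h.T @ err) / len(X)               # gradient w.r.t. W2
    dh = (err @ W2.T) * (h > 0)              # backprop through the ReLU
    dW1 = (X.T @ dh) / len(X)                # gradient w.r.t. W1
    W2 -= lr * dW2
    W1 -= lr * dW1

# Inference: one forward pass on a single new sample -- orders of magnitude
# less arithmetic than the training loop above.
x_new = rng.standard_normal((1, 64))
_, prediction = forward(x_new, W1, W2)
print(prediction)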

The NNP-I is set to be a smaller version of the NNP-L, built specifically for inference, with Intel stating that it will be coming in 2019. Exact details are not being disclosed at this time, so we don't have any information on the interface (likely PCIe), power consumption, die size, architecture, etc. However, we can draw some parallels from Intel’s competition. NVIDIA has big Tesla V100 GPUs with HBM2 for training that can draw 300-350W each, with up to eight of them in a system at once. For inference, however, it has the Tesla P4, a small chip below 75W, and we’ve seen systems designed to hold 20 of NVIDIA's various inference processors at once. It is likely that this new NNP-I design is along the same lines.
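
For a rough sense of scale using those public TDP figures (assumed values for illustration, not measurements of any specific system):

# Back-of-the-envelope accelerator power budgets, using the TDP figures above.
# Purely illustrative; real systems vary with cooling, host CPUs, and workload.
train_gpus, train_tdp_w = 8, 350       # e.g. eight Tesla V100s in one training box
infer_cards, infer_tdp_w = 20, 75      # e.g. twenty sub-75W inference cards

print(f"Training accelerators:  ~{train_gpus * train_tdp_w / 1000:.1f} kW")
print(f"Inference accelerators: ~{infer_cards * infer_tdp_w / 1000:.1f} kW")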

Snow Ridge on 10nm: An SoC for Networking and 5G (Next-Gen Xeon-D?)

The Data Center Group will be making two specific announcements around 10nm. The first is disclosing the Snow Ridge family of processors, focused on networking and specifically targeting the wide array of 5G deployments coming up over the next decade. The purpose of Snow Ridge is to enable wireless access base stations and deployments, as well as functions required at the edge of the network, such as compute, virtualization, and potentially things like artificial intelligence.

Intel gave no other details; however, thinking back, I realise that we’ve heard this from Intel before. The company already has processors on its roadmap focused specifically on networking, with 40 GbE support and features like QuickAssist Technology to accelerate networking cryptography: the Xeon D line of processors. This makes me believe that Snow Ridge will be the name for the next generation of Xeon D, either the Xeon D-2500 or Xeon D-3100, depending on the power envelope Intel is going for.

Given this assumption, and the fact that Intel has said that this is a 10nm processor, I suspect we’re looking at a multi-core Sunny Cove enterprise design with integrated networking MACs and support for lots of storage and lots of ECC memory. There’s an outside chance that it might support Optane, allowing for bigger memory deployments, although I wouldn’t put money on it at this stage.

Ice Lake Xeon Scalable on 10nm

To finish up Intel’s announcements, Navin also talked about Ice Lake Xeon Scalable. At Intel’s Architecture Day, a processor described as Ice Lake Xeon was shown, so this is Intel repeating the fact that it now has working silicon in the labs. There is still no word as to how Intel is progressing here, with question marks over the yields of the smaller dies, let alone the larger Xeon ones. Working silicon in this case is just a functional test to make sure the design works – what comes next is the tuning for frequency, power, and performance, and optimizing the silicon layout to get all three. I’m hoping that Intel keeps us apprised of its progress here.

What Happened at CES 2018, and why CES 2019 is Different

A memory that will stick in my mind is Intel’s CES 2018 announcements. At the heart of the show, we wanted to know about the state of Intel’s 10nm process, and details were not readily available. 10nm wasn’t mentioned in the keynote, and when I tried to ask then-CEO Brian Krzanich about it, another Intel employee hastily cut into the conversation to say that nothing more would be said. In the end we got a single sentence from Gregory Bryant at an early morning presentation the day after the keynote, and only after 10 minutes of him saying how well Intel was executing. That single sentence was to say that Intel was shipping 10nm parts in 2017, although so far only two consumer products (in limited quantities and specific regions) have ever been seen.

This year, coming off the back of its Architecture Day last month, Intel is becoming more open to discussing future products and roadmaps. A lot of us in the press and analyst community are actively encouraging this trend to continue, and the contrast between CES 2018 and CES 2019 is clear to see. Companies tend to hide or obfuscate details when product execution isn’t going to plan; now that Intel is starting to open up with details, the outlook is clearly returning to one of more optimism.

Comments

  • PeachNCream - Tuesday, January 8, 2019 - link

    In the original Legend of Zelda, the Overworld was the exterior areas above any dungeon in which a piece of the Triforce was stored. I presume this means that Intel is ensuring software support for such top-level areas, but refuses to actually do anything to reassemble the Triforce.
  • KPOM - Tuesday, January 8, 2019 - link

    Any word on when the Ice Lake-Y chips will be available, and whether they will support LP-DDR4X and Thunderbolt 3?
  • sbrown23 - Tuesday, January 8, 2019 - link

    "A lot of discussion has been held that this was an Apple request, and given Apple’s device portfolio, its volume of sales, and its desire to drive down power with optimized unique designs, the argument for Apple holds some water. But it doesn't sit right with me. This is more a low-powered chip, perhaps even lower power than the A12X in the iPads, so I don’t think Apple would want that chip in one of its MacBooks."

    I'm going with Microsoft. Seems perfect for a next-gen Surface Go. Current one is clocked too low, weak GPU, and dual-core with no HT. The Foveros 10nm SoC would give them an improvement in so many ways: lower idle power draw, probably a better GPU, increased core count with the Tremont cores, and likely a much higher clock on the Sunny Cove core for single-core burst duties than they have on the current 1.6GHz Pentium Gold.

    Given that Qualcomm is a long way from matching Apple's A12 performance, Lakefield in Surface Go seems like a perfect fit.
  • ilkhan - Tuesday, January 8, 2019 - link

    *yawn*.

    Wake me up when Intel actually releases a meaningful upgrade instead of yet another 2-3% of worthless.
  • iwod - Tuesday, January 8, 2019 - link

    We will have to wait for Lakefield pricing. If it is sub $50, maybe we could see an $899 MacBook.
  • peevee - Tuesday, January 8, 2019 - link

    "This is surprising, given that Intel is usually conservative with supported memory speed declarations – they still make a DDR4-2933 processor in their lineup – so jumping to 3200 would indeed be an unexpected shift for the company."

    Frequency increase of 9% is not jump, it is a tiny half-step. But LPDDR4x instead of hopelessly outdated and power-hungry DDR4 is a very different matter, both in mobile and (would be) server space.
  • Hixbot - Tuesday, January 8, 2019 - link

    Are there no moderators here? I come to the comments for thoughtful tech discussion on the article, usually the author joins in. Instead, I find walls of text ranting and screaming.
  • DeepLearner - Wednesday, January 9, 2019 - link

    Seriously, the first half of this comment section is worthless.
  • zodiacfml - Tuesday, January 8, 2019 - link

    Looks like AMD will own the desktop CPU market for several months. Good for AMD but not for pricing. They will charge more than they used to.
  • Ozymankos - Thursday, January 17, 2019 - link

    AMD already has 64 core and 128 thread processors on 10 nm (which they call 7 nm :)))
    and Intel now gets big money with their non-linear chips: 12 cores, 18 cores, etc.
