Intel Starts to Close Omni-Path: OPA1 Xeon CPUs on EOL, OPA2 Axed
by Anton Shilov on October 10, 2019 8:30 AM EST
Posted in: Xeon Phi, Xeon Scalable
Intel this week announced plans to discontinue its 1st Generation Xeon Scalable processors with Omni-Path interconnect a year from now. With no 2nd Generation Xeon Scalable products supporting the technology announced to date, and plans for the 200 Gbps Omni-Path fabric already cancelled, Intel is winding down the whole project.
Early in the decade, Intel acquired Cray’s interconnect assets as well as QLogic’s InfiniBand technology in a bid to build its own fast, highly scalable, low-latency interconnect for datacenters and supercomputers. The result of Intel’s design efforts was the creation of the Omni-Path network fabric, developed primarily with Intel Xeon Phi-based HPC systems in mind. Indeed, Intel’s 2nd Generation Xeon Phi (Knights Landing) processors were the first to get Omni-Path, and eventually the technology found its way into Xeon Scalable F-series CPUs as well as add-in cards for regular Xeon systems.

Meanwhile, a lot has changed since 2012, when the longer-term roadmap for Omni-Path was set. Intel’s Xeon Phi products have been discontinued and their underlying MIC architecture appears to be gone. In addition, a host of new interconnect technologies now compete with Omni-Path. As a result, earlier this year the company cancelled development of its 2nd Generation Omni-Path interconnect, which promised speeds of up to 200 Gbps, and reportedly advised its customers not to start new designs using the OPA 100 technology.
This week Intel said that it would discontinue its Xeon Gold 5117F, Xeon Gold 6126F, Xeon Gold 6130F, Xeon Gold 6138F, Xeon Gold 6142F, Xeon Gold 6148F, Xeon Platinum 8160F, and Xeon Platinum 8176F processors. These are the first generation Xeon Scalable processors with OPA built into the package. The company’s partners have to place final orders for the CPUs by April 24, 2020, and the final chips will ship on October 9, 2020.
It remains to be seen whether Intel will commit to developing a new high-speed interconnect for HPC in the near future, or will rely on InfiniBand HDR 400G or other technologies for its next-generation supercomputer designs.
- Exploring Intel’s Omni-Path Network Fabric
- Intel @ SC15: Launching Xeon Phi “Knights Landing” & Omni-Path Architecture
- A Few Notes on Intel’s Knights Landing and MCDRAM Modes from SC15
Comments
Kevin G - Thursday, October 10, 2019
Not unexpected, but a bad move. On-package fabric was one of the few niche reasons to go with a Xeon SP build over the second-generation Epyc chips. Now the list is down to three niche scenarios: quad/octo-socket support, Optane DIMM support, and on-package FPGA. Support for AVX-512 is a plus for the Xeon lineup, but even then it isn't enough to win some benchmarks against the new Epyc chips. 2020 is going to be a very rough year for Intel.
lefty2 - Thursday, October 10, 2019
Optane DIMM is also dead-end tech. DDR4 memory prices have come down so much that it's now more expensive to equip a server with Optane memory than with regular DIMMs.
Jorgp2 - Thursday, October 10, 2019
But that's not persistent.
GreenReaper - Thursday, October 10, 2019
But it's a lot *more* expensive than a persistent SSD, which is also available in much larger, easily-swappable units. See how that works?
davej2019 - Friday, October 11, 2019
An SSD is not byte-addressable and also has significantly higher latency, even over NVMe. SSDs are a completely different storage tier. Also, you can't get DDR4 in the capacities that Optane DIMMs provide. In the database world this is significant.
tuxRoller - Thursday, October 10, 2019
Or as dense.
Rοb - Friday, October 11, 2019
You can buy NVDIMM for 4x the price of ECC RDIMM ( https://www.mouser.com/Embedded-Solutions/Memory-D... ) or 3x the cost of Optane, but it runs at full speed and works like regular memory (though the OS needs to know about and support NVDIMM when the UPS kicks in). If you have an in-memory database or anything else that needs to be rebuilt after a reboot, using persistent memory saves a LOT of downtime. It's not the cost that is the concern; it's failure (or downtime) that is the cost.
KAlmquist - Friday, October 11, 2019
I haven't found any benchmarks showing a quad-socket Xeon system beating a dual-socket Epyc 7742 (which has 128 cores), so I'm not sure that quad-socket support is a reason to buy Intel. Of course, Intel has 8-socket systems as a niche where AMD doesn't have an equivalent offering.