Ian Cutress - Monday, October 7, 2019 - link
"Aren't desktops modular already?"
Technically... yes. 😉
For some reason I'm drawn to this as a concept. Perhaps we're going back to the old days of CPUs on cards.
d0nk3y - Monday, October 7, 2019 - link
Talking about old days, and modular PCs...
I got to have a play around with a Canon AS400 modular PC back in the early 90s. This thing came to market before the IBM PC - it had a 640x480 colour screen. It also featured a metal-balled mouse that connected to the keyboard, which in turn connected to the base unit. The modularity was that you could snap on a hard disk module (5MB I think - maybe 10MB). There were single and dual floppy modules too from memory.
The hard disk tech was very cool too - it used power to hold the heads down over the disk surface, so - in the event of a power failure - the head would actually lift off the disk rather than fall onto it.
Alexvrb - Monday, October 7, 2019 - link
Sounds like they took a laptop mainboard and redesigned it to slot into a backplane. Maybe the reason you're drawn to it is nostalgia? Reminds me of those industrial cards that had a CPU socket and RAM slots, plus IDE connectors, that would slot into a baseplate (which could hold several). Or those ill-fated "CPU upgrade cards" that were basically a motherboard+CPU+RAM on a PCI card.
Lord of the Bored - Monday, October 7, 2019 - link
CPU upgrade cards on ISA made a degree of sense. I mean, there's still the bottleneck that is ISA, but... everything connected to the processor through ISA ANYWAYS so there wasn't really a huge performance loss.
I admit I don't recall the PCI upgrade cards, and my mind is locking up trying to figure out how that would work.
Samus - Tuesday, October 8, 2019 - link
Kind of reminds me of the Apple PowerPC "DOS Compatibility" card they made, effectively an entire 486 PC on a card, with an embedded Sound Blaster Vibra and bus interconnects to share keyboard\mouse\internal speaker with the host PC. Really radical for the time.
Googer - Wednesday, October 16, 2019 - link
For years now, I've been wishing for an Android/ARM (tablet/phone) CPU system on a PCIe card to help bridge the gap between Windows PCs and Android OS apps. Such a card would function like GPU compute and would work with virtualization within the OS to run Android or another OS.
bigfurrymonster - Friday, October 25, 2019 - link
Why? You can emulate Android in a VM on a PC.
Targon - Friday, October 11, 2019 - link
You may not remember the days of the original Athlon and Pentium 3, where the processors were on cards with a special slot. In those days the L2 cache wasn't on the CPU die, so putting it onto a card with the CPU was the solution. There was also the company ALR that had an 80386SX machine with a special slot for a daughter card to add a 386DX to bring it up to 32 bit. Slower than just having a 386DX in the machine in the first place, but it did allow upgrade capabilities.
Gen-Z uses the idea of a system fabric as well, but isn't locked into a given vendor. I've been waiting for AMD to put out a prototype system with Ryzen support for the new design.
futurepastnow - Monday, October 7, 2019 - link
Everything old is new again. This is how the first PCs, like the Altair, worked.
Arnulf - Tuesday, October 8, 2019 - link
My thoughts exactly - this should have gone into the "Truth be told ..." paragraph up there in the article. This thing is hardly a novel concept, it's just that the author is a bit on the young side and cannot possibly remember them.
Oliseo - Tuesday, October 8, 2019 - link
Clearly, when they stated that Razer got there first...
Um, nooope, not by a long shot.
s.yu - Tuesday, October 8, 2019 - link
Yeah I'm also confused by the statement.
Instead of "(This is where a cynic might say that Razer got there first… Either way, everyone wins.)" some explanation might have helped.
secretmanofagent - Wednesday, October 9, 2019 - link
They're referencing Project Christine, which was far more modular: https://www2.razer.com/christine
BedfordTim - Tuesday, October 8, 2019 - link
Industrial PCs have been doing it for decades as well.
cyberguyz - Tuesday, October 8, 2019 - link
The idea is not a new one. I cut my eye teeth hand building computers on the S-100 bus, which used a passive backplane along with separate cards for CPU, memory & system logic, display, I/O and video, back in the mid '70s. With that kind of setup you could build a PC out of just about any CPU architecture, including Intel, Zilog and Motorola processors. Was quite a fun learning experience.
sing_electric - Wednesday, October 9, 2019 - link
Initially I read this not as "I cut my eye teeth hand-building..." but as "I cut my eye, teeth and hand building computers on an S-100 bus," which made me nod my head and think "yep, a lot of those chassis were real pains to work around...."
deil - Tuesday, October 8, 2019 - link
sooo mxm for cpu?
808Hilo - Tuesday, October 8, 2019 - link
CPUs on cards make sense. AMD is doing this already.
29a - Thursday, October 10, 2019 - link
CPUs in the late 90s were on cards from Intel and AMD.
rrinker - Wednesday, October 9, 2019 - link
My very first MS-DOS computer was exactly like this: an 8/4.77MHz 8088 on one ISA card, with RAM and some of the IO. A Zenith Z-158, purchased through my university. I later experimented with a 286 card that plugged in to an available ISA slot and had a 40-pin ribbon cable that plugged in to where the 8088 was pulled, but even at those relatively low speeds, the ribbon cable was a horrible idea. I tried shielding it to no avail, ended up taking it back and just living with the machine with a V20 and an 8087. Skipped 286's altogether; my next machine was a 386-25.
stephenbrooks - Monday, October 7, 2019 - link
Imagine in the future you could hot-swap 2 of these system cards: the first system would mirror its state onto the second and you could then remove the first, giving a system upgrade with no downtime. Also, you could imagine a high reliability version where 2 or 3 system cards have to stay in sync with each other (like redundant systems on spacecraft).
tspacie - Monday, October 7, 2019 - link
You just described a Stratus ftServer from the early 2000s.
duploxxx - Tuesday, October 8, 2019 - link
exactly.... only the OS and drivers were a headache. I remember those good old days.
Stratus was totally blown out of the market by VMware HA... not the same level of protection but good enough for a way lower price...
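The hot-swap upgrade stephenbrooks describes above is essentially state mirroring plus failover. As a toy sketch only - the class and method names below are hypothetical, and real lockstep hardware synchronizes far more than a dictionary - the control flow would look roughly like this:

```python
# Toy illustration of the mirrored-system-card idea: every state change is
# applied to both the active and the standby card before it is acknowledged,
# so the standby is always an exact copy and can take over when the active
# card is pulled. Names are made up purely for illustration.

class SystemCard:
    def __init__(self, name: str):
        self.name = name
        self.state = {}              # stand-in for "everything worth mirroring"

    def apply(self, key, value):
        self.state[key] = value

class MirroredPair:
    def __init__(self, active: SystemCard, standby: SystemCard):
        self.active, self.standby = active, standby

    def write(self, key, value):
        self.active.apply(key, value)
        self.standby.apply(key, value)   # mirror before acknowledging

    def hot_swap(self) -> SystemCard:
        # Pull the old active card; the standby already holds identical state.
        self.active, self.standby = self.standby, None
        return self.active

pair = MirroredPair(SystemCard("old card"), SystemCard("new card"))
pair.write("session", "still running")
survivor = pair.hot_swap()
assert survivor.state == {"session": "still running"}
```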
thunderbird32 - Monday, October 7, 2019 - link
So it's the modern equivalent of the old Intel 8080-based S-100 bus systems from the late 70's?
HStewart - Monday, October 7, 2019 - link
Every desktop system is basically derived in some way from the S-100 bus system design:
S-100 -> ISA -> PCI -> PCIe … 1.0, 2.0, 3.0, 4.0, 5.0....
So here is the big difference with this design - you can start out with a CPU card, a GPU card and possibly an IO card, but later add additional CPU or GPU cards. Furthermore, if they come out with a faster and more powerful version of the CPU card, you can add it - but a question: can you mix and match them? What about different vendors - Intel and AMD CPUs in the same box, or AMD and NVidia GPUs in the same box?
This is a Xeon system, so multiple CPUs are in the pictures - so how about 24 CPU modules all working together, with each CPU module containing multiple cores, 8 or 16 or more depending on the design. Better yet, if one of the modules fails, you replace just that one.
People need to get past the old desktop designs and move toward the future. This is not a '70s design but a 2020 design.
Kevin G - Monday, October 7, 2019 - link
You forgot AGP and PCI-X in there for historical purposes.
The big change was PCI(-X) to PCIe, as that dropped the shared bus in favour of a dedicated point-to-point serial link. The driver model was kept the same to permit rapid adoption and transition, but the underlying layers were all different.
The problem with point-to-point is that for a modular system like this, additional IO slots are either bifurcated from the core card or routed into a massive PCIe switch. Either way it is an easy means to introduce a bottleneck in IO.
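A rough sense of scale for that bottleneck, using the standard per-lane rates and 128b/130b encoding for PCIe 3.0/4.0/5.0; the downstream slot mix in the example is made up purely for illustration:

```python
# Approximate usable PCIe bandwidth of an x16 link, and how badly a backplane
# switch oversubscribes its uplink when it fans out to more downstream lanes
# than it has going back to the host card. Real links lose a little more to
# packet/protocol overhead than shown here.

GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}   # giga-transfers per second
ENCODING = 128 / 130                              # 128b/130b line coding

def x16_gbytes_per_s(gen: str) -> float:
    """One-direction usable bandwidth of an x16 link, in GB/s."""
    return GT_PER_LANE[gen] * ENCODING * 16 / 8   # bits -> bytes

def oversubscription(uplink_lanes: int, downstream_lanes) -> float:
    return sum(downstream_lanes) / uplink_lanes

for gen in GT_PER_LANE:
    print(f"PCIe {gen} x16 ~ {x16_gbytes_per_s(gen):.1f} GB/s per direction")

# Hypothetical backplane: one x16 uplink feeding two x16 slots and one x4 slot.
print("oversubscription:", oversubscription(16, [16, 16, 4]))   # -> 2.25
```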
mode_13h - Tuesday, October 8, 2019 - link
Exactly. Either this architecture will be incredibly limited, or it will require a PCIe switch in the backplane.
IMO, not worth it. Blade servers are already good for what they do. PCs are good for what *they* do. This is a rather pointless waste of time.
What would make more sense is to standardize on some mechanical housing form factors for USB4 devices, so that they can stack nicely or slot into enclosures. That's the way to expand small-form-factor devices like NUCs and mini PCs.
Samus - Tuesday, October 8, 2019 - link
PCI-X servers used to give me nightmares. Specifically the compatibility, or lack thereof. Fortunately it died a quick death in the mainstream...
Apple supported it for a long time after PCIe took the server space in their G3\G4 workstations. Not sure why they loved it so much. But Apple.
sing_electric - Wednesday, October 9, 2019 - link
PCI-X actually made it to the single-core G5s, believe it or not... The answer is probably that they were between a rock and a hard place with what Motorola (and IBM) could give them with chipsets.
Keep in mind, this is when Apple was in love with USB... because it finally meant that they no longer had to either push OEMs hard to make Mac-compatible peripherals or rely on costly, low-volume Mac specialty suppliers like Elgato. The ability for high-end Macs to be able to use off-the-shelf components was a plus, not a minus, back before Apple was the 800 lb gorilla in the room.
sing_electric - Wednesday, October 9, 2019 - link
This. Of course, it may work for a lot of consumer use cases - GPUs don't typically saturate PCIe 3.0 x16 connectors, so really, once you go to 4.0, you'll have a reasonable amount of bandwidth for whatever else you'll need (as long as they keep Thunderbolt on the host card). Seeing as it looks like PCIe 5.0 might happen sooner rather than later, you might, on net, be OK.
escksu - Tuesday, October 8, 2019 - link
Yes, they are available today as blade servers.
mode_13h - Tuesday, October 8, 2019 - link
Huh. I thought you worked at Intel or something.
You completely missed the point about PCIe not being a bus. As such, you can't just have multiple CPU modules that share (or can flexibly remap) the peripherals, and that pretty much blows a hole in your design concept.
Also, the multiple CPU modules probably can't use a fast interconnect like UPI, due to distance, connectors, etc. So, they're connected *how*? 100 Gbps Ethernet? Oh, then I guess you just re-invented a blade server.
Oliseo - Tuesday, October 8, 2019 - link
This was exactly what I was thinking.
It's like fashion, what's old is new again.
GreenReaper - Tuesday, October 8, 2019 - link
I was thinking this exactly. Buses are not new. Computers today are logically based on a tree of buses. Technically most of this can be done already, they just want it in a proprietary form factor.
Jorgp2 - Monday, October 7, 2019 - link
Why tho?
shabby - Monday, October 7, 2019 - link
Something new to sell, new revenue stream, it's the future something something something.
CharonPDX - Monday, October 7, 2019 - link
So I can plug one of these in to my Mac so I can run Windows on it, right?
HollyDOL - Monday, October 7, 2019 - link
Sorry, Mac is beyond redemption :-)))
Total Meltdowner - Monday, October 7, 2019 - link
You got one of those $36k Mac Pros? Baller.
Lord of the Bored - Monday, October 7, 2019 - link
Wouldn't be the first time.
http://www.edibleapple.com/2009/12/09/blast-from-t...
alumine - Monday, October 7, 2019 - link
So, uh, basically similar to industrial PCs (PICMG/COM Express/etc) but in a consumer-friendly form factor...?
MamiyaOtaru - Monday, October 7, 2019 - link
interesting from a modularity standpoint, but ugh on giving up on the cooling potential we have with standard motherboards. Imagine going from a tower cooler or water cooler to that fan.
superunknown98 - Monday, October 7, 2019 - link
There are just so many things wrong with this. Intel has taken a laptop motherboard and put it on a backplane so users won't have to throw away their computer cases? As mentioned, storage will have to be swapped out and Windows/Linux possibly reinstalled with the new element.
As for OEMs, how is support going to deal with generations of Elements plugged into a backplane? Will those Elements be vendor agnostic? Could I plug a Dell element into my HP chassis, and when something goes wrong who provides the support?
I don't think anyone except enthusiasts and businesses would upgrade their computers in this day and age. Most people have moved on from desktops and are happy with laptops and phones. I don't think you will bring them back with the promise of easier hardware upgrades.
mode_13h - Tuesday, October 8, 2019 - link
Yeah, they should focus on USB4-based expansion. How about standard form factors for those peripherals?
nevcairiel - Tuesday, October 8, 2019 - link
This sounds very much like it's designed for OEM business systems, which would still be a huge market.
Many consumers are moving away from having a PC at all. Maybe a laptop, but mostly just portables.
mode_13h - Tuesday, October 8, 2019 - link
Businesses are moving in the direction of laptops + cloud.
Kvaern1 - Wednesday, October 9, 2019 - link
The mobile functions, yes, but a lot of business use cases require zero mobility and may have specialized needs which may be unreasonably expensive to acquire in laptops.
Price/performance still matters. The desktop isn't going anywhere.
Ratman6161 - Tuesday, October 8, 2019 - link
"Most people have moved on from desktops"
Funny, almost everyone I know still has one. Other devices supplement it, but don't replace it. Don't mistake the fact that there is little to no growth in sales as meaning that no one is using them. They are using them, they just aren't buying new ones. The old ones work fine.
I see the OEM PC as being the primary place for this. Enthusiasts don't want it. For example, I may go and buy a Ryzen 3700X and swap it in for my Ryzen 1700 on my X470 motherboard. It's as easy as unplugging one CPU and plugging in the new one. No, I won't get PCIe 4.0, but I'll be using the same RAM, storage and everything else.
For OEMs the picture is different. They could essentially just make a single system and slap a different Element in there depending on what the customer ordered. How will support deal with different generations of elements? What about a Dell element in an HP? Easy: they won't support a configuration they didn't sell. If you want an upgrade to your HP, you will buy it from HP. But as you said, most non-enthusiasts don't upgrade anyway. What this would really be about from the OEM point of view is being cheaper and easier to assemble. It would also be much easier for an OEM to offer made-to-order systems. You ordered an i5 and I ordered an i9? No problem. They just plug the appropriate element into their standard chassis and ship it.
Korguz - Wednesday, October 9, 2019 - link
superunknown98: "Most people have moved on from desktops"
As Ratman6161 said, out of say 20 people I know, maybe 4 have a notebook, 1 has a tablet. I have a notebook, and I barely use it. So no, most people haven't moved away from desktops, at least not those that I know, but their desktops could use an upgrade :-)
nerd1 - Monday, October 7, 2019 - link
All our desktops are (still) modular and desktop cases are dirt cheap too.
doggface - Monday, October 7, 2019 - link
I am kind of interested in this concept for a dual-PC build. If you could mount your NAS + drives in the same chassis and via PCIe have power + a 10Gb Ethernet connection to the host system, as well as external connections, you could externalise the load without needing a new PSU/case.
UpSpin - Tuesday, October 8, 2019 - link
It's already there, just Google for Dell VRTX.
Kevin G - Monday, October 7, 2019 - link
Kinda weird that Intel is pursuing this idea, as this isn't their first time. As pointed out, the VCA cards throw several CPUs onto a card. But Intel did the same thing to a degree with the first commercial Xeon Phi. There is also the Open Pluggable Specification (OPS), which is designed for more commercial video applications but is a similar concept where the host PC is on a removable modular card.
I will say that a card like this is interesting if Intel would permit the usage of a 'sub host' for virtualization environments. It'd provide some physical separation in terms of execution domains while permitting IO to be shared or dedicated based upon the card. With the recent spate of Intel security issues, this would provide another layer of protection for guest OSs. It'd also let VM farms mix latency-sensitive/serial workloads that benefit from high clocks (which benefit from the >4 GHz consumer parts more) into the wider, more throughput-oriented architecture of traditional servers.
For the average mass market consumer, the only niche I see this fulfilling is the video streamer who wants to do something like gaming and encoding on the same box. This would certainly help there, but it is difficult to imagine another scenario where this would be ideal.
I also see the usage of a normal PCIe slot as only beneficial to the consumer market. Realistically Intel should be leveraging SFF-TA-1002 with a high-power connector for server usage. Being able to pop these out of server hot-swap bays simplifies things greatly, at least in terms of node expansion. The high-power connector, at +48 V, can provide some truly insane amounts of power - 1 kW in a single slot, and ~650 W at +12 V. These are also rated up to 112 Gbit per serial lane using PAM4 encoding (see PCIe 6.0).
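For scale, the arithmetic behind those figures; the power and lane-rate numbers are simply the ones quoted above, so treat the connector ratings as assumptions to be checked against the SFF-TA-1002 spec rather than verified values:

```python
# Current draw implied by the power figures quoted above, plus raw per-lane
# throughput for a 112 Gbit/s PAM4 lane. Back-of-the-envelope numbers only.

def amps(power_w: float, volts: float) -> float:
    return power_w / volts

print(f"1 kW slot at 48 V  -> {amps(1000, 48):.1f} A through the connector")
print(f"650 W slot at 12 V -> {amps(650, 12):.1f} A through the connector")

# 112 Gbit/s per lane is the raw rate; PCIe 6.0-style FLIT/FEC overhead trims
# the usable payload somewhat.
print(f"112 Gbit/s lane    -> {112 / 8:.0f} GB/s raw per lane")
```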
mode_13h - Tuesday, October 8, 2019 - link
> Intel did the same thing to a degree with the first commercial Xeon Phi.
No, I don't think so. Did it talk to other peripherals over PCIe? I highly doubt that.
The Phi add-in cards were just self-contained accelerators that happened to be built around x86-64 CPUs.
alufan - Tuesday, October 8, 2019 - link
Not weird but calculated. Intel and other similar suppliers do this for repeat sales and to lock you into a deal - think Gillette and razors. Once you buy the basic idea you're trapped into the whole system unless you spend big to leave, and of course you have a nice Intel account manager to keep you in line with offers and discounts to further reduce your horizon. That's why Intel has a whole ecosystem for you to buy into. AMD really is bringing the fear.
escksu - Tuesday, October 8, 2019 - link
Oh, it's not a new concept. CPU backplanes have been around for a long time. They're still here today but not really popular. Below is an example of a modern one.
https://www.ieiworld.com/_upload/news/images/PEMUX...
danielfranklin - Tuesday, October 8, 2019 - link
Isn't this really more of a blade server?
Or does the PCI-E interface enable some sort of inter-system transport for a purpose I'm not thinking about?
Gadgety - Tuesday, October 8, 2019 - link
I can see the advantage to Intel of coming up with new products. I'd rather see the industry establish a new form factor that enables smaller (than M-ATX) footprint dual PCI-E motherboards. I would have liked to have seen some specific dimensions for the Element.
mode_13h - Tuesday, October 8, 2019 - link
The only way I'd support this is if it had some advantage over current PCs, such as cooling.
Otherwise, I see it as just a way for parts makers to boost margins at the expense of weight, bulk, and reduced selection, only to benefit a relatively small number of users who are unwilling to open a standard PC case.
PCs are modular enough. This is just wasteful.
edzieba - Tuesday, October 8, 2019 - link
It's an oooooold system layout. Blade servers, the ol' CompactPCI setup, etc.
Not sure why Razer's fancy render got singled out for praise, they didn't even build one!
29a - Tuesday, October 8, 2019 - link
Is this Intel's answer to changing sockets every generation? I'll stick with AMD. This just seems like a way for Intel to make sure that more of the percentage of a computer's cost goes in their pocket.
GNUminex_l_cowsay - Tuesday, October 8, 2019 - link
Yo dawg, I heard you like computers. So put a computer in your computer so you can ??? profit?
I am completely lost on what the use case for this is.
alpha754293 - Tuesday, October 8, 2019 - link
So... this is just a more advanced/updated version of Sun Microsystems' Penguin.
https://www.hardwarejet.com/sun-microsystems-375-0...
alpha754293 - Tuesday, October 8, 2019 - link
Razer didn't get their first.
SunPCi was out as early as October 1999. (Source: https://docs.oracle.com/cd/E19085-01/pci1.card/806...
The co-processor boards used to plug into Sun Microsystems' UltraSPARC systems (you can read this installation guide for the latest version/iteration of it and what systems still support them: https://docs.oracle.com/cd/E19085-01/pci3.card/817...
The host Sun Microsystems UltraSPARC system would run the SPARC version of SunOS (Solaris), and then you could boot up the co-processor card and run kind of whatever you want on it.
I had the SunPCi IIpro which had a Celeron 733 MHz processor so I threw Windows on it and CATIA V5 for Windows. It was super slow because the framebuffer was super slow, but it TECHNICALLY worked.
That way, you could have a SPARC system running the SPARC V9 RISC ISA AND running the x86 ISA at the same time, off the same machine.
alpha754293 - Tuesday, October 8, 2019 - link
*edit*
dammit!!
*there
andrewaggb - Tuesday, October 8, 2019 - link
I'm very skeptical this will go anywhere in the enthusiast or desktop space. I expect if it ever is released that it'll be priced high and not be any more practical than a traditional mb/cpu combo. I'd love to be surprised.
Dug - Tuesday, October 8, 2019 - link
Since you can only have one master, and all other cards plug in like you would plug into a regular motherboard PCIe slot, what benefit does this provide?
alufan - Tuesday, October 8, 2019 - link
Imagine someone buys one and plugs it into a Ryzen X570 board... and it beats the Intel system because it has PCIe 4! Then again it's probably limited to Intel board speeds... just saying.
ballsystemlord - Tuesday, October 8, 2019 - link
You asked to have RGB lighting? Isn't that a sin? :)
phoenix_rizzen - Wednesday, October 9, 2019 - link
CPU, GPU, RAM, storage. What, exactly, is left in the case this plugs into?
You want to upgrade, you replace... basically the entire system? How is this useful? What are you saving?
You're essentially slotting an entire laptop inside a desktop case, but for what? The case provides power and ...?
Threska - Wednesday, October 9, 2019 - link
"On the card was also two M.2 slots, two slots for SO-DIMM LPDDR4 memory, a cooler sufficient for all of that, and then additional controllers for Wi-Fi, two Ethernet ports, four USB ports, a HDMI video output from the Xeon integrated graphics, and two Thunderbolt 3 ports."
Approaching from different directions: SBCs and their like, such as the Raspberry Pi, are approaching the problem from a different direction, but with a similar goal. More ports and one gets closer.
zmatt - Wednesday, October 9, 2019 - link
So it's the return of S-100. This idea is literally older than the PC is. Many of the first home computers, like the Altair, used this concept. It has some advantages and some disadvantages. The biggest pro is that the level of expandability and versatility is greater than conventional ATX. You aren't tied to one motherboard. The downside is that it's far less space-efficient. Now everything is a card and the motherboard is simply a bunch of slots. Not great on room, but it's a simpler layout. For enthusiasts and enterprise it's better, but for consumer machines it's likely worse.
sing_electric - Wednesday, October 9, 2019 - link
So the advantage is that you don't have to shop for a motherboard... on the other hand, you lose the flexibility of having a board with the # of RAM, NVMe, and USB (including Thunderbolt) slots you want, and probably severely limit the CPU and cooling options you can have, since all those (except maybe USB if you nix Thunderbolt) will need to be built in to the 'host' Element.
So this is a great solution that's not uncommon in servers, but for consumer PCs, is really only useful if your use case is "I can't survive off integrated graphics, but don't want to have to deal with a motherboard."
JohnMD1022 - Thursday, October 10, 2019 - link
Nothing revolutionary at all.
The Altair 8800 was a backplane-based S-100 computer, as were many of the early systems.
Check the Byte archives for articles relating to the Cromemco S-100 systems used by Jerry Pournelle.
zodiacfml - Thursday, October 10, 2019 - link
Yes, but it should be turned around so that the focus on large/aftermarket cooling is on the GPU. Recycle or reuse existing CPU coolers for GPUs.
Targon - Friday, October 11, 2019 - link
Intel may have gotten a clue that AMD supported the Gen-Z consortium for a good reason, so now wants to figure out something to compete conceptually. Server racks have used blades for a long long time now as well, so moving that concept to add-in cards for EVERYTHING isn't terribly innovative.
Microbits - Saturday, October 12, 2019 - link
This would make the perfect upgrade for the old Mac Pro 5,1: build it onto a tray, pop it in, and you will be able to upgrade your Xeon past the X5690 to newer, faster models with more cores and threads at a fraction of the cost of a new Mac Pro. There might be a trick in the old dog yet.
2tacos99cents - Sunday, October 13, 2019 - link
Umm... Am I in the upside down? Why does Intel think technology from decades past is somehow the technology of the future? This is bizarre.
peevee - Monday, October 14, 2019 - link
How is it modular when there is everything but video on the card itself?
twtech - Friday, October 25, 2019 - link
My version of this idea was a server rack type of thing you'd have in a closet somewhere, where you plug in modular pieces to fit your needs for storage, computing power, etc. It's also where your internet and/or TV would come in.
And then you have high-bandwidth wireless transmitters in every room that your monitors, TVs, wireless speakers, input devices, etc., connect to.
If you want to add something new, all you have to do is plug it in, connect it to the network, and then let the server know where it is (or have the system figure it out through triangulation) and what usage groups it should be added to.
Gugge - Friday, January 17, 2020 - link
Would it be possible to use a card like this for extra rendering power when editing movies?
peterm42 - Tuesday, June 9, 2020 - link
Ah! Exactly the same principle as the Honeywell 716 mini-computer from the 1970's.
Nice to know Intel are finally catching up with old technology.