It would now appear that we are saturated with two-phase immersion liquid cooling (2PILC), pun intended. A recurring theme at the annual Supercomputing trade show, along with the odd system at Computex and Mobile World Congress, is the push from some parts of the industry towards fully immersed systems as a way to drive cooling. Last year at SC18 we saw a large number of systems featuring this technology; this year at SC19 the presence was limited to a few key deployments.

Two Phase Immersion Liquid Cooling (2PILC) involves taking a server with next to no heatsinks and submerging it in a liquid with a low boiling point. These liquids are often engineered organic compounds (so not water or oil) that make direct contact with the silicon; as the silicon does work it gives off heat, which transfers into the surrounding liquid and causes it to boil. The most common liquids are variants of 3M Novec or Fluorinert, which can have boiling points around 59°C. As the liquid turns into a gas it rises, driving natural convection through the tank. The vapor then condenses on a cold plate or water pipe at the top of the unit and falls back into the system.
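
To put a rough number on how vigorous that boiling is, here is a back-of-the-envelope sketch of the boil-off rate for a single high-power chip. The 250 W chip power and the roughly 90 kJ/kg latent heat of vaporization are illustrative assumptions (in the ballpark of common fluorocarbon coolants), not figures for any specific fluid or product.

```python
# Back-of-the-envelope sketch (not from any vendor datasheet): how much fluid a
# single chip boils off per second in a 2PILC tank. Assumes a fluorocarbon-style
# coolant with a latent heat of vaporization of roughly 90 kJ/kg.

CHIP_POWER_W = 250            # heat dissipated by the chip (assumed)
LATENT_HEAT_J_PER_KG = 90e3   # latent heat of vaporization (assumed, order of magnitude)

# At steady state, nearly all of the chip's heat goes into the phase change,
# so the boil-off rate is simply power divided by latent heat.
boil_off_kg_per_s = CHIP_POWER_W / LATENT_HEAT_J_PER_KG
print(f"Boil-off: {boil_off_kg_per_s * 1000:.2f} g/s")   # ~2.8 g/s

# The vapor condenses on the cold plate and drips back, so the tank level stays
# constant; the number just illustrates how much fluid cycles through the phase
# change every second.
```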


GIGABYTE from a previous show

These liquids are non-ionic, so they do not conduct electricity, and have a moderate viscosity in order to facilitate effective natural convection. Some deployments add forced convection, which helps with liquid transport and supports higher TDPs. The idea is that with a server or PC immersed in this material, everything can be kept at a reasonable temperature, and it also enables super-dense designs.


OTTO automated system with super dense racking

We reported on TMGcore’s OTTO systems, which use this 2PILC technology to create data center units of up to 60 kilowatts in 16 square feet – all the customer needs to do is supply power, water, and a network connection. Those systems also had automated pickup and removal, should maintenance be required. Companies like TMGcore cite that the 2PILC technology often allows for increased longevity of the hardware, due to the controlled environment.
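
For context, here is some quick arithmetic on that density claim. The 60 kW in 16 square feet figure comes from the article above; the air-cooled comparison numbers are rough assumptions on my part rather than measured values.

```python
# Rough power-density comparison. The OTTO figure (60 kW in 16 sq ft) is from
# the article; the air-cooled rack figures are ballpark assumptions (racks in
# the 5-15 kW range, plus a share of aisle space for airflow).

SQFT_PER_M2 = 10.764

otto_density = 60.0 / (16.0 / SQFT_PER_M2)    # ~40 kW per square metre

air_rack_kw = 10.0         # assumed typical air-cooled rack load
air_footprint_sqft = 30.0  # assumed footprint including hot/cold aisle share
air_density = air_rack_kw / (air_footprint_sqft / SQFT_PER_M2)   # ~3.6 kW/m^2

print(f"OTTO 2PILC unit : ~{otto_density:.0f} kW/m^2")
print(f"Air-cooled rack : ~{air_density:.1f} kW/m^2")
```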

One of the key directions for this technology last year was crypto systems and super-dense co-processors. We saw some of that again at SC19 this year, but not nearly as much. We also didn’t see any 2PILC servers aimed at 5G compute at the edge, which was a common theme last year. All of the 2PILC companies on the show floor this year were geared towards self-contained, easy-to-install data center cubes that require little maintenance. This is perhaps unsurprising, given that supporting 2PILC without a dedicated unit is quite difficult unless the data center is designed for it from the ground up.

One thing we did see was that component companies, such as companies building VRMs, were validating their hardware for 2PILC environments.

Typically a data center will describe its energy efficiency in terms of PUE, or Power Usage Effectiveness. A PUE of 1.50, for example, means that for every 1.5 megawatts of power the facility draws, 1 megawatt goes to the IT hardware doing useful work. Standard air-cooled data centers can have a PUE of 1.3-1.5, while purpose-built air-cooled data centers can go as low as 1.07. Liquid-cooled data centers are also around 1.05-1.10, depending on the construction. The self-contained 2PILC units we saw at Supercomputing this year were advertising PUE values of 1.028, which is the lowest I’ve ever seen. That being said, given the technology behind them, I wouldn’t be surprised if a 2PILC rack would cost 10x that of a standard air-cooled rack.
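
As a quick illustration of what those PUE numbers mean in practice, the sketch below works out the non-IT overhead for an assumed 1 MW IT load at each of the figures quoted above; the 1 MW load is an arbitrary example, not a figure from any of the vendors.

```python
# PUE = total facility power / IT equipment power, so overhead = IT * (PUE - 1).
# The 1 MW IT load is an arbitrary example; the PUE values are those quoted above.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on cooling, power conversion, etc., rather than on IT gear."""
    return it_load_kw * (pue - 1.0)

IT_LOAD_KW = 1000.0  # assumed 1 MW of IT equipment

for label, pue in [("Standard air-cooled", 1.40),
                   ("Purpose-built air-cooled", 1.07),
                   ("Liquid-cooled", 1.05),
                   ("Self-contained 2PILC (advertised)", 1.028)]:
    print(f"{label:34s} PUE {pue:.3f} -> {overhead_kw(IT_LOAD_KW, pue):5.0f} kW overhead")

# Going from PUE 1.40 to 1.028 on a 1 MW IT load cuts overhead from ~400 kW to ~28 kW.
```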

Comments

  • FullmetalTitan - Friday, November 29, 2019 - link

    Looks to be operating in the nucleate boiling regime, the most efficient for heat transfer. It would be a problem if you saw it transition to film boiling, but I'm guessing the fluids are designed with a viscosity and boiling point to maximize heat transfer in the expected operational window. Pretty basic undergrad-level heat transfer equations to find those values
  • Santoval - Saturday, November 30, 2019 - link

    The minor thermal insulation of the gas bubbles is more than compensated by the convection of the liquid these bubbles drive upward. The CPU placement is vertical by design, because this is two-phase immersion cooling. If the CPU was horizontal then heat would only be released by the gas bubbles and that would be inefficient.

    In this design heat is released by both the gas bubbles and the liquid that those bubbles drag over the CPU as they rise. So, unlike how it might seem, the very bottom (the bottom few mm) of the chip is most likely cooled a little *worse* than the rest of the chip, because those few mm are only cooled by the bubbles forming on them and not via convective cooling.
  • mode_13h - Tuesday, December 3, 2019 - link

    Phase change is how most of the energy is removed from the chip. So, you'd have to see whether enough liquid was contacting the top of the chip. If so, then you're good.

    But yeah, I had the same reaction. There are a range of possible solutions, including convection, possibly varying viscosity or the boiling point of the fluid, and increasing the surface area of the chip's heat spreader.
  • mode_13h - Tuesday, December 3, 2019 - link

    Also, the heat spreader itself should conduct heat. So, some heat should be conducted from the top to the bottom of the chip.
  • eachus - Friday, November 29, 2019 - link

    "That being said, given the technology behind them, I wouldn’t be surprised if a 2PILC rack would cost 10x of a standard air-cooled rack."

    May be true, but what is the packing density of the 2PILC rack? In the supercomputing realm, replacing 100 racks with 10 racks would be a no-brainer. I think that ten times the density would be unlikely, but upping the CPU density by 4x or 5x and leaving the disk storage untouched might be very useful. In practice, though, Cray is going for boards with water cooling of CPUs, memory, and VRMs. This will be at least twice the density of air-cooled. An advantage of water cooling is that the water can be pumped through a radiator on the roof. For 2PILC, the heat will need to be transferred to air or water in the server room.
  • ksec - Saturday, November 30, 2019 - link

    Excuse my ignorance, but why would rack density be a problem when the number of CPUs is exactly the same, i.e. the interconnect is still a problem?

    Rent per square foot should hardly make a difference in TCO.
  • destorofall - Monday, December 2, 2019 - link

    In a normal 2PILC system the vapor is condensed on an array of condenser coils positioned just above the liquid, and the condenser water is pumped to a liq-air HX. The water outlet temps can be run at 50°C if the conditions are right. Assuming a working fluid of FC-72, that means you could potentially set up a system to run in a desert, provided adequate airflow is present and fouling is low. With Tj-Tf resistances being around 0.04°C/W, that can put Tj around 66°C at 250W.
  • mode_13h - Tuesday, December 3, 2019 - link

    The downside of water-cooling everything is all that tubing that has to be over-built to minimize the chance of leaks. This way, you have minimal overhead and just dunk the whole thing in fluid. Then, you need just one big heat exchanger to remove heat from the entire unit.

    I don't think it's a given that these units would be more expensive than Cray's approach.
  • GreenReaper - Friday, November 29, 2019 - link

    Only two phases? I smell a Gillette-style opening for plasma-cooled components!
  • mode_13h - Tuesday, December 3, 2019 - link

    Why stop there? Let's throw in some ices!
