
On the eve of the first full day of CES 2008, NVIDIA held an editors’ conference at the New York, New York casino here in Las Vegas to announce some of their new technologies to the press, so that we can distribute some of this information before the show gets into full swing. Here’s the short list of what NVIDIA announced:
  • Hybrid SLI
  • nForce 780a
  • nForce 750a
  • nForce 730a
  • GeForce 8200 Motherboard GPU
  • New PureVideo HD features

Over the coming days we’ll touch on all of this, but for now we’ll stick with the most exciting piece of news: Hybrid SLI.

We’ll start things off with a shocker then: NVIDIA is going to be putting integrated GPUs on all of their motherboards. Yes, you’re reading that right; soon every NVIDIA motherboard will ship with an integrated GPU, from the highest-end enthusiast board to the lowest-end budget board. Why? Hybrid SLI.

It’s no secret that enthusiast-class GPUs have poor power characteristics; even when idling they eat a lot of power. It’s just not possible right now to make a G80- or G92-based video card power down enough that it isn’t drawing at least some non-trivial amount of power. NVIDIA sees this as a problem, particularly with the most powerful setups (two 8800 GTXs, for example), and wants to avoid it in the future. Their solution, and the reason for moving to integrated GPUs on every board, is the first part of Hybrid SLI: what NVIDIA is calling HybridPower.

If you’ve read our article on AMD’s Hybrid CrossFire technology then you already have a good idea of where this is leading. NVIDIA wants to achieve better power reductions by switching to the integrated GPU for non-GPU-intensive tasks and shutting off the discrete GPU entirely. The integrated GPU eats a fraction of the power of even an idle discrete GPU, with the savings increasing as we move up the list of discrete GPUs towards the most powerful ones.

To do this, NVIDIA needs to get integrated GPUs into more than just their low-end boards, which is why they are going to begin putting integrated GPUs in all of their boards. NVIDIA seems rather excited about what this is going to do for power consumption. To illustrate the point they gave us some power consumption numbers from their own testing; while we can’t confirm them at this point and they certainly represent the best-case scenario for savings, we know from our own testing that NVIDIA’s numbers can’t be too far off.

Currently NVIDIA is throwing around savings of up to 400W. This assumes a pair of high-end video cards in SLI, each eating 300W at load and 200W at idle; NVIDIA doesn’t currently have any cards with this kind of power consumption, but they might with future cards. Even if that never comes to pass, we saw a live demo in which NVIDIA used this technology to save around 120W on a rig they brought, so 100W looks like a practical target right now. The returns diminish as we work our way towards less powerful cards, but NVIDIA is arguing that the savings are still worth it. The electricity to idle an enthusiast GPU is not trivial, so even cutting 60W is on par with switching off an incandescent light bulb.
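
To put those claims in perspective, here is the back-of-the-envelope arithmetic behind them as a minimal sketch; the 200W idle figure is NVIDIA’s hypothetical number and the integrated GPU’s draw is our own assumption, so treat the output as illustration rather than measurement.

```python
# Back-of-the-envelope math behind NVIDIA's HybridPower savings claims.
# The 200W idle figure is NVIDIA's hypothetical future high-end card;
# the integrated GPU draw is our own rough assumption.

IDLE_DISCRETE_W = 200    # claimed idle draw of a hypothetical high-end card
INTEGRATED_GPU_W = 5     # assumed draw of the integrated GPU doing the same job

# Two such cards in SLI, both powered off completely at the desktop:
sli_savings = 2 * IDLE_DISCRETE_W - INTEGRATED_GPU_W
print(f"Hypothetical SLI savings: ~{sli_savings} W (NVIDIA rounds this to 400 W)")

# The live demo rig we saw landed closer to 120 W, and even the low end
# of the range is meaningful:
print("Cutting 60 W is on par with switching off a 60 W incandescent bulb")
```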

NVIDIA is looking at this as a value-added feature of their nForce line and not something they want to be charging people for. What we are being told is that the new motherboards supporting this technology will not cost anything more than the motherboards they are replacing, which relieves an earlier concern we had about basically being forced to buy another GPU.

We also have some technical details about how NVIDIA will be accomplishing this, which we find impressive on a technical level, though we’re still waiting on more information. On a supported platform (capable GPU + capable motherboard, Windows only for now), an NVIDIA application will be running that controls the status of the HybridPower technology. When the computer is under a light load, the discrete GPU will be turned off entirely via the SMBus and all rendering will be handled by the integrated GPU. If the system needs more GPU power, the SMBus will wake up the discrete GPU, which will then take over rendering.
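
As a rough illustration of the behavior NVIDIA described (and which the eventual automatic mode would provide), here is a minimal sketch of such a control loop; the function names, the load threshold, and the SMBus calls are all our own placeholders, not NVIDIA’s API.

```python
# Hypothetical sketch of a HybridPower-style control loop. None of these
# functions correspond to a real NVIDIA interface; they only illustrate
# the switching behavior described at the press event.
import time

LOAD_THRESHOLD = 0.30  # arbitrary placeholder for "GPU-intensive work"

def gpu_load_estimate() -> float:
    """Placeholder: return the estimated 3D workload between 0.0 and 1.0."""
    raise NotImplementedError

def smbus_set_discrete_power(on: bool) -> None:
    """Placeholder: toggle the discrete GPU's power over the SMBus."""
    print("discrete GPU powered", "on" if on else "off")

def route_rendering(to_discrete: bool) -> None:
    """Placeholder: hand rendering to the discrete or integrated GPU."""
    print("rendering on the", "discrete" if to_discrete else "integrated", "GPU")

def control_loop() -> None:
    discrete_active = False
    while True:
        heavy = gpu_load_estimate() > LOAD_THRESHOLD
        if heavy and not discrete_active:
            smbus_set_discrete_power(True)   # wake the discrete card first
            route_rendering(True)
            discrete_active = True
        elif not heavy and discrete_active:
            route_rendering(False)           # fall back to the integrated GPU
            smbus_set_discrete_power(False)  # then cut power to the discrete card
            discrete_active = False
        time.sleep(1)
```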

Because of the switching among GPUs, the motherboard will now be the location of the various monitor connections, with NVIDIA showing us boards that have a DVI and a VGA connector attached. Operation when using the integrated GPU is pretty straightforward; when the discrete GPU is active, things get a bit more interesting. In that mode the front buffer (the completed image ready to be displayed) is transferred from the discrete GPU to the integrated GPU’s front buffer (in this case a region of allocated system memory) and finally displayed by the display electronics housed in the integrated GPU. Having the integrated GPU be the source of all output is what allows seamless switching between the two GPUs.
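
To make that data flow concrete, here is a minimal sketch of the two output paths as we currently understand them; the function and buffer names are our own simplification of what NVIDIA described, not their implementation.

```python
# Simplified model of the two HybridPower display paths described above.
# This is our own abstraction of NVIDIA's description, not their code.

def scan_out(front_buffer) -> None:
    """Placeholder: the integrated GPU's display engine reads this buffer."""

def copy_over_pcie(src, dst) -> None:
    """Placeholder: transfer a finished frame across the PCIe bus."""

def display_with_integrated(igpu_front_buffer) -> None:
    # Light load: the integrated GPU renders straight into its front buffer
    # (a region of system memory) and scans it out itself.
    scan_out(igpu_front_buffer)

def display_with_discrete(dgpu_front_buffer, igpu_front_buffer) -> None:
    # Heavy load: the discrete GPU renders the frame, the completed image is
    # copied over PCIe into the integrated GPU's front buffer in system
    # memory, and the integrated GPU's display engine shows it. The monitor
    # never changes ports, which is what makes the switch seamless.
    copy_over_pcie(src=dgpu_front_buffer, dst=igpu_front_buffer)
    scan_out(igpu_front_buffer)
```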

All of this technology is Vista-only right now, with NVIDIA hinting that it will rely on some new features in Vista SP1, due this quarter. We also got an idea of the amount of system memory required by the integrated GPU to pull this off: NVIDIA is throwing around a 256MB minimum frame buffer for that chip.

NVIDIA demonstrated this at their event with some alpha software, which will be finished and smoothed out as it gets closer to shipping. For now the transition between GPUs isn’t smooth (the current process involves some flickering), but NVIDIA has assured us this will be resolved by the time it ships, and we have no reason not to believe them. The other issue we saw with the demonstration is that the software is currently all manual; the user decides which mode to run in. NVIDIA wasn’t as straightforward about when this would be addressed. It sounds like the first shipping implementations will still be manual, with automatic switching already possible at the hardware level but the software not catching up until later this year. This could be a stumbling block for NVIDIA, but we’re going to have to wait and see.

The one piece of information we don’t yet have that we’d like is the performance impact of running a high-load situation on a discrete GPU with HybridPower versus without. Transferring the frame buffer takes resources (nothing that PCIe can’t handle), but we’re more worried about what this means for system memory bandwidth. Since the copied buffer lands in system memory before being displayed, writing large buffers into memory and then reading them back out for display could eat a decent chunk of system memory bandwidth (a problem that would be more pronounced with higher-end cards and the higher resolutions they’re commonly run at). Further, we’re getting the impression that this is going to add another frame of delay between rendering and display (basically, it could operate in a manner similar to triple buffering), which would be a problem for users sensitive enough to input lag that it affects the enjoyment of their games. NVIDIA is saying performance won’t be a problem, but it’s something we really need to see for ourselves. It becomes a lot harder to recommend HybridPower if it causes any significant performance issues, and we’ll be following this up with solid data once we have it.
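
For a sense of scale, here is a rough estimate of the extra memory traffic and latency involved; the resolutions, the 60 fps refresh rate, and the assumption that each frame is written once and read back once are ours, not NVIDIA’s.

```python
# Rough estimate of the extra system memory traffic from copying the front
# buffer, plus the cost of one added frame of delay. The resolutions and the
# 60 fps figure are our own assumptions, not numbers from NVIDIA.

BYTES_PER_PIXEL = 4  # 32-bit color

def copy_traffic_mb_s(width: int, height: int, fps: int) -> float:
    # Each displayed frame is written into system memory once by the copy
    # and read back out once by the integrated GPU's display engine.
    frame_bytes = width * height * BYTES_PER_PIXEL
    return 2 * frame_bytes * fps / 1e6

for width, height in [(1280, 1024), (1920, 1200), (2560, 1600)]:
    print(f"{width}x{height} @ 60 fps: ~{copy_traffic_mb_s(width, height, 60):.0f} MB/s")

# One extra frame of buffering at 60 fps adds roughly:
print(f"Extra display latency: ~{1000 / 60:.1f} ms per frame of delay")
```

Even at 2560x1600 that works out to roughly 2GB/s of extra traffic, which a dual-channel memory bus should be able to absorb but which isn’t free, and the added frame of latency is what has us asking for hard numbers.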

Moving on, the other piece of Hybrid SLI that we heard about today is what NVIDIA is calling GeForce Boost. Here the integrated GPU is put into SLI rendering with a discrete GPU, which can greatly improve performance. Unlike HybridPower this isn’t a high-end feature but rather a low-end one; NVIDIA is looking to use it in systems with low-end discrete GPUs like the 8400 GS and 8500 GT. There the performance of the integrated GPU is similar to that of the discrete GPU, so the two are fairly well matched. Since the rendering mode will be AFR, this is critical: paired with a high-end GPU the integrated GPU would have the opposite effect and slow down rendering (and even in the best case the gain would be tiny, as the speed gap is just too great).
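
A quick back-of-the-envelope model shows why the pairing matters; the frame rates below are invented for illustration, not benchmark results.

```python
# Toy model of alternate frame rendering (AFR): each GPU renders every other
# frame, so sustained throughput is limited to roughly twice the slower
# GPU's frame rate. The fps figures here are invented for illustration only.

def afr_fps(fps_a: float, fps_b: float) -> float:
    return 2 * min(fps_a, fps_b)

igpu, low_end, high_end = 15, 18, 60  # hypothetical frame rates

# Well-matched pair: integrated GPU + an 8400 GS-class card
print(f"iGPU + low-end card:  {afr_fps(igpu, low_end):.0f} fps vs {low_end} fps alone")

# Mismatched pair: integrated GPU + a high-end card
print(f"iGPU + high-end card: {afr_fps(igpu, high_end):.0f} fps vs {high_end} fps alone (a net loss)")
```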

NVIDIA is showcasing this as a way to use the integrated GPU to boost the performance of the discrete GPU, but since it also requires a new platform, we’d say it’s more important for the upgrade market for now: if you need more performance on a cheap computer, drop in a still-cheap low-end GPU and watch performance improve with SLI. We don’t have an idea of the exact performance impact, but we already know how SLI scales today on higher-end products, so the results should be similar. It’s worth noting that while GeForce Boost can be used in conjunction with HybridPower, that won’t be the case for now; since low-end discrete GPUs use so little power in the first place, it doesn’t make much sense to turn them off. This will be far more important when it comes to notebooks later on, where both the power savings of HybridPower and the performance of GeForce Boost can be realized. In fact, once everything settles down, notebooks could be the biggest beneficiary of the complete Hybrid SLI package of any segment.

NVIDIA’s roadmap for Hybrid SLI calls for the technology to be introduced in Q1 with the 780a, 750a, 730a, and GeForce 8200 motherboard chipsets, followed by Intel platforms in Q2 and notebooks later in the year along with other stragglers. On the GPU side, the current 8400 GS and 8500 GT support GeForce Boost but not HybridPower. Newer GPUs coming this quarter will support the full Hybrid SLI feature set, and it sounds like the G92 GPUs in the 8800 GT/GTS may also gain HybridPower support once the software is ready.

As we get closer to the launch of all the requisite parts of Hybrid SLI we’ll have more details. Our initial impression is that HybridPower could do a lot to cut idle power consumption, while GeForce Boost is something we’ll be waiting to test. Frankly, the biggest news of the day may just be that NVIDIA is putting integrated GPUs on all of its motherboard chipsets; everything else seems kind of quaint in comparison.

Comments

  • shabby - Monday, January 7, 2008 - link

    Next gen gpu's in q1 :)
  • chizow - Monday, January 7, 2008 - link

    There better be a way to disable this....

    Honestly sounds like a horrible plan; the last thing NV needs is an integrated GPU mucking up their already buggy and underwhelming MCP. Not only will the MCP now run hotter with all that extra traffic going through the north bridge, performance will be worse as well, as the frame buffer is forced to go through system RAM. And that's before considering the potential system bandwidth problems noted in the article.

    I may be wrong here, but didn't we learn our lesson with some of the earliest NForce integrated GPU designs, like the GF2 MX series? Some head to head comparisons basically showed that system RAM was simply not fast enough to keep up with GDDR and ultimately became the bottleneck when comparing comparable integrated vs. discrete graphics.

    NV should really focus on making a chipset that can compete with Intel's P35/X38/X48 etc., or maybe even support Penryn to start. Instead we get re-hashed boards with Tri-SLI and mGPU. That's all fine and good, but running Quad-SLI-BoostForce with only Conroe/Kentsfield support isn't going to cut it when Nehalem is out and cruising along at 466-500MHz FSB speeds.
  • Rasterman - Monday, January 7, 2008 - link

    I totally agree, the NB/SB on my 680i board are hot as hell; there is no way you can add any more heat to them. This seems like totally the opposite way to go about fixing the power problem: they are lowering power by duplicating components, adding yet more specs and connectors to a board, and adding another area for bugs to appear. It's insane.

    Why don't they simply put another chip on their new graphics cards, then it is totally transparent to the system and works on any motherboard. If you have more than 1 card disable them, and run off the dedicated new chip, this would seem like a much better solution.
  • roadrun777 - Thursday, January 10, 2008 - link

    [Quote]Why don't they simply put another chip on their new graphics cards, then it is totally transparent to the system and works on any motherboard. If you have more than 1 card disable them, and run off the dedicated new chip, this would seem like a much better solution.[/Quote]
    There is a reason chip makers keep trying to integrate the chips into one chip. Cost, for one. Secondly, speed! It is so much easier to have tighter timings when the interconnects are so close to each other.
    I think they are moving this way because everyone in the industry knows that the merging of GPU/CPU on the same core is inevitable. This gives the engineers a few generations to get it right before moving the entire chip set core onto the CPU. This is the easiest way to reduce latency.
    I predict that it will happen because the power consumption is getting out of hand. I also predict that they will eventually virtualize the whole graphics card / bus, so that it appears to be one video card to the OS, while in reality it could be several GPU's that scale as needed.
    I also think that possibly this year you will see GPU chips manufactured so that they are suspended in a non conductive liquid and sandwiched in a housing to drastically increase heat dissipation from the chip.

            chip |
                 v
           _
          |    -------
    ------|   /   |   \   <-- interconnections all around the chip
    |     |_
    |             ^
    |     very small mounting pole
    |
    --- Non conductive liquid compressed around the chip.

  • DigitalFreak - Monday, January 7, 2008 - link

    They need some reason to keep the chipset/GPU tying around.
  • knowom - Monday, January 7, 2008 - link

    "Because of the switching among GPUs, the motherboard will now be the location of the various monitor connections, with NVIDIA showing us boards that have a DVI and a VGA connector attached."

    Makes me wonder if NVIDIA will get rid of the on-board DVI/VGA connectors altogether for future discrete video cards. I'm sure doing so could be beneficial to NVIDIA and the consumer; I'd think it would help lower costs, complexity, heat, and power. Plus, with everyone having integrated graphics (at least that's what they're moving towards), they could probably set it up so integrated graphics could double as a physics processor.
  • Olaf van der Spek - Monday, January 7, 2008 - link

    Why would removing connectors reduce power usage?
    It'd merely ensure the cards can't be used in other motherboards.
  • Shark Tek - Monday, January 7, 2008 - link

    They should look for ways to reduce power consumption even more while delivering more processing power. Their video cards (ATI/NVIDIA) are still very power hungry. At least the CPU makers (AMD/Intel) have been looking for ways to reduce the power requirements of their chips over the last couple of years, and they are aiming to cut those numbers even further.

    Can you imagine an upcoming mainstream card (9600GT) that will be requiring at least a 400W PSU with 26A in the 12V rail?

    That sucks pretty bad.
