
On the eve of the first full day of CES 2008, NVIDIA held an editors' conference at the New York, New York casino here in Las Vegas to announce some of their new technologies to the press, so that we can distribute some of this information before the show gets into full swing. Here's the short list of what NVIDIA announced:
  • Hybrid SLI
  • nForce 780a
  • nForce 750a
  • nForce 730a
  • GeForce 8200 Motherboard GPU
  • New PureVideo HD features

Over the coming days we’ll touch on all of this, but for now we’ll stick with the most exciting pieces of news, related to Hybrid SLI.

We’ll start things off with a shocker then: NVIDIA is going to be putting integrated GPUs on all of their motherboards. Yes, you’re reading that right: soon every NVIDIA motherboard will ship with an integrated GPU, from the highest-end enthusiast board to the lowest-end budget board. Why? Hybrid SLI.

It’s no secret that enthusiast-class GPUs have poor power characteristics; even when idling they eat a lot of power. It’s just not possible right now to make a G80- or G92-based video card power down enough that it isn't drawing some non-trivial amount of power. NVIDIA sees this as a problem, particularly with the most powerful setups (two 8800GTXs in SLI, for example), and wants to avoid it in the future. Their solution is the first part of Hybrid SLI, and the reason for putting an integrated GPU on every board: what NVIDIA is calling HybridPower.

If you’ve read our article on AMD’s Hybrid Crossfire technology then you already have a good idea of where this is leading. NVIDIA wants to achieve better power reductions by switching to an integrated GPU when doing non-GPU intensive tasks and shutting off the discrete GPU entirely. The integrated GPU eats a fraction of the power of even an idle discrete GPU, with the savings increasing as we move up the list of discrete GPUs towards the most powerful ones.

To do this, NVIDIA needs to get integrated GPUs into more than just their low-end boards, which is why they are going to begin putting integrated GPUs in all of their boards. NVIDIA seems rather excited about what this is going to do for power consumption. To illustrate the point they gave us some power consumption numbers from their own testing; while we can't confirm these at this point, and they certainly represent a best-case scenario, we know from our own testing that NVIDIA's numbers can't be too far off.

Currently NVIDIA is throwing around savings of up to 400W. This assumes a pair of high-end video cards in SLI that each draw 300W at load and 200W at idle; shutting both off when idle saves those 400W. NVIDIA doesn't currently have any cards with this kind of power consumption, but they might with future cards. Even if that never happens, we saw a live demo where NVIDIA used this technology to save around 120W on a rig they brought, so 100W looks like a practical target right now. The returns will diminish as we work our way towards less powerful cards, but NVIDIA is arguing that the savings are still worth it. The electricity to idle an enthusiast GPU is not trivial, so even cutting 60W is roughly on par with getting rid of an incandescent light bulb.
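
To put those numbers in perspective, here's a quick back-of-the-envelope calculation using NVIDIA's hypothetical per-card figures from above; the idle hours and electricity rate are our own assumptions, purely for illustration:

```python
# Back-of-the-envelope check of NVIDIA's "up to 400W" HybridPower figure.
# The per-card idle draw is NVIDIA's hypothetical example, not a measurement.
idle_watts_per_card = 200   # hypothetical future high-end card at idle
cards_in_sli = 2

# HybridPower shuts the discrete cards off entirely at idle, so the saving
# is simply their combined idle draw (the integrated GPU's few watts aside).
savings_w = idle_watts_per_card * cards_in_sli
print(f"Idle savings: {savings_w} W")   # 400 W

# Rough yearly cost of that idle draw, assuming 8 idle hours a day at
# $0.10/kWh (both figures are assumptions for illustration).
kwh_per_year = savings_w / 1000 * 8 * 365
print(f"~{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * 0.10:.0f}/year")
```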

NVIDIA is looking at this as a value-added feature of their nForce line and not something they want to be charging people for. What we are being told is that the new motherboards supporting this technology will not cost anything more than the motherboards they are replacing, which relieves an earlier concern we had about basically being forced to buy another GPU.

We also have some technical details about how NVIDIA will be accomplishing this, which on a technical level we find impressive, though we're still waiting on more information. On a supported platform (capable GPU + capable motherboard, Windows only for now) an NVIDIA application will control the status of the HybridPower technology. When the computer is under a light load, the discrete GPU will be turned off entirely via the SMBus, and all rendering will be handled by the integrated GPU. If more GPU power is needed, the SMBus will be used to wake the discrete GPU, which then takes over rendering.
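
As a rough illustration of how such a control loop might behave, here is a minimal sketch; every helper in it is a hypothetical placeholder rather than anything NVIDIA has shown us, and keep in mind that the first shipping implementations are expected to leave the switch under manual control (more on that below):

```python
import random
import time

# Minimal sketch of an automatic HybridPower control loop, assuming a
# simple load-based policy. The three helpers below are hypothetical
# stand-ins for driver functionality, not NVIDIA's actual interfaces.

def read_gpu_load():
    return random.random()          # stand-in for a real GPU-load query

def switch_renderer(gpu):
    print(f"rendering on the {gpu} GPU")

def smbus_set_power(gpu, powered):
    print(f"{gpu} GPU power {'on' if powered else 'off'} (via SMBus)")

LOAD_HIGH = 0.60   # assumed threshold to wake the discrete GPU
LOAD_LOW = 0.20    # assumed threshold to fall back to the integrated GPU

def hybridpower_loop(cycles=10):
    discrete_on = True
    for _ in range(cycles):
        load = read_gpu_load()
        if discrete_on and load < LOAD_LOW:
            switch_renderer("integrated")       # hand rendering to the IGP
            smbus_set_power("discrete", False)  # power the card down
            discrete_on = False
        elif not discrete_on and load > LOAD_HIGH:
            smbus_set_power("discrete", True)   # wake the card back up
            switch_renderer("discrete")
            discrete_on = True
        time.sleep(0.1)

hybridpower_loop()
```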

Because of the switching among GPUs, the motherboard will now be the location of the various monitor connections, with NVIDIA showing us boards that have a DVI and a VGA connector attached. Operation when using the integrated GPU is pretty straightforward, but when the discrete GPU is active things get a bit more interesting. In that mode, the front buffer (completed images ready to be displayed) is transferred from the discrete GPU to the integrated GPU's front buffer (in this case a region of allocated system memory) and finally displayed by the output electronics housed in the integrated GPU. Having the integrated GPU be the source of all output is what allows seamless switching between the two GPUs.

All of this technology is Vista-only right now, with NVIDIA hinting that it will rely on some new features in Vista SP1, due this quarter. We also got an idea of the amount of system memory required by the integrated GPU to pull this off: NVIDIA is throwing around a 256MB minimum frame buffer for that chip.

NVIDIA demonstrated this at their event with some alpha software which will be finished and smoothed out as it gets closer to shipping. For now the transition between GPUs isn't seamless (the current process involves some flickering), but NVIDIA has assured us this will be resolved by the time it ships, and we have no reason not to believe them. The other issue we saw in our demonstration is that the software is currently all manual; the user decides which mode to run in. NVIDIA wasn't as straightforward about when this would be addressed. It sounds like the first shipping implementations will still be manual, with automatic switching already possible at the hardware level but the software not catching up until later this year. This could be a stumbling block for NVIDIA, but we're going to have to wait and see.

The one piece of information we don't have that we'd like is the performance impact of running a high-load situation on a discrete GPU with HybridPower versus without. Transferring the frame buffer takes resources (nothing that PCIe can't handle), but we're more worried about what this means for system memory bandwidth. Since the copied buffer ends up in system memory before being displayed, copying large buffers in and then reading them back out for display could consume a decent chunk of memory bandwidth (an issue that would only grow with higher-end cards and the higher resolutions commonly used with them). Furthermore, we're getting the impression that this is going to add another frame of delay between rendering and displaying (basically, it could operate in a manner similar to triple buffering), which would be a problem for users sensitive enough to input lag that it affects the enjoyment of their games. NVIDIA is talking about the performance not being a problem, but it's something we really need to see for ourselves. It becomes a lot harder to recommend HybridPower if it causes any significant performance issues, and we'll be following up with solid data once we have it.
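
To give a sense of the scale involved, here is a rough estimate of that copy traffic; the resolution, color depth, and refresh rate are our own illustrative assumptions, not figures from NVIDIA:

```python
# Rough estimate of the system memory bandwidth the HybridPower
# front-buffer copy could consume. Resolution, color depth, and frame
# rate are illustrative assumptions, not NVIDIA-supplied figures.
width, height = 2560, 1600    # a 30" enthusiast resolution
bytes_per_pixel = 4           # 32-bit color
fps = 60

frame_bytes = width * height * bytes_per_pixel
# Each frame is written into system memory by the discrete GPU and then
# read back out by the integrated GPU's display engine: two passes.
copy_traffic = frame_bytes * fps * 2

print(f"Frame size: {frame_bytes / 2**20:.1f} MB")   # ~15.6 MB
print(f"Copy traffic: ~{copy_traffic / 2**30:.2f} GB/s of system memory bandwidth")
```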

Moving on, the other piece of Hybrid SLI we heard about today is what NVIDIA is calling GeForce Boost. Here the integrated GPU is put into SLI rendering with a discrete GPU, which can greatly improve performance. Unlike HybridPower this isn't a high-end feature but rather a low-end one; NVIDIA is looking to use it with low-end discrete GPUs like the 8400GS and 8500GT. Here the performance of the integrated GPU is similar to that of the discrete GPU, so they're fairly well matched. Since the rendering mode will be AFR, this matching is critical: pairing the integrated GPU with a high-end GPU would have the opposite effect and slow down rendering performance (and any benefit otherwise would be tiny; the speed gap is just too great).
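
A simplified model shows why the matching matters; the frame rates below are made-up illustrative numbers, and the model ignores CPU limits and frame-pacing overhead:

```python
# Simplified AFR throughput model: frames alternate between the two GPUs,
# so the pair's output is roughly limited to twice the slower GPU's rate.
# All fps figures are made-up illustrative numbers, not benchmark results.
def afr_fps(fps_a, fps_b):
    return 2 * min(fps_a, fps_b)

# Integrated GPU paired with a similarly slow 8400GS-class card:
print(afr_fps(15, 18))    # ~30 fps, better than either GPU alone

# Integrated GPU paired with a fast 8800GTX-class card:
print(afr_fps(15, 100))   # ~30 fps, far worse than the discrete card
                          # would manage by itself
```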

NVIDIA is showcasing this as a way to use the integrated GPU to boost the performance of the discrete GPU, but since this also requires a new platform, we'd say it's more important for the upgrade market for now: if you need more performance out of a cheap computer, drop in a still-cheap low-end GPU and watch performance improve with SLI. We don't have exact performance numbers yet, but we already know how SLI scales today on higher-end products, so the results should be similar. It's worth noting that while GeForce Boost can technically be used in conjunction with HybridPower, it won't be for now; since low-end discrete GPUs use so little power in the first place, it doesn't make much sense to turn them off. This will be far more important when it comes to notebooks later on, where both the power savings of HybridPower and the performance gains of GeForce Boost can be realized. In fact, once everything settles down, notebooks could be the biggest beneficiary of the complete Hybrid SLI package out of any segment.

NVIDIA’s roadmap for Hybrid SLI calls for the technology to be introduced in Q1 with the 780a, 750a, 730a, and GeForce 8200 motherboard chipsets, followed by Intel platforms in Q2 and notebooks later in the year along with other stragglers. On the GPU side, the current 8400GS and 8500GT support GeForce Boost but not HybridPower. Newer GPUs coming this quarter will support the full Hybrid SLI feature set, and it's sounding like the G92-based 8800GT/GTS may also gain HybridPower support once the software is ready.

As we get closer to the launch of all the requisite parts of Hybrid SLI we'll have more details. Our initial impression is that HybridPower could do a lot to cut idle power consumption, while GeForce Boost is something we'll be waiting to test. Frankly, the biggest news of the day may just be that NVIDIA is putting integrated GPUs on all of its motherboard chipsets; everything else seems kind of quaint in comparison.

Comments

  • roadrun777 - Thursday, January 10, 2008 - link

    I am also leaning towards the totally stupid idea - comments, because

    1) Why heating up the motherboard more and adding features that cost money and no one needs

    They are prepping the production line for complete integration; this has more to do with what's going on down the line than right now.

    2) And why make concurrent use of the memory bus...? Especially during gaming the memory bus should be exclusive to game-data...also forking gigs of framebuffer data concurrently should not help the fps...

    Eventually the entire system will be virtualized eliminating any delays and the graphics scaling will be transparent, this eliminates the need for complex software rewriting. If they are going to do this, they need the memory bus shared between all components. The more memory controllers you have, the more latency in between each access.

    3) They should rather invest to power down the primary GPU. Why can a CPU consume so little power in idle and a GPU cannot? Just invest more on throttling mechanisms and idle states with reduced MHz, and low power consumption should also be achievable for high-end cards (e.g. reduce frequencies and voltage, shutdown shader units, shutdown 3/4 of the rem, etc. etc...)

    I suspect it has to do more with the capabilities of their fabrication plants and the eventual redesign of the entire system. They are not going to invest any engineers into making "design type C" video cards power efficient because they know they will have to eventually integrate everything, so why not work on that now and release older "design type C" video cards just to fill the buffer until the other tech is ready? Just a guess.

    4) And if they are not able to do this, why not implementing a "mini-GPU" on the graphics board? Then all switching can be done on the board and you are platform independent...ummm...whoops...my bad. Then I do not need to buy an nVidia mobo...no, that's not an option then... :-P

    Yes that probably has something to do with it, but I suspect it has more to do with the fact that the entire PCB may be eliminated in the future and your GPU will just be a drop in replacement (complete with socket), similar to what CPUs are now.
    I for one am tired of seeing a slower memory system for the CPU and faster ones for the GPU. Why not let them both share the same bandwidth, memory, and controllers? It takes care of a lot of performance issues.
  • kevinkreiser - Monday, January 7, 2008 - link

    they should put an SLI-like connector on the motherboard itself. then we can connect the add-in card directly to the motherboard's IGP. So the onboard and the add-in cards can talk directly to one another. It'd probably require extra mem on the onboard card though, but who cares, as long as it doesn't tie up other system resources.
  • johnsonx - Monday, January 7, 2008 - link

    They've done this backwards. They need to set things up so that the monitor connects to the discrete graphics card, and add an internal cable connection so that when the discrete GPU is powered down the on-board GPU can get its signal to the monitor. This way there's no requirement to copy the frame buffer from the discrete GPU back to main RAM for output. Also they could then eliminate the main-board VGA and DVI connectors as well - just use a stub card with monitor ports if there isn't a discrete GPU card installed.

    Yes, they would have to change more to make it work this way. It would also work a hell of a lot better. Yes, every card and motherboard would have to include the extra 'Hybrid SLI' connection and appropriate signal switch logic... so what? NVidia can require that just like every other feature and connector they've required in the past.
  • JonnyDough - Monday, January 7, 2008 - link

    the whole graphic card and PCI-E bus is stupid. Why not just have replaceable dedicated processors ON the motherboard that share system memory and turn on/off parts of the processor as needed? Or at LEAST have cards that can support adding processor cores/memory. That way you can still build a computer in a square box.

    What I want is to be able to increase graphic processing power by adding memory or a processor to my system. Period. When will the day come that I can upgrade my computer by just adding another processor, instead of having to swap one out? We have the ability to add a graphics card now for increased performance. The problem is that they don't turn off when not in use. SLI is fine, just make it so the system doesn't use my $500 video card when it isn't needed. Crazy and radical ideas like making me purchase an additional onboard GPU are silly.

    For NVidia to say that it won't increase the cost of my motherboard is just plain outright retarded. Maybe with new process technologies that save NVidia money I won't see an INCREASE, but I certainly won't see a DECREASE in the cost of the board with smaller chipset processes. NVIDIA, quit being a jerk and trying to keep chipset prices high and making us buy extra components from you. Send the savings on down to us please.
  • roadrun777 - Thursday, January 10, 2008 - link

    I think you're missing the point. They are preparing for all-in-one cores. So of course they are going to be making integrated GPUs to "train" their fabrication engineers. Eventually it will move into the CPU, or at least on top of the CPU with some kind of socket, along with the memory controller. The CPU should share the same memory and controller as the GPU, and they should be very close to one another to reduce voltage, latency, and timing problems.
  • EateryOfPiza - Monday, January 7, 2008 - link

    A lot of these concerns are based on the understanding that the integrated GPU seems to be the "primary GPU". That's the understanding I got from the article.

    1. Old games that are not actively developed and don't support SLI. Which GPU will they use? If these old games use only the integrated GPU, can we still expect good frame rates?

    2. What about Windows XP or Linux? Will users on these OSs be limited to using the integrated GPU? Or will there be a BIOS setting that can enable/disable Hybrid SLI?

    3. This is going to take up a lot of space on the I/O panel, space that is sorely needed for USB, audio, eSATA, Ethernet, etc etc. Every bit of space on my current I/O panel is used, I don't want to be missing out on extra USBs or Ethernets because of some monitor connections.

    4. This will make the NB run way hotter with all the extra traffic running through it.

    5. Seems like this plan will be limited by RAM speeds. High performance RAM, higher bus speeds, lower CLs will finally make a significant difference. I guess those memory manufacturers are happy. (I don't think this is good for laptops though, a lot of OEMs deliberately buy low quality slow RAM to cut costs.)

    6. What about using multiple GPUs to run more than two monitors? Will that ability be taken away? (ie. Use two graphics cards to run four monitors.)
  • emilyek - Monday, January 7, 2008 - link

    In summary:

    1. Onboard GPU allows discrete GPU to shut off and save power, especially helpful for notebooks.

    2. Allows the onboard chip to go 'SLI' with a low-end discrete card.

    Not a lot for the enthusiast here, as far as I can tell. Sounds more like a marketing scheme to get the uninformed who buy onboard graphics to have a moment where they say: 'It will be SLI if I buy this low-end card!11one!'

    A crap idea. More problematic junk on the motherboard we don't need. Make single-card solutions that aren't power hogs? That would be nice.
  • Spuke - Monday, January 7, 2008 - link

    So none of these features work in XP?
  • tcool93 - Monday, January 7, 2008 - link

    Notice how Nvidia copies most things that ATI does. Apparently they can't think up anything themselves.
  • Cygni - Monday, January 7, 2008 - link

    Could that extra GPU, when bypassed during gaming by the higher end videocard, perhaps be used for Nvidia's hardware physics acceleration? If so, Nvidia may have an ace up their sleeve in a future software revision...
