Every year NVIDIA launches quite a few new products; some are better than others, but they're all interesting. This fall, the big news is Maxwell 2.0, aka GM204. It first arrived on the desktop last month as the GTX 980 and GTX 970, and by bringing the mobile version of GM204 to market just one month later, NVIDIA hopes to change the way notebook gamers get treated.

We've already covered all of the new features in the desktop launch, so things like DSR, MFAA, VXGI, DX12, and GameWorks are all part of the notebook launch marketing materials. Of course, as a notebook GPU there are a few extra features available that you don't see on desktop GPUs, mostly because such features aren't really needed there. Optimus Technology has been around for several years now, so there's not much to add: it allows a laptop to dynamically switch to the lower-power integrated graphics when you're not doing anything demanding, and to turn on and utilize the faster discrete NVIDIA GPU when needed. BatteryBoost is a related technology, first introduced with the GTX 800M series of GPUs, that seeks to improve gaming battery life. Our test platform at the time didn't really deliver the gains we were hoping to see, but NVIDIA assures us that the new GM204 mobile graphics chips will do much better at providing decent battery life while running games. We'll dive into this in more detail once we get our test notebooks.

Speaking of which, no, we don't have a notebook yet. It was supposed to arrive late last week but ended up shipping Monday instead, which means it should be arriving about the time you're reading this. We'll be posting a separate look at gaming performance as soon as we're able, and we'll have a full review of the MSI GT72 in the coming week as well. For now, what we have are specifications for the mobile versions of GM204 and an overview of what to expect from the mobile versions of NVIDIA's new GPU.

If you've been following the computing industry to any degree over the past few years, a few trends are clearly becoming ever more important. One is that many PC desktop users are migrating to laptops and notebooks, but perhaps just as important is the migration of PC users to smartphones and tablets. There are numerous reasons for the shift – convenience along with increasing performance from handheld devices – but the result is a reduction in the growth of the PC industry. The good news for NVIDIA is that gaming notebooks are still a growing market, though how you define a "gaming notebook" is certainly something that can be manipulated.

NVIDIA's own figures show a 5X growth in gaming notebook sales during the past three years, so clearly there's a demand for getting more graphics performance into laptops. In fact, that's generally the number one desire from notebook gamers: "I want desktop class performance!" NVIDIA is aiming to do just that with the launch of the GTX 980M and GTX 970M.

Closing the Performance Gap with Desktops

  • chizow - Tuesday, October 7, 2014 - link

    Except most professionals don't want to be part of an ongoing beta project, they want things to just work. Have you followed the Adobe CS/Premiere developments and OpenCL support fiasco with the Mac Pros, and how much of a PITA they have been for end-users? People in the real world, especially in these pro industries that are heavily iterative, time sensitive, and time intensive cannot afford to lose days, weeks or months waiting for Apple, AMD and Adobe to sort out their problems.
  • JlHADJOE - Tuesday, October 14, 2014 - link

    This. As cool as open standards are, it's also important to not get stuck. The industry has shown that it will embrace open solutions when they work: the majority of web servers run open source software, and Linux has effectively displaced UNIX from every sector except extreme big iron.

    But given a choice between open and "actually working", people will choose "working" every time. IE6 remained the standard for so long because all of the "open" and "standards-compliant" browsers sucked for a very long time.
  • mrrvlad - Thursday, October 9, 2014 - link

    I have to work with both CUDA and OpenCL (for AMD GPUs) for compute workloads. The main advantage of CUDA is the toolset: AMD's compiler is ages behind and does not give the developer sufficient control over the code being generated. It's more of a "randomizing" compiler than an optimizing one... I would never even think about using OpenCL for GPU compute if I were starting a new project now.
  • Ninjawithagun - Sunday, June 28, 2015 - link

    The problem is that what you are stating is only half the story. Unfortunately, each company does have a superior solution for its own hardware. Going with OpenCL is a compromise at best because the code does not automatically maximize or optimize for hardware-specific architectures. Maybe in a perfect world we would have an open, non-proprietary standard across all programming schemes, but it's just not possible. Competition, and more importantly profit, is what drives these companies, and neither AMD nor NVIDIA will budge. Both parties are just as guilty as the other in this respect.
  • atlantico - Wednesday, October 15, 2014 - link

    Apple will *never* embrace CUDA. OpenCL is an important part of the future vision and strategy of Apple; whatever NVIDIA is pushing, Apple is not buying.
  • RussianSensation - Tuesday, October 7, 2014 - link

    If all Apple cared about was performance/watt, the Mac Pro would not feature AMD's Tahiti cores. Apple even dedicates a paragraph to explaining the importance of OpenCL:

    "GPU computing with OpenCL.
    OpenCL lets you tap into the parallel computing power of modern GPUs and multicore CPUs to accelerate compute-intensive tasks in your Mac apps. Use OpenCL to incorporate advanced numerical and data analytics features, perform cutting-edge image and media processing, and deliver accurate physics simulations."
    https://www.apple.com/mac-pro/performance/

    Apple is known to switch between NVIDIA and AMD. Stating that AMD is not in Apple's good graces is ridiculous considering the Mac Pro uses the less power-efficient Tahiti instead of GK104. And that is for a reason: the HD 7990 beats the GTX 690 at 4K and destroys it in compute tasks, which is proof that performance/watt is not the only factor Apple looks at in its GPU selection.
  • Omoronovo - Wednesday, October 8, 2014 - link

    I didn't mean to imply that it was *literally* the only factor taken into account; they clearly wouldn't use a GPU that cost $3,000 if a competing one with similar (but worse) performance/watt was $300.

    I was trying to emphasize that, all other factors being equal (i.e., standards compliance, compatibility, supply, etc.), performance/watt is the prime metric used to determine hardware choices. The Tahiti vs. GK104 comparison is a great one: AMD pushed OpenCL extremely heavily and its support for it was essentially wholehearted, while NVIDIA was slow on the uptake of OpenCL support because it was pushing CUDA.
  • bischofs - Tuesday, October 7, 2014 - link

    I may be wrong, but it seems like the only reason the mobile chips are catching up to the desktop is that desktop cards haven't really improved in 5+ years. Instead of pushing the limits on the PC (building architectures based on pure performance rather than efficiency) and then scaling down, they are doing the opposite, and thus the performance difference is getting smaller. It is strange that they market this as a good thing, given the rather large difference in power and cooling available in a tower; there should be a large performance gap.
  • Razyre - Tuesday, October 7, 2014 - link

    Not at all. Hawaii shows this if anything: the 290X goes balls-to-the-wall in OpenCL, while NVIDIA's cards, though more conservative and gaming-optimized, still pack as good and usually a better punch in frame rates.

    Cards are getting too hot at 200-300W; you need elaborate cooling solutions, which are either expensive or make your card larger or louder.

    The Maxwell series is phenomenal; it drastically improves frame rates while halving the power consumption of the same series chips from 2 years ago.

    GPUs have come on SO far since 2009 for you to claim they've barely improved. Let's say you pit a 5870 against a 290X. The 7970 is about twice as powerful as the 5870 (slightly less in places), and the current 290X is about 30% faster than the 7970. Compound those and you're effectively seeing a theoretical 160% improvement in performance over four years (I say four because the 290X is now a year old), which works out to an average of roughly 27% per year.
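    Compounding generational gains is just multiplication; a quick sketch (using the rounded ratios above as hypothetical inputs, not measured benchmark numbers) shows how the per-generation figures roll up into a total and an average annual rate:

    ```python
    # Hypothetical rounded per-generation speedups from the comparison above:
    # HD 7970 ~ 2.0x an HD 5870, R9 290X ~ 1.3x a 7970.
    gen_gains = [2.0, 1.3]

    total = 1.0
    for g in gen_gains:
        total *= g  # gains compound multiplicatively

    improvement = (total - 1.0) * 100          # 2.6x -> 160% improvement
    years = 4
    annual = (total ** (1 / years) - 1) * 100  # compounded annual rate

    print(f"Total: {total:.1f}x ({improvement:.0f}% improvement)")
    print(f"Average per year over {years} years: {annual:.0f}%")
    ```

    Note that simply adding the percentages (100% + 30% = 130%) understates the result, because the second gain applies on top of the first: 2.0x followed by 1.3x is 2.6x overall, not 2.3x.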

    Considering the massive R&D costs, and the costs of moving to smaller process nodes to fit more transistors on a chip (which increases heat density, hence why Maxwell is a great leap: its efficiency lets NVIDIA jam way more transistors into the GK110 replacement), GPUs have come on in leaps and bounds.

    The only reason it might look like they haven't is that instead of jumping from, say, 1680x1050 to 1920x1080, we jumped to 3840x2160: a four-times increase in pixel count over 1080p.

    Mobile GPUs have made even more impressive progress, really. That chart showing the gap closing between mobile and desktop GPU performance isn't far from the truth.
  • bischofs - Tuesday, October 7, 2014 - link

    I don't know much about the AMD stuff you are talking about, but what I have is probably more anecdotal evidence. Software, and more importantly games, for PCs have been pretty stagnant as far as resource requirements go; I used a GTX 260 for about five years and never had problems running anything until recently. Even though games are the largest driver of innovation, most are built for consoles, with a large percentage also built for mobile devices. I've been playing games that look pretty much the same at 1080p for 5+ years on my PC; the only thing that has been added is more graphical features. Processors further support my argument: I remember the jump from the Core 2 to Nehalem was astounding, but since then (I'm still running my i7-920 from 2008) it's been lower power consumption and more cores with only small changes in architecture. So you can throw percentages around, but I just don't see it.
