16 Comments
blanarahul - Monday, February 24, 2014 - link
Nvidia should have launched Tegra M1 instead of Tegra K1 + 750 (Ti).
darkich - Monday, February 24, 2014 - link
Yeah, it's superior to the mobile Kepler. At this level, a mobile-based architecture is still better than a downscaled performance-oriented architecture. Maxwell could be the jack of all trades, though.
vladz - Monday, February 24, 2014 - link
Well, Maxwell will do just the opposite and scale up from a mobile solution to desktop, so it will probably make Nvidia No. 1 in the mobile space too. But Tegra K1 is still a big improvement, and equal or nearly equal to the competition.
Kevin G - Monday, February 24, 2014 - link
The problem is that nVidia has had plans to go to a unified shader architecture in ultra mobile for a while now. I believe their first attempt was a Fermi-based design, but that didn't pan out, so it was cancelled in favor of the existing separate pixel and vertex shader designs (though with some tweaks). Kepler is the first generation to scale down, mainly due to its ability to run at far lower clock speeds (and thus voltages) than Fermi could.

Maxwell will certainly be better suited to mobile, but it likely wasn't ready for inclusion in their next mobile SoC. You go to war with the army you have, not the army you want. Not showing up to the fight isn't an option.
Mondozai - Monday, February 24, 2014 - link
Remember the time when "everyone" agreed that Nvidia's Tegra K1 was going to be invincible, if not as a total SoC then at least as a GPU?

Well, it seems like the Tegra business is going to take a nosedive once again. Their primary advantage has melted away, and their SoC *still* doesn't have an integrated LTE solution.
grahaman27 - Monday, February 24, 2014 - link
Well, I think full OpenGL 4.4 support is still a clear advantage for Nvidia, along with recognized support from the gaming industry. And if I were to guess, Nvidia probably has the price advantage as well; their chips are usually lower cost.
dragonsqrrl - Monday, February 24, 2014 - link
... actually I don't really remember that. I remember people like you saying K1 would be obsolete by the time it came to market, even though by Ryan Smith's own estimates the 6XT series won't come to market before 2015, likely 2H 2015.
dragonsqrrl - Monday, February 24, 2014 - link
In which case it would probably be more accurate to call the 6XT series competition for the Tegra variant of Maxwell rather than for the K1.
I have hope for mobile Kepler, but after so many disappointing, over-hyped releases (in large part because of their perpetual use of a non-unified shader, DirectX 9-level architecture... well, and low memory bandwidth), it is hard not to be negative.
fteoath64 - Tuesday, February 25, 2014 - link
I beg to differ. This time NV has cleared carrier certification on the i500 modem, so the time lag is no longer an issue. They have K1 on silicon and in Lenovo's product, so the chip is in production, although quantities depend on the number of customers they can get. M1 is so close it is not even funny, but they do not need to use it just yet. So they have T4, K1, and potentially M1 to use, all depending on market direction. This situation has NEVER arisen before, so it is the reverse of the old dilemma, which is a good thing.

Remember, M1 is going to be mobile-efficient, using 128 cores to do 90% of what 192 cores could do, so per-core efficiency is up. Based on how they want to structure the cores for mobile, they have room to move many ways, including tweaking clock speeds like on desktop GPU cards.
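As a quick back-of-the-envelope check of that per-core claim, taking the 90% figure and the 192/128 core counts in the comment at face value (these are the commenter's numbers, not confirmed specs):

```python
# Rough sanity check of the per-core efficiency claim above.
# Assumptions (from the comment, not official specs):
#   - Tegra K1's GPU has 192 CUDA cores.
#   - A Maxwell-based "M1" would use 128 cores to hit ~90% of K1's throughput.

k1_cores, m1_cores = 192, 128
m1_relative_throughput = 0.90            # M1 throughput as a fraction of K1's

per_core_k1 = 1.0 / k1_cores             # normalized throughput per core
per_core_m1 = m1_relative_throughput / m1_cores

print(f"Per-core gain: {per_core_m1 / per_core_k1:.2f}x")   # -> 1.35x
```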
Imagine if Intel had this kind of flexibility; they would have conquered the whole mobile market in two years flat. But alas, Intel does not do ARM!

NV is NV and not Intel. NV has already taken more than three years to finish their 64-bit ARM chip. Not there yet?! At that rate, the Denver chip will look like a desktop part! {AMD's ARM division would be laughing...}
ltcommanderdata - Monday, February 24, 2014 - link
Up to now, Apple seems to like to stick with a GPU architecture for two generations of SoC (such as the Series 5 SGX535 in the 3GS and A4 devices, and the Series 5XT SGX543MP in the A5/A5X and A6/A6X devices), which no doubt simplifies design and driver development. I wouldn't be surprised if the A8 moves to a higher-clocked G6630 from the G6430 in the A7, similar to the transition to a higher-clocked SGX543MP3 in the A6 vs. the SGX543MP2 in the A5.
dragonsqrrl - Monday, February 24, 2014 - link
This will probably be the case, since the 6XT series won't come to market before 2015, according to an earlier article published here on AnandTech.
Yes, I would have thought that as well, until Apple did their A7 SoC, which shocked everyone in the world apart from themselves. Their 64-bit ARM SoC came WAY earlier than anyone expected, including ARM. So it may not be much of a surprise if Apple pulls the same trick again with the 6XT.
dragonsqrrl - Wednesday, February 26, 2014 - link
Apple doesn't develop their GPUs... Apple developed their 64-bit ARM architecture... they're two completely different things. Apple wasn't first to market with 64-bit because they were on some sort of greatly accelerated development schedule compared to other chipmakers. Apple was first because they started development first, and made investments in 64-bit before anyone else.

We know when the 6XT series became available for licensing and integration, and at this point it would be nearly impossible for Apple to use a 6XT derivative in the A8 if they keep to their regular release schedule. The A8 is probably nearing production at this point.
easp - Tuesday, February 25, 2014 - link
Patterns of past behavior can be useful in predicting the future, but I think the best thing to take away from Apple's past behavior is that they are going to do what they can do, and what they need to do, to deliver great products with great user experiences.

If that means a more aggressive GPU ramp, then they will find a way to do it. Choices that would have been impossible in the past may be practical now, since they keep adding design/engineering capability and have more and more volume to work with, particularly if they continue using the same die for both iPad and iPhone.
fteoath64 - Tuesday, February 25, 2014 - link
The key to this comparison is really power consumption for the same performance: the one with the higher GPU ops/watt wins. It is about efficiency. Both vendors have challenges in this area, mainly due to process node and design factors. K1 claims it can run at full throttle at 2 watts, while the Rogue G6430 (in the iPhone and iPad) looked to be straining at about 5 to 6 W in Anand's test.

Let us say 5 W, but the 6430 is only 2/3 of a 6650 (four clusters vs. six). That means the 6650 would pull 7.5 W alone at full throttle! They would have to throttle the frequency down a lot to bring it under 4 watts.
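A minimal sketch of that scaling arithmetic, assuming the ~5 W figure above and that GPU power scales roughly linearly with cluster count (both are rough assumptions, not measured data):

```python
# Rough scaling of the power figures quoted above.
# Assumptions: ~5 W for the GX6430 at full throttle (the commenter's number),
# and power scaling linearly with shader-cluster count (4 clusters in the
# GX6430 vs. 6 in the GX6650).

gx6430_power_w = 5.0
cluster_ratio = 4 / 6                       # GX6430 is 2/3 of a GX6650

gx6650_power_w = gx6430_power_w / cluster_ratio
print(f"Estimated GX6650 full-throttle power: {gx6650_power_w:.1f} W")   # -> 7.5 W
```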
I say K1 will win on battery performance compared to the PVR 6650, so NV's claim of 1.5x power efficiency might have some merit. We shall see, but it is good to have head-to-head competition, as the result will be a much better product from either side. Each can optimise for whatever its design excels at.

Note: one can bet that NV did a lot of supercomputer simulations of their target designs to choose the number of functional units in their GPU. They had Tesla supercomputers on tap for such things. Others would not likely have such access.