78 Comments
vision33r - Thursday, May 23, 2013 - link
Real shame is that AMD has not gotten into the mobile market at all. APUs like this would've been great for tablets.
jeffkibuule - Thursday, May 23, 2013 - link
Even if AMD makes the chip, an OEM has to be willing to use it.
duploxxx - Thursday, May 23, 2013 - link
Exactly the problem: the current Atom is a horrible CPU in whatever device and at whatever frequency you put it. I have used them in notebooks and even now in a tablet. Bobcat, on the other hand, was awesome in the netbook range. Temash would be far better suited for all these devices, but as usual OEMs focus on the blue brand with its marketing jingles and dominance, and in the end it's the end consumer (we) who suffers for it. If it continues like this we will suffer even more: less innovation, higher prices, and dominant predefined designs (something already dismal today). Yet many people fail to see that, as if they think the Intel system they just bought is a better-suited device for everything...
mganai - Thursday, May 23, 2013 - link
Intel's been going easy on AMD these past few years. Plus, Atom is finally due for its big update this year, following which we'll be seeing a more frequent update schedule in line with their Core processors.
The heterogeneous solution was what won the PS4 and XB1 for AMD.
thebeastie - Saturday, May 25, 2013 - link
Simple: money! Why roll as fast as you can when you're already the fastest and aren't going to be bringing in any more money than you are now?
Bobs_Your_Uncle - Saturday, May 25, 2013 - link
I'd read an interesting perspective on why Intel refrained from "kicking AMD to the curb & on down into the storm sewer" (sorry; can't recall the source). In essence, given the scope of Intel's unquestioned dominance in their chosen markets (& mobile's on the radar), were they to act with any obvious & direct intent to further weaken, or even try & finish off, AMD, Intel would find themselves in an extremely difficult, exceedingly complex & decidedly unpleasant set of circumstances.
By decimating their only possible source of true competition, Intel would invite intense anti-trust scrutiny upon themselves; a result that would be inevitable assuming regulatory agencies were functioning properly.
By backing off a bit, Intel may well cede some amount of business to AMD, but they retain a legitimate market competitor & at the same time continue collecting very healthy margins. The premiums charged on sales can then be used to continue funding aggressive Intel R&D, and, uh, marketing related expenditures, too.
spartaman64 - Wednesday, June 4, 2014 - link
For a company that could "kick AMD to the curb," Intel is awfully nervous about AMD's Kaveri.
Wolfpup - Wednesday, June 12, 2013 - link
Yeah, but regardless AMD's had the FAR better CPU now for years. I've been running the lowest end version of it for a couple of years in a tiny notebook, and from the beginning wished people were using it for tablets.
Flunk - Friday, June 6, 2014 - link
When you say current Atom, are you referring to Bay Trail? Previous Atoms were pretty bad for Windows boxes, in my personal experience. But the new generation is significantly more powerful (about on par with a Core 2 Duo in benchmarks). They seem pretty reasonable for basic office tasks, though this may not include all the lowest-end versions.
Flunk - Friday, June 6, 2014 - link
NVM, followed the wrong link and didn't see the date. Thought this was about something else. May 2013 Atoms sucked.
Krysto - Friday, May 24, 2013 - link
Still don't see why OEMs would choose AMD's APUs in Android tablets over ARM, though. It's weaker CPU-wise, and most likely weaker GPU-wise, too. We'll see when they come out whether their GPUs can stand up to Adreno 330, PowerVR Series 6 and Mali T628. Plus, it requires quite a bit of power.
In my book no chip that can't be used in a smartphone (and I'm talking about the exact same model, not the "brand") should be called a "mobile chip".
This idea of "tablet chips" is nonsense. "Tablet chips" is just another way of saying "our chip is not efficient enough, so we're going to compensate with a much larger battery," which adds to weight, charging time, and of course price.
ReverendDC - Monday, May 27, 2013 - link
ReverendDC - Monday, May 27, 2013 - link
Even an Atom chip is more powerful per cycle than ARM, and AMD's stuff is more powerful than Atom. I'm not exactly sure what you are using to state that ARM is more powerful, but AnandTech did a great comparison themselves.
By the way, the comparisons in some cases are for quad-core ARM vs. single-core Atom at comparable speeds. Again, really not sure where your "facts" come from.
BernardBlack - Wednesday, May 29, 2013 - link
ARM actually isn't all that... and it's quite the other way around.
As I have seen it stated elsewhere: "Simply: x86 IPC eats ARM for lunch while actual performance and power usage will scale together. That is why ARM currently has no real business competing against x86."
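To put rough numbers on the IPC argument in the comment above, here is a minimal sketch of the first-order model throughput ≈ IPC × clock; the IPC and clock figures are made-up placeholders for illustration, not measured values for any real core.

```python
# First-order throughput model: work per second ~ IPC x clock.
# The numbers below are illustrative placeholders only, not measurements.
def throughput(ipc, clock_ghz):
    """Notional instructions retired per nanosecond under the IPC x clock model."""
    return ipc * clock_ghz

higher_ipc_core = throughput(ipc=2.0, clock_ghz=1.6)  # hypothetical wider core at a low clock
lower_ipc_core = throughput(ipc=1.2, clock_ghz=2.3)   # hypothetical narrower core at a higher clock

print(higher_ipc_core, lower_ipc_core)  # 3.2 vs 2.76: the lower-clocked core still comes out ahead
```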
BernardBlack - Wednesday, May 29, 2013 - link
It's all about instructions per cycle and that is what AMD and Intel do best.
BernardBlack - Wednesday, May 29, 2013 - link
Not to mention, the IPC in these processors follows suit with their server Opteron processors, which means they achieve even greater IPC. This is how you are able to have 1.6GHz CPUs that can compete with many common 3-4GHz desktop processors.
eanazag - Friday, May 31, 2013 - link
They could use the exact same hardware design to sell Android and Windows tablets.
Wolfpup - Wednesday, June 12, 2013 - link
Huh? AMD's Bobcat parts are more powerful than ARM's stuff. ARM's only just now managing to sort of compete with first gen Atom at best, and that with a CPU that's not actually used in much.
And "tablet chips" is NOT nonsense. You have more power budget, and higher expectations for performance in a tablet. If it's "nonsense", why does Apple put bigger chips in their tablets? Why can tablets run Core i CPUs? Why do they typically get bigger chips first even with Android?
Wolfpup - Wednesday, June 12, 2013 - link
To add to that, THIS article explicitly says "Jaguar is presently without competition...nothing from ARM is quick enough."
So really, where the heck are you getting the idea ARM has more powerful chips?
kyuu - Thursday, May 23, 2013 - link
I think the main point of this is getting into the mobile market. Temash looks like a great chip for a tablet. The only problem is OEMs not biting because they think they have to put an Intel sticker on the box for it to sell.
Personally, I'm waiting on a good tablet with Temash to finally jump into the Win8 tablet club. Whatever OEM makes a good one first will be getting my money.
mikato - Friday, May 24, 2013 - link
I think with tablets, OEMs are even less likely to think they need to put an Intel sticker on it. Joe Schmo knows Intel doesn't mean as much for tablets. The most well known tablets aren't Intel. This is an opening for AMD to be able to get in the game late if they want to.
Actually, do they even put stickers on tablets?
Wolfpup - Wednesday, June 12, 2013 - link
Ironically, when I see an Intel sticker on a tablet (unless it's a Core i part), I avoid it like the plague. Bobcat would have been perfect for tablets, and a BIG selling point.
Wolfpup - Wednesday, June 12, 2013 - link
Yeah, I really have no interest in an Atom tablet, partially even just because of the horrible video.
I've got an 11.6" AMD C-50 (lowest-end Bobcat) based notebook, and while it's slow, it's still impressive how it runs anything, and in a pinch can even function as a main PC. AMD's got an even lower-power Bobcat part with the exact same performance for tablets, but I don't know of shipping computers that used it, and it really would have been perfect. These new ones of course will be even better.
I wonder if the companies building these understand that using AMD would be a selling point... I see "Atom" and my eyes glaze over....
codedivine - Thursday, May 23, 2013 - link
4 DP FMAs per 16 cycles? Why even bother putting them in :|
Tuna-Fish - Thursday, May 23, 2013 - link
Because it's expected by the spec, and some compute loads use it for very rarely used things.
Exophase - Thursday, May 23, 2013 - link
"I should point out that ARM is increasingly looking like the odd-man-out here, with both Jaguar and Intel’s Silvermont retaining the dual-issue design of their predecessors."It's not just ARM, it's three different current gen ARM cores.. if you're going to pose it as ISA shouldn't it then just be ARM vs x86 and not ARM vs Silvermont and Jaguar?
Besides, MIPS is 3-way in its CPUs targeting this power budget too (proAptiv), and so is PowerPC (the e600, for instance). The reason why Silvermont and Jaguar are 2-way is really undeniable: x86 decoders are substantially more expensive than those for any of these ISAs, even Thumb-2. There's some validity to the argument that x86 instructions are more powerful (after first negating where they aren't - most critically, the lack of three-way addressing adds a lot of extra move instructions for non-AVX processors), but nowhere close to 50% more powerful.
lmcd - Thursday, May 23, 2013 - link
Isn't Qualcomm Krait 2-way?
Exophase - Thursday, May 23, 2013 - link
Qualcomm hasn't said an awful lot about the internals of the uarch, but several sources report 3-way decode and I haven't seen any say 2-way. It's possible it isn't fully symmetric or is limited in some other way; we don't really know.
Krysto - Friday, May 24, 2013 - link
Pretty sure it's 3-way.
tiquio - Thursday, May 23, 2013 - link
I don't really understand the point about unique macros. What are macros in reference to CPU architecture?
quasi_accurate - Thursday, May 23, 2013 - link
Don't worry, I had no idea either until I started working in the industry :) It just means custom circuits that are hand-crafted by a human. This is as opposed to "synthesis", in which the RTL code (written in a hardware description language such as Verilog) is "synthesized" by design software into circuits.
fluxtatic - Thursday, May 23, 2013 - link
Whoa - I think this is the first useful thing I've learned today. I've been wondering the same thing for a long time. Thanks!
Ev1l_Ash - Wednesday, May 28, 2014 - link
Thanks for that quasi!
Tuna-Fish - Thursday, May 23, 2013 - link
quasi was more accurate than his name implies, but just to expand on it:
The count of custom macros is important because when you switch manufacturing processes, the work you have to re-do on the new process is the macros. Old CPUs were "all custom macro", meaning that switching the manufacturing process meant re-doing all the physical design. A CPU that has a very limited number of custom macros can be manufactured at different fabs without breaking the bank.
lmcd - Thursday, May 23, 2013 - link
Sorry, didn't see your post.
lmcd - Thursday, May 23, 2013 - link
To supplement quasi_accurate (as I understand it): these are parts of the chip that need to be checked, adjusted and corrected, or even replaced depending on the foundry.
So reducing these isn't a priority for Intel, but for AMD, who wants portability (the ability to use both GloFo and TSMC), it is a priority.
tiquio - Thursday, May 23, 2013 - link
Thanks quasi_accurate, Tuna-Fish and lmcd. Your answers were very clear.
If my understanding is correct, would it be safe to assume that Apple's A6 uses custom macros? Anand mentioned in his article that Apple used a custom layout of ARM to maximize performance. Is this one example of custom macros?
name99 - Friday, May 24, 2013 - link
You can customize a variety of things, from individual transistors (e.g. fast but leaky vs. slow but non-leaky), to circuits, to layout.
As I understand it, the AMD issue is about customized vs. automatic CIRCUITS. The Apple issue is about customized vs. automatic LAYOUT (i.e. the placement of items and the wiring connecting them).
Transistors are obviously the most fab-specific, so you are really screwed if your design depends on them specifically (e.g. you can't build your finFET design at a non-finFET fab). Circuit design is still somewhat fab-specific --- you can probably get it to run at a different fab, but at lower frequency and higher power, so it's still not where you want to be. Layout, on the other hand, I don't think is very fab-specific at all (unless you do something like use 13 metal layers and then want to move to a fab that can only handle a maximum of 10 metal layers).
I'd be happy to be corrected on any of this, but I think that's the broad outline of the issues.
iwodo - Thursday, May 23, 2013 - link
Really want this to be in servers: storage servers, home-based NAS, caching / front-end servers, etc.
JohanAnandtech - Thursday, May 23, 2013 - link
Agree - with a much downsized graphics core and higher clocks for the CPU.
Alex_Haddock - Thursday, May 23, 2013 - link
We will certainly have Kyoto in Moonshot :-). http://h30507.www3.hp.com/t5/Hyperscale-Computing-...
GuMeshow - Friday, May 24, 2013 - link
The Embedded G-Series SOCs seem to be exactly Kabini + ECC memory enabled (ex: GX-420CA and A5-5200). This will probably be the cheapest way to get ECC enabled with better performance than Atom; the next step up would be an Intel S1200KPR + Celeron G1610?
I've been thinking of putting together a Router/Firewall/Proxy/NAS combo ...
R3MF - Thursday, May 23, 2013 - link
HSA?
Spoelie - Thursday, May 23, 2013 - link
Is it just me, or does the shared L2 cache merely enable the same scaling to 4 cores as Bobcat had to 2 cores? There is no "massive benefit" as alluded to in the numbers or discussion.
Bobcat scores 0.32 for one thread and 0.61 for two threads, or 95% scaling (0.64 would be perfect scaling). Jaguar scores 0.39 for one thread and 1.50 for four threads, or 96% scaling (1.56 would be perfect scaling).
The 1% difference could easily be a result of score rounding. I do see that a four-core Bobcat would probably scale worse than Jaguar, but the percentages chosen in the table are a bit misleading.
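As a quick check of the arithmetic in the comment above, here is a minimal Python sketch using the Cinebench scores it quotes; the helper function is purely illustrative.

```python
# Multithreaded scaling computed from the Cinebench scores quoted above.
def scaling(single_score, multi_score, threads):
    """Fraction of perfect linear scaling actually achieved."""
    return multi_score / (threads * single_score)

print(f"Bobcat, 2 threads: {scaling(0.32, 0.61, 2):.1%}")   # ~95.3%
print(f"Jaguar, 4 threads: {scaling(0.39, 1.50, 4):.1%}")   # ~96.2%
```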
Spoelie - Thursday, May 23, 2013 - link
Of course, drawing such conclusions from a single benchmark is dangerous. If other benchmarks exhibit more code/data sharing and thread dependencies than Cinebench, their numbers might show a more appreciable scaling benefit from the shared L2 cache.
tipoo - Thursday, May 23, 2013 - link
I wonder how this compares to the PowerPC 750, which the Wii U is based on. With the PS4 and One being Jaguar-based, that would be an interesting comparison.
aliasfox - Thursday, May 23, 2013 - link
The Wii U uses a PPC 750? Correct me if I'm wrong, but the PPC 750 family is the same chip that Apple marketed as the G3 up until about 10 years ago? And IIRC, Dolphin in the GameCube was also based on this architecture?
Back in the day, the G3 at least had formidable integer performance - clock for clock, it was able to outdo the Pentium II on certain (integer-heavy) benchmarks by 2x. Its downfall was an outdated chipset (no proper support for DDR) and the inability to scale to higher clock speeds - integer performance may have been fast, but floating-point performance wasn't quite as impressive - good if the Pentium II you're competing against is at nearly the same clock, bad when the PIII and Core Solos are at 2x your clock speed.
Considering the history of the PPC 750, I'd love to know how a modern version of it would compare.
tipoo - Thursday, May 23, 2013 - link
Yes, the GameCube, Wii, and Wii U all use PowerPC 750-based processors. The Wii U is the only known multicore implementation of it, but the core itself appears unchanged from the Wii, according to the hacker who told us the clock speed and other details.
tipoo - Thursday, May 23, 2013 - link
And you're right, it was good at integer, but the FPU was absolutely terrible... which makes it an odd choice for games, since games rely much more on floating-point math than integer. I think it was only kept for backwards compatibility, while even three Jaguar cores would have performed better and still been small.
The Nintendo faithful are saying it won't matter since FP work will get pushed to the GPU, but the GPU is already straining to get even a little ahead of the PS360, plus not all algorithms work well on GPUs.
tipoo - Thursday, May 23, 2013 - link
Also barely any SIMD, just paired singles. Even the ancient Xenon had good SIMD.
tipoo - Thursday, May 23, 2013 - link
Unchanged in the actual core parts, I mean; obviously the eDRAM is different from old 750s.
skatendo - Friday, May 24, 2013 - link
Not entirely true. The Wii U CPU is highly customized and has enhancements not found in typical PowerPC processors. It's been completely tailored for gaming. I'm not saying it has the power of the newer Jaguar chips, but the beauty of custom silicon is that you can do much more with less (Tegra 3's quad-core CPU and 12-core GPU vs. Apple's dual-core A5 CPU/GPU, anyone? Yeah, the A5 kicked its arse for games). That's why Nintendo didn't release tech specs: they tailored a system for games, and the performance will manifest in upcoming games (not these sloppy ports we've seen so far).
tipoo - Friday, May 24, 2013 - link
I'm aware it would be highly customized, but a plethora of developers have also come out and said the CPU sucks.
skatendo - Saturday, May 25, 2013 - link
Also, the "plethora" of developers that said it sucked (namely the Metro: Last Light dev) had an early build of the Wii U SDK and said it was "slow". Having worked for a developer, I can say they base their opinions on how fast/efficiently they can port over their game. The Wii U is a totally different architecture that lazy devs don't want to take the time to learn, especially with a newer GPGPU.
Kevin G - Sunday, May 26, 2013 - link
If a developer wants to do GPGPU, the PS4 and Xbox One will be highly preferable due to their unified virtual memory space. If GPGPU was Nintendo's strategy, they shouldn't have picked a GPU from the Radeon 6000 generation. Sure, it can do GPGPU, but there are far more compromises in handing off the workload.
Simen1 - Thursday, May 23, 2013 - link
What is the TDP and die size of the APUs in the Xbox One and PlayStation 4?
haukionkannel - Thursday, May 23, 2013 - link
Double the 1.6 GHz 4-core version and you are close. The wider memory controller eats some extra energy too, so maybe you have to add another 0.2-0.3x to the calculation...
fellix - Thursday, May 23, 2013 - link
"The L2 cache is also inclusive, a first in AMD’s history."Not exactly correct. The very first Athon (K7) on Slot A with off-die L2 used inclusive cache hierarchy. All models after that moved to exclusive design.
Exophase - Thursday, May 23, 2013 - link
Bulldozer is also mostly inclusive. Not strictly inclusive, but certainly not exclusive (you really wouldn't get such a thing from a write-through L1 cache).
whyso - Thursday, May 23, 2013 - link
Ahh, AMD, I love your marketing slides. Let's compare battery life and EXCLUDE the screen. Never mind that the screen consumes a large amount of power and that when you add it to the total, the battery life savings go down tremendously. (That's why Sandy Bridge -> Ivy Bridge didn't improve battery life that much on mobile.) Let's also leave out the rest-of-system power and SoC power for Brazos. It also looks like the system is using an SSD to generate these numbers, which, looking at the target market, almost no OEM will do.
extide - Thursday, May 23, 2013 - link
It's a perfectly valid comparison to make. All laptops will include a screen and the screen has nothing to do with AMD (or Intel).
Spunjji - Friday, May 24, 2013 - link
Every CPU manufacturer does that... why would they include numbers they have no control over?
araczynski - Thursday, May 23, 2013 - link
Can anyone clue me in as to how AMD got the rights to make 'PUs for both of the consoles? Was it just bang/$ vs. Intel/IBM/etc.? Not a fanboy of either camp (AMD/Intel), just curious.
Despoiler - Thursday, May 23, 2013 - link
It was purely the fact that they have an APU with a high-end GPU on it. Intel is nowhere near AMD in terms of top-tier graphics power. Nvidia doesn't have x86. The total package price for an APU vs. a separate CPU/GPU made it impossible for an Intel/Nvidia solution to compete. The complexity is also much lower in an APU system than with a CPU/GPU: a discrete GPU needs a slot on the mobo, and you have to cool it as well as the CPU. Less complexity = less cost.
araczynski - Thursday, May 23, 2013 - link
thanks.
tipoo - Thursday, May 23, 2013 - link
Multiple reasons. AMD has historically been better with console contracts than Nvidia or Intel; those two want too much control over their own chips, while AMD licenses them out and lets Sony or MS do whatever they want with them. They're probably also cheaper, and no one else has an all-in-one APU solution with this much GPU grunt yet.
araczynski - Thursday, May 23, 2013 - link
thanks.
WaltC - Thursday, May 23, 2013 - link
Good article! As usual, it's mainly Anand's conclusions that I find wanting... ;) Nobody's "handing" AMD anything, as far as I can see. AMD is far, far ahead of Intel on the GPU front and has been for years--no accident there. AMD earned whatever position it now enjoys--and it's the only company in existence to go head-to-head with Intel and beat them, and not just "once," as Anand points out. Indeed, we can thank AMD for Core 2 and x86-64; had it been Intel's decision to make, we'd all be puttering happily away on dog-slow, ultra-expensive Itanium derivatives of one kind or another. (What a nightmare!) Intel invested billions in a worldwide retool for RDRAM while AMD pushed the superior market alternative, DDR SDRAM. AMD won out there, too. There are many examples of AMD's hard work, ingenuity, common sense and lack of greed besting Intel--far more than just two. It's no accident that AMD is far ahead of Intel here: as usual, AMD's been headed in one direction and Intel in another, and AMD got there first.
But I think I know what Anand means, and that's that AMD cannot afford to rest on its laurels. There's nothing here to milk--AMD needs to keep the R&D pedal to the metal if the company wants to stay ahead--absolutely. Had the company done that pre-Core 2, while Intel was telling us all that we didn't "need" 64 bits on the desktop, AMD might have remained out front. The company is under completely different management now, so we can hope for the best, as always. Competition is the wheel that keeps everything turning, etc.
Sabresiberian - Thursday, May 23, 2013 - link
The point AnandTech was trying to make is that no one is stepping up to compete with AMD's Jaguar, and so they are handing that part of the business to AMD - just as AMD handed their desktop CPU business to Intel by deciding not to step up on that front. If you don't do what it takes to compete, you are "handing" the business to those who do. This is a compliment to AMD and something of a slam on the other guys, not a suggestion that AMD needed some kind of charity to stay in business here.
I want to suggest that you are letting a bit of fanboyism color your reaction to what others say. :)
Perhaps if AMD had been a bit more "greedy" like Intel is in your eyes, they wouldn't have come so close to crashing permanently. Whatever, it has been very good to see them get some key people back, and that inspires hope in me for the company and the competition it brings to bear. We absolutely need someone to kick Intel in the pants!
Good to see them capture the console market (the two biggest, anyway). Unfortunately, as a PC gamer that hates the fact that many games are made at console levels, I don't see the new generation catching up like they did back when the PS3 and Xbox 360 were released. It looks to me like we will still have weaker consoles to deal with - better than the previous gen, but still not up to mainstream PC standards, never mind high-end. The fact that many developers have been making full-blown PC versions from the start instead of tacking on rather weak ports a year later is more hopeful than the new console hardware, in my opinion.
blacks329 - Thursday, May 23, 2013 - link
I honestly expect the fact that both the PS4 and X1 are x86 will benefit PC games quite significantly as well. Last gen, devs initially developed for the 360 and ported over to the PS3 and PC, and later in the gen shifted to the PS3 as the lead platform, with some using PCs. I expect now, since porting to the PS4 and X1 will be significantly easier, PC will eventually become the lead platform, with games scaled down accordingly for the PS4 and X1.
As someone who games more on consoles than PCs, I'm really excited for both platforms, as devs can spend less time tweaking on a per-platform basis and spend more time elsewhere.
FearfulSPARTAN - Thursday, May 23, 2013 - link
Actually, I'm pretty sure 90% still made their games on Xbox first and then ported to other platforms. However, with all of them (excluding the Wii U) being x86, the idea of them porting down from PC is quite possible, and I didn't think about that. It would probably start to happen mid to late in the gen, though.
blacks329 - Thursday, May 23, 2013 - link
I know it's definitely not that high for any individual platform, but I do remember a lot of major publishers - Ubi, EA and a bunch of other smaller studios - saying (early-to-mid gen) that because porting to the PS3 was such a nightmare and so resource-intensive, it was more efficient to spend extra resources initially, use the PS3 as the lead, and then have it ported over to the 360, which was significantly easier.
While I'm sure quite a large chunk still use the 360 as their lead platform, I would say 90% was probably true very early in this gen and has since dropped, becoming much closer between the 360 and PS3.
Although at this point, both architectures are well enough understood and accounted for by most engines that it should be easier to develop for both regardless of which platform is started with.
mr_tawan - Sunday, May 26, 2013 - link
I don't think using x86 will benefit devs as much as many expect. Sure, using the same hardware-level architecture may simplify low-level code like asm, but seriously, I don't think many devs use asm intensively anymore. (I worked on current-gen console titles for a little while and never wrote even a single line of asm.) Current-gen games are complex and need the best software architecture, otherwise you end up with a delayed-to-death shipping schedule. Using asm would be premature optimisation that gains little to nothing.
What would really affect devs heavily is the SDK. The XB1 uses a custom OS, but the SDK should be close to Windows' DirectX (just like the XB360). The PS4, if it's done in the same fashion as the PS3, would use a custom-made SDK with an OpenGL/OpenGL ES API (the PS3 uses OpenGL ES, if I'm not mistaken). It would need another layer of abstraction to make it fully cross-platform, just like the current generation.
The one thing that might be shared across the two platforms is the shader code, if AMD can convince both MS and Sony to use the same language.
These are only guesses; I might be wrong.
mganai - Thursday, May 23, 2013 - link
That, and Intel's been making a bigger push for the smartphone market; it even says so in the article!
Silvermont should change things up quite favorably.
mschira - Thursday, May 23, 2013 - link
Well, all this is pointless if nobody makes good hardware using it.
It's the old story. The last-generation Trinity would have allowed very decent mid-range notebooks with very long battery run time and more than sufficient power at reasonably low cost.
Have we seen anything?
Nope.
So where is a nice 11" Trinity laptop?
Or a 10" Brazos?
All either horrible cheap Atom or expensive ULV Core anything.
Are the hardware makers afraid that AMD can't deliver enough chips?
Are they worried about stepping on Intel's toes?
Are they simply uncreative, all running in the same direction some stupid mainstream guide tells them to?
I suspect it is largely the latter - and most current notebooks are simply uncreative. The loss of sales comes as no surprise, I think. And it's not all M$'s fault.
M.
Mathos - Thursday, May 23, 2013 - link
It could be another instance of Intel paying OEMs not to use certain AMD parts. They've done it before; I wouldn't be surprised if it happens again in areas where AMD might have a better component.
But it's also not totally true. Having worked at Wal-Mart and other big chain stores, I can tell you that many do carry laptops and ultrathins that use Trinity A-series chips and Brazos E-series chips. But right now, everyone still wants that iPad or Galaxy Tab. And in general, the only people I saw buying laptops and ultrathins were the back-to-school or back-to-college crowds. And of course the Black Friday hordes.
And with AMD having both next-gen consoles under their belt, they and many OEMs may be able to leverage that to drive sales of Jaguar-based systems.
Gest - Saturday, May 25, 2013 - link
So does Jaguar have any new hardware instructions that Intel processors don't? (Will Intel add them in Haswell?) I think game makers will use them actively during the consoles' lifetime.
scaramoosh - Monday, May 27, 2013 - link
Doesn't this just mean the console CPU power is lacking compared to what PCs currently have?
Silma - Wednesday, May 29, 2013 - link
Absolutely.