140 Comments

  • North01 - Monday, February 12, 2018 - link

    "When we’re looking at competitor devices we see only the iPhone X able to compete with the last generation Snapdragon 835 devices – however with a catch. The A11 is severely thermally constrained and is only able to achieve these scores when the devices are cold. Indeed as seen from the smaller score of the iPhone 8, the SoC isn’t able to sustain maximum performance for even one benchmark run before having to throttle."

    I was curious about that, as the iPhone X has a lower score on Futuremark's website.

    Futuremark:
    https://www.futuremark.com/hardware/mobile+sling_s...

    3DMark Sling Shot Extreme Unlimited

    iPhone X (A11)
    Average score: 3175
    Physics score: 2206
    Graphics score: 3625

    iPhone 8 (A11)
    Average score: 2802
    Physics score: 1988
    Graphics score: 3166

    Pixel 2 (Snapdragon 835)
    Average score: 3531
    Physics score: 2951
    Graphics score: 3723

    The Snapdragon 845 is looking to be rather impressive, hopefully we can find out more about its sustained performance when it launches in upcoming devices.
  • Matthmaroo - Monday, February 12, 2018 - link

    Yes, but the A11 was already deployed; by the time this SoC is in the hands of people, the A12 will be a factor.

    Good on Qualcomm for catching up but still 6 months to a year behind
  • North01 - Monday, February 12, 2018 - link

    I'm a bit confused by your comment. The comparison was between the Snapdragon 835 (H1 2017) and the A11 (H2 2017).
  • Commodus - Monday, February 12, 2018 - link

    I believe he was referring to the 845 comments at the end. And this is the usual yearly concern: people wax ecstatic, thinking Qualcomm has finally caught up to or beaten Apple... and forget that the phones with the new chip ship half a year (or longer) after Apple's, so any advantage is short-lived.
  • vanilla_gorilla - Monday, February 12, 2018 - link

    That's an interesting perspective. Couldn't you just as easily say that any time an iPhone is launched with a new SoC, its advantage will be short-lived because the next Qualcomm SoC will be faster? It seems to me they're just leapfrogging each other each time.
  • Matthmaroo - Saturday, February 17, 2018 - link

    So the next gen Qualcomm chip gpu beats apples last gen tech.

    6 months from now we will have a phone with the 845 and 7 months from now we will have the A12 - that’s the actual comparison

    Huge win

    They still can't keep up with Apple's single-thread performance.
  • ciparis - Sunday, February 18, 2018 - link

    "So the next gen Qualcomm chip gpu beats apples last gen tech."

    Except it doesn't -- not remotely. Qualcomm is reliably 2-3 generations behind in performance of flagship phones.
  • Santoval - Tuesday, February 20, 2018 - link

    Which review did you read (if you read any)?
  • lilmoe - Monday, February 12, 2018 - link

    The A11 isn't, in any way, superior to the SD835 for mobile. Get this through your heads people. Have you been under a rock? Haven't you been reading about the failures of Apple's core architecture? What else do you want as proof?

    SMH....
  • neoncat - Tuesday, February 13, 2018 - link

    <Kevin Bacon in Animal House>

    REMAIN CALM!! ALL IS WELL!!!!!

    </Kevin Bacon in Animal House>
  • close - Tuesday, February 13, 2018 - link

    A11 does pretty well for a SoC that has only 2 high performance cores, versus the 835's 4.

    So shake it harder. maybe something good comes out :).
  • lilmoe - Tuesday, February 13, 2018 - link

    Enjoy your 2 high performance cores at 600MHz, cute little iFan, since that's the max clock speed they can run at and still be relatively efficient, at least according to Apple.
  • techconc - Wednesday, February 14, 2018 - link

    Wow, this is certainly one of the more delusional posts I've ever seen. The A11 is head and shoulders above the SD835; it's not even close. What's worse, this is the period where Qualcomm is supposed to match or exceed Apple's chip. Outside of the 3DMark anomaly, that doesn't appear to have happened.
  • Matthmaroo - Saturday, February 17, 2018 - link

    Dude, do you really believe your own crap?
  • Eximorph - Tuesday, February 13, 2018 - link

    Sorry to disappoint you, but Apple is probably ahead on the CPU; on the GPU it's still behind, and has been for about 3 years. The A11 GPU scores 2475 vs. my LG G5 with the Adreno 530, which scores 2545. iPhone 8 Sling Shot: https://youtu.be/JLTzPawjy-0
  • techconc - Wednesday, February 14, 2018 - link

    I guess it depends on the benchmark you use to make your claim. The GFXBench scores seem to clearly favor the A11.
  • Eximorph - Thursday, February 15, 2018 - link

    Yep, because the A11 can only use older, lighter APIs like OpenGL ES 2.0 or 3.0, but not 3.1 or 3.2 like Android does. Even the Adreno 530 runs them like a champion.
  • techconc - Friday, February 16, 2018 - link

    Apple still keeps legacy support for OpenGL but has moved on to Metal. At the end of the day, it's how fast it performs a task that matters, not which API it's using. Maybe someday when Vulkan matures a bit and enough of the Android user base gets on a modern Android OS release there will be some level of API parity, but not today.
  • Matthmaroo - Saturday, February 17, 2018 - link

    Lol, Android and parity across devices.

    Almost every Android phone is abandoned by its manufacturer after 6 months.
  • Eximorph - Tuesday, February 20, 2018 - link

    You are wrong, hahaha, it's 2 years. But unlike Apple, Android manufacturers don't reduce your processor speed because of a cheap battery and bad battery life, hahaha.
  • Eximorph - Sunday, February 18, 2018 - link

    You should read a little before posting comments. The speed of the GPU depends on the API; that's why Apple changed from OpenGL ES to Metal, because Metal is low-overhead, so performance is higher. And that's why when someone tests a GPU it has to be done under fair conditions, which is where 3DMark is good. And as you can see, even the Adreno 530 on the same API has better performance than the A11 GPU. Even on T-Rex onscreen, the performance of the new A11 GPU is the same as the Adreno 530: 59 fps for the A11, 60 fps for the Adreno 530 on the OnePlus 3T. One more time: Apple today is almost 3 years behind. Vulkan has already been used in some games, and you want to know the funny thing? Vulkan is on version 1 and offers the same performance as Metal, and Metal is on version 2, hahaha.
  • Ratman6161 - Thursday, February 15, 2018 - link

    "Good on Qualcomm for catching up but still 6 months to a year behind"

    I guess it depends how you define behind. With my trusty old (and completely paid-for) Note 5, I have had a phone since 2015 that is plenty fast for everything I do. So it's all just academic for me. The days when I needed a faster CPU are long since over.
  • SoC lover - Friday, March 2, 2018 - link

    The A11 Bionic has only 6 cores but is still a powerful flagship chip, while the Snapdragon 835/845 has 8 cores. So I'm thinking: what if Apple made a new chipset with 8 cores? That would be so powerful.
  • mfaisalkemal - Monday, February 12, 2018 - link

    I think AnandTech got that score because the device ran cold; Futuremark's scores come from normal conditions.
  • BenSkywalker - Monday, February 12, 2018 - link

    We really should tip our hats to Qualcomm's legal team for this one. It is amazing their engineers have managed to push out a GPU that can edge out the Tegra X1 a mere three years after it came out.

    Sling Shot Ex: 5360 graphics on a three-year-old SoC. Really shows that what truly matters in this market is top-tier lawyers, and some fourth-tier engineers.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    One uses ~12 watts and the other uses ~4W, great comparison there.
  • BenSkywalker - Monday, February 12, 2018 - link

    Two things. One, your power numbers are going to have to be sourced; the highest number I could ever find was 10 watts (and that was using a UE4 torture test). The only power draw numbers I could find for Manhattan had it sucking down a whopping 1.51 watts for the GPU (just the GPU, and clocked to match some crappy Apple SoC's performance).

    https://www.anandtech.com/show/8811/nvidia-tegra-x...

    Second thing- 20nm vs 10nm. On an engineering basis, this is a sad part. Qualcomm's legal posturing is the only reason they are remotely viable.

    We are being held back to a staggering degree because of Qualcomm's strong arm tactics. The performance numbers speak for themselves, they are years behind.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    That's the power on a Shield TV at maximum performance; source: me. You're claiming that QC only now reached that performance level, so it's only fair to compare power at that level.

    Qualcomm is the one far ahead, your conspiracy theories make no sense.
  • BenSkywalker - Monday, February 12, 2018 - link

    You are comparing the power draw of an entire device to the SoC alone, and you are comparing a 20nm part to a 10nm part with your just plain wrong power numbers.

    I linked it: running Manhattan, the X1's GPU was using less than half the power that the 845 was reporting. The comparison wasn't exactly fair, as they weren't measuring the same thing, but it is closer to accurate than what you are trying to imply. Also, 20nm vs 10nm LPP: are you being intentionally obtuse here, or do you not have a clue what you are talking about whatsoever?

    This part, at least on the GPU end, isn't just bad, it is pathetic. It can barely edge out a three year old SoC. It is a joke.
  • darkd - Monday, February 12, 2018 - link

    The x1 by itself is a 10-12W TDP part. You can Google this easily. Qualcomm SoCs are 4-5W typically. Also note you are comparing power draw at 33 fps of a different benchmark that runs ~40% faster (Manhattan 3.0 vs. 3.1), which makes no sense.

    The x1 at peak can hit ~60 fps on Manhattan 3.0. The 835 from last year can also do that, but at 2-3x less power.

    If you want to talk about the ~33 fps on Manhattan 3.0 where the 1.5W you mentioned was measured, the Adreno 430 could do that about 3 years ago. Because that's peak clocks for the 430, it would likely be using 2+ W of GPU power, which is more. The X1 was more power efficient than the 430, congrats. It hasn't been since then, however, as Qualcomm improved their Manhattan score 60-70% in the 530. All of these numbers are easily obtained on gfxbench.com.

    And I dunno what legal study you've done, but to imply Qualcomm lawyers can keep OEMs from using competitor SoCs is completely unfounded. Many of them have and do use other SoCs. They tend not to use Nvidia for mobile anymore, however, as the recent Nvidia parts all have too high a TDP.
  • BenSkywalker - Monday, February 12, 2018 - link

    Legal studies I am lacking, you are correct, alas you can't keep out of the press how much trouble QC is in for their practices-

    https://www.forbes.com/sites/greatspeculations/201...

    China fined them for a billion already, Korea for $850 Million, Taiwan for another $750 Million- US suit is in progress.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    No, I'm comparing the same methodology on both devices: active system power. The Shield TV on the X1 does 12W at 61fps. The QRD845 did 82fps at 4.4W.

    Nowhere in this article nor from Qualcomm is a pure GPU power figure published; it's always system power. Your Nvidia figure is running at half performance, meaning up to a 3x better efficiency point. The GPU at full power is at 5-6W, and that's why the Shield and Switch need an active fan to cool them.

    The gap is not closed by process normalisation.
  • BenSkywalker - Monday, February 12, 2018 - link

    Samsung says the gap is entirely closed by process normalisation alone (20nm to 14nm: 35% less power; 14nm to 10nm LPE: 40%; 10nm LPE to 10nm LPP: 15%). Their claims, but hey, they just actually make the chips. What do the people who run the 10nm fab know compared to you, right?

    Three years later, they have a competitive part, you want to consider that a QC win, well, you are clearly their target customer. Spend more on lawyers, less on engineers :)
  • mfaisalkemal - Monday, February 12, 2018 - link

    After calculating with Samsung's process normalisation, Nvidia is still around 20% worse than Qualcomm on GFXBench 3.0 Manhattan:
    Nvidia tegra x1 : 12W * 0.3315 = 3.978 W (Normalize from 20nm to 10nm)
    61FPS @ 3.978W -->15.33FPS/W

    Adreno 640
    82FPS @ 4.4W --> 18.63FPS/W
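    The arithmetic above can be sketched in a few lines. This is a rough sketch: the 0.35/0.40/0.15 node-to-node power reductions are Samsung's claimed figures from the earlier comment, and the fps and watt inputs are the numbers quoted in this thread, not independent measurements.

```python
# Sketch of the process-normalization arithmetic in the comment above.
# Node-to-node power reductions (20nm->14nm: 35%, 14nm->10nm LPE: 40%,
# 10nm LPE->10nm LPP: 15%) are Samsung's claimed figures; the fps/W
# inputs are the values quoted in this thread.
def normalize_power(watts, reductions):
    """Apply successive fractional power reductions to a power figure."""
    for r in reductions:
        watts *= 1.0 - r
    return watts

x1_at_10nm = normalize_power(12.0, [0.35, 0.40, 0.15])  # ~3.98 W
x1_eff = 61 / x1_at_10nm      # Tegra X1, normalized to 10nm: ~15.3 fps/W
adreno_eff = 82 / 4.4         # QRD845 / Adreno 630:          ~18.6 fps/W
print(round(x1_eff, 2), round(adreno_eff, 2), round(adreno_eff / x1_eff, 2))
```

    Even granting Nvidia the full claimed process scaling, the normalized efficiency ratio still favors the Adreno 630 by roughly 20%.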
  • mfaisalkemal - Monday, February 12, 2018 - link

    i mean adreno 630 lol
  • BenSkywalker - Monday, February 12, 2018 - link

    Could you provide some links? Seems odd that the Adreno 640 uses the exact same wattage on two different benches.

    Also- your wattage consumption assumption negates the earlier link I provided showing a massive efficiency gain in terms of performance/watt once you moved away from nearing thermal limits. Either you would be able to clock the part higher at the same power level(reduced leakage, better matching of optimal power usage for the die etc) or you would use less power- not to mention you would no longer be using active cooling for something that low power(again reducing power draw).

    That would assume, of course, that no other improvement was possible in the last *THREE YEARS* since we saw this level of performance.

    BTW- We could also ask things like why is tessellation performance still *half* of a three year old SoC, but that would imply that Qualcomm actually cared about moving forward with technology.
  • mfaisalkemal - Monday, February 12, 2018 - link

    That data was from Andrei Frumusanu's comment, and I think he tested it but didn't publish it. Yup, you're right that Nvidia is better on the tessellation offscreen test, but the Adreno 630 is better at texturing offscreen (15424 MTexels/s vs 13427 MTexels/s), and I'd guess that in GFXBench's Car Chase test (a tessellation test) the Adreno 630 beats the Tegra X1 even though its offscreen tessellation is worse.

    As for why your link shows such low wattage (1.51W): I think it's because Nvidia only estimated GPU power without RAM power, while in this article Andrei and Ryan estimate system power (device power minus idle power, such as the display, etc.).
  • Kvaern1 - Tuesday, February 13, 2018 - link

    I'd be much more disappointed with NVidia if Qualcomm could make GPUs as well as them.

    Now, if only NVidia could make a competitive CPU.
  • Eximorph - Tuesday, February 13, 2018 - link

    I have the Shield TV, Shield K1, and an LG G5 (Tegra X1, Tegra K1, and Adreno 530). Let me tell you, the Tegra X1 is powerful, but let's be honest. First, the Tegra X1 is connected to a power source; second, it runs in max performance mode at all times; third, it has a fan; and fourth, look at the specs: 256 cores at 1000 MHz vs. 256 cores at 624 MHz on the Adreno 530. We're talking about 376 MHz more per core for the Tegra X1, and a screen resolution of 1080p vs. 2K. Now let's underclock the X1 to 624 MHz, put a 2K screen on it, and see what happens. The result with just a 2K screen: Manhattan 3.0 offscreen, Google Pixel C 46 fps, Adreno 530 46 fps. So the truth here is that Qualcomm is a beast on the GPU side, and Nvidia, Apple, and ARM have a lot to learn. Qualcomm is not behind; Qualcomm is far ahead: a really small chip with really low power consumption and great performance. The Tegra X1 has nothing on the Adreno 630.
  • Eximorph - Tuesday, February 13, 2018 - link

    https://wccftech.com/snapdragon-820-benchmarks/
  • gamertaboo - Monday, February 12, 2018 - link

    Well, you sure see wildly different scores if you compare them using Geekbench. 4260 vs 1998 on single core, and 10,221 vs 6765 on multi-core I believe. Apple's chips are always faster, always. It's literally the one thing Apple at the very least, always does well.
  • Gk12kung - Tuesday, February 13, 2018 - link

    Dude, you're extremely mistaken; the processors in the X and the 8 are binned differently. Since Apple's GPU is in-house this year, they couldn't get enough high-quality GPUs in time, so the X uses a higher-binned GPU than the 8, hence the higher performance, and the X also uses faster RAM than the 8, which is another factor in GPU scores. I've used an 8 and an X, and there is rarely any throttling. The D220AP is the lower GPU revision used in the 8, and the D221AP is the X version with the higher-binned GPU.
  • BronzeProdigy - Tuesday, February 13, 2018 - link

    The GFXBench comparison is also wrong; the A11's GPU is behind. If you look at the sections for the X you'll see N/A. Look at the SD835/5T scores and you'll see they match up, but the score on that test either doesn't exist or doesn't match the X's score. This is because they failed to note that GFXBench has different versions (e.g. 3.1, 2.7, 3.0), so they wrote down the wrong scores, thinking the X's scores were something they're not.

    https://uploads.disquscdn.com/images/9c0ebbebc7b3c...
  • peevee - Tuesday, February 13, 2018 - link

    Except in real life users do not run SPEC for a long time. They load and start apps, or process photos. It all takes less than a second.
  • rocky12345 - Tuesday, February 13, 2018 - link

    Yeah, because no one ever plays games on their mobile devices, which can go from 5 minutes to 5 hours of use and will heat up any device, no matter who makes it. So if you have a device from a mobile maker that is known to heat up and then throttle from that heat, then yep, it won't be as fast as in the benches. I don't care if Apple makes them or Samsung: it happens to every device, but it happens worse on some than on others.
  • Stochastic - Monday, February 12, 2018 - link

    This looks like a nice, albeit not earth-shattering, overhaul of the 835.

    Any chance we'll see a Google SoC in the Pixel 3 this year? Or is it more realistic to expect that in the Pixel 4/5? It's a bit boring seeing Snapdragon SoCs in practically all Android flagships.

    Also, what's up with Chrome's lackluster Javascript performance? You mention that the Nitro engine Apple uses is much better optimized. You would think with all the competition in the browser space these days and Google's vested interest in the future of the web that they would push Javascript performance further.
  • Jon Tseng - Monday, February 12, 2018 - link

    Not going to happen for a bunch of reasons, not least bc Google doesn't have baseband IP.
  • Dr. Swag - Monday, February 12, 2018 - link

    I don't think we'll see Google SoCs unless the Pixel really gains market share. It's not worth designing and fabricating an SoC if you only sell a few million phones.
  • Stochastic - Monday, February 12, 2018 - link

    Yes, but this would help them gain marketshare. They could perhaps even license the SoC to other OEMs in order to advance the Android hardware ecosystem as a whole. See this: https://www.androidauthority.com/google-dabbling-s...
  • techconc - Wednesday, February 14, 2018 - link

    Regarding Javascript performance, this article is placing far too much emphasis on the Javascript engine. Yes, Apple's Nitro engine is ahead of Google's V8 engine. However, the majority of the speed difference comes down to the fact that Javascript is inherently single-threaded. (Yes, I know work is being done to attempt to address this, but it's not there yet.) That, coupled with the fact that Apple's single-core performance is WAY ahead of everyone else on ARM, is why you see such a big difference in performance.
  • iwod - Monday, February 12, 2018 - link

    Waiting to see what happen in A12. It will likely just be a A11 in 7nm, allowing peak performance for longer, maybe with some GPU updates. But the single-thread performance of the S845 still has some way to go; at least it is improving.
  • ZolaIII - Monday, February 12, 2018 - link

    For, let's say, user experience there won't be a visible difference between the A11 and S845. The S845 has DynamIQ, and the combination of it and the new A75-based cores gives a 50% speed-up in UI responsiveness; Apple's A11 doesn't have DynamIQ, in fact it lags behind significantly, and it's Apple's first chip ever to implement even a big.LITTLE HMP setup. iOS lacks suspending apps to RAM, which every Linux derivative including Android has, so that pretty much melts down Apple's CPU advantage in running and re-running apps. While the A75 is slower than the custom Apple CPU cores, it's also significantly more efficient, and current Apple graphics can't even match the Adreno 5xx series' efficiency, while the 6xx series is 30% more efficient still. In regular use there won't be any noticeable difference, except Android phones with the S845 will have longer screen-on time.
  • Dr. Swag - Monday, February 12, 2018 - link

    The A11 is quite a bit ahead in CPU performance, and also the A10 was the first with big.LITTLE.

    Qualcomm may lead in graphics, but Apple is much closer than any other vendor out there.
  • ZolaIII - Monday, February 12, 2018 - link

    Yes, first in the Apple world, and 4 years behind Android SoCs...
  • close - Tuesday, February 13, 2018 - link

    The fact that they could get by without big.LITTLE for so many years and still top the charts says a lot about their merits.

    It also says a lot about your fanboi attitude.
  • id4andrei - Tuesday, February 13, 2018 - link

    Topping the charts until they throttled from heat, and later from failed power delivery systems. As long as the product - the iPhone - cannot sustain that performance, then I'm afraid your bragging rights become invalid.
  • star-affinity - Wednesday, February 14, 2018 - link

    ”…and later from failed power delivery systems”.

    That is only if the battery is in a bad state or worn out. My iPhone 6 is almost three years old and my battery is still working well (620 cycles), with no down-throttling (according to Geekbench 4). I think the down-throttling-due-to-a-bad-battery issue on iPhones has been blown out of proportion.
  • id4andrei - Wednesday, February 14, 2018 - link

    Just like with Samsung, if the issue persists on too many devices per average sample - and it did, otherwise Apple wouldn't have issued the "fix" - then Apple should have issued a recall. They didn't. They kneecapped the troubled devices, thus gaming the strict warranty or insurance conditions.
  • techconc - Wednesday, February 14, 2018 - link

    Apple's A series chips have been far more immune to heat based throttling than any equivalent Android phone in the past. If that's catching up with Apple now, that would be the first time. Sadly, Anandtech has chosen not to do an iPhone review this year.
  • tipoo - Monday, February 12, 2018 - link


    >It will likely just be a A11 in 7nm

    There's almost no precedent, in fact no precedent, for Apple using a die shrink on an A-series chip without further tweaks. The A7 to A8 was the closest, but there were still CPU tweaks while they had the chance. Every generation has improved IPC along the way.
  • name99 - Monday, February 12, 2018 - link

    "Waiting to see what happen in A12. It will likely just be a A11 in 7nm"

    Based on what? This ridiculous claim flies in the face of the five past generations of Apple CPU updates.
  • id4andrei - Monday, February 12, 2018 - link

    Based on Apple's kneecapping of the previous three generations of Apple CPUs. They pushed too hard on core performance, with complete disregard for the power constraints of the "thin" design (coupled with a flawed power delivery system). You've seen the result. Defective devices that cannot sustain their own performance across a single year.
  • star-affinity - Wednesday, February 14, 2018 - link

    ”Defective devices that cannot sustain their own performance across a single year.”

    Not necessarily true! If the battery is healthy there's no down-throttling. I think you have to be an extremely heavy user to wear out the battery in one year. My iPhone 6 has used about 620 cycles on its original battery and is almost three years old. I use the phone every day and never turn it off. No down-throttling for me, according to Geekbench 4.
  • id4andrei - Wednesday, February 14, 2018 - link

    Geekbench themselves found an instance of an iPhone 7 being throttled. Battery life is supposed to take a hit as time goes on, not performance. This constitutes bad design and, indirectly, planned obsolescence.
  • techconc - Wednesday, February 14, 2018 - link

    The issue isn't to say that any of the phones can't be throttled if they have a bad battery. Batteries can go bad for simple things like being exposed to heat for prolonged periods of time. I'll back up star-affinity's claim by saying that I have a 6, 6s, 7 and X in use in my household. Even the oldest iPhone 6 that has been through heavy use on the original battery has not been throttled. According to Apple, it's a pretty rare condition where the throttling would actually occur.
  • id4andrei - Thursday, February 15, 2018 - link

    Your experience could very well be that of the majority, but this does not mean a larger issue did not exist; otherwise Apple wouldn't have recalled a sample. After said recall, they still encountered the issue (through disgruntled owners) and decided to cover it up with a "fix". This is the crux of the matter, not bad batteries. Apple faced a total recall, and instead they kneecapped all potentially faulty devices, thus removing themselves from complaints and warranty replacements. Most importantly, they could get away with under-designing the battery for another two years.
  • techconc - Friday, February 16, 2018 - link

    Apple did not face a total recall. That is simply nonsense. I've seen random shutdowns on Android phones and on my older PC laptops. That's what happens when batteries degrade to a certain point. Unless there was a specific issue with a batch of batteries, there wouldn't be a recall.

    While I'd agree that Apple could have been more transparent with their updates, I also don't believe they were specifically trying to hide something. Go visit various Android support boards and you'll see plenty of battery related problems and devices randomly shutting down. This is not unique to Apple. What is unique here is that Apple provided a technical solution to mitigate these rare conditions for their customers. Their execution wasn't flawless, but they've done more than other vendors have.
  • StormyParis - Monday, February 12, 2018 - link

    Would it be possible to have a bit more perspective? My issue is not "which flagship to buy", but whether it's worth it at all to buy this year's or last year's flagship (current answer: no, except for photographers).
    Extending the comparison to mid-range and even low-end check would be helpful.
  • StormyParis - Monday, February 12, 2018 - link

    "Chipsets" not "check"
  • Alex_Haddock - Monday, February 12, 2018 - link

    On a standard mobile device such as a smartphone the GPU improvements are pretty uninteresting to me but I assume for those into mobile vr applications it would be beneficial (but more so in dedicated devices?). This does look good with respect to Windows 10 on ARM though, hopefully an 845 based device won’t be too long after the launch 835 ones.
  • jjj - Monday, February 12, 2018 - link

    In high-end phones perf doesn't quite matter anymore, but the A75 looks interesting.
    You seem not to like it, and it's very suspicious that you have GPU power numbers but not CPU; that's the most important info we needed here, the only place at risk of a big negative change. The one thing you had to look at, and you don't...
    Anyway, since you avoid looking at what matters (and that really stinks), excluding power numbers, why wouldn't you like the A75? It's a tiny core that gets quite awesome perf numbers; can't wait for server SoCs at 4GHz or more. 1, or almost 1, per MHz in GB integer for such a small core is awesome. If power is not terrible, it sounds like a fantastic core. FP is a bit behind, but maybe that's OK nowadays.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    I'm not sure you read the article before writing your criticism, we explicitly talked about CPU power at the end:

    > And while we weren’t able to test for system power efficiency improvements for this preview, we weren’t left empty-handed and were able to quickly do a CPU power virus on the QRD845. The results there have turned out promising, with 1W per-core and slightly under 4W for four-core power usage, which are very much in line with the Snapdragon 835.

    I also explicitly state that it's disappointing that it didn't reach many of ARM's performance targets, in some tests it merely falls back to a clock frequency advantage.
  • jjj - Monday, February 12, 2018 - link

    My bad, I only looked at the benchmarks, for the most part. Thanks for pointing out the power numbers. And if I may suggest, that info, even if limited, should be in the CPU section and visible.

    In GB, I am quite happy with it. To quote GB's knowledge base: "Geekbench 4 uses a Microsoft Surface Book with an Intel Core i7-6600U processor as the baseline with a score of 4,000 points."
    The clocks there peak at 3.4GHz, so that means 1.1764 points per MHz. Granted, I'm not sure how much of that stands with the latest versions of the benchmark; it might have changed since v4.0. Anyway, you've got Intel at 18% higher in integer and 59% in FP, clock for clock, but with a many-times-larger core; we'll see how power compares at 4GHz or above.
    I'm impressed by perf density, not necessarily absolute perf, and I also remind myself that very high ST perf is only needed in some niches. They need further gains with future gens, but they are on the right track, and it's starting to be exciting.

    Any chance you guys could do a A75 vs Coffee Lake, Ryzen and Atom and not focus on only SPEC?
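    The per-MHz arithmetic in the comment above can be sketched as follows. The i7-6600U baseline score and clock are the figures quoted from Geekbench's knowledge base; the Snapdragon 845 score and clock below are hypothetical placeholder values for illustration, not measured results.

```python
# Points-per-MHz normalization sketched in the comment above.
# Baseline: Geekbench 4's i7-6600U reference (4000 points, 3.4 GHz boost).
# The SD845 score/clock are hypothetical illustration values.
def points_per_mhz(score, clock_mhz):
    return score / clock_mhz

intel_baseline = points_per_mhz(4000, 3400)   # ~1.1765 points/MHz
sd845_example = points_per_mhz(2400, 2800)    # hypothetical single-core run

print(round(intel_baseline, 4))
print(round(sd845_example / intel_baseline, 2))  # relative per-MHz perf
```

    The same division works for any score/clock pair, which is all the clock-for-clock comparison in the comment amounts to.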
  • ZolaIII - Monday, February 12, 2018 - link

    Geekbench is highly unoptimized for x86 (and probably not well optimized for ARM either), so for any real comparison you would have to compile it from source with optimal flags...
  • name99 - Monday, February 12, 2018 - link

    Of course it is ("highly unoptimized for X86")...
    And you learned this how, exactly?
    Hell, why don't you throw in the obligatory "John Poole is on the Apple payroll" while you're about it?
  • ZolaIII - Monday, February 12, 2018 - link

    As it is! And it's Open Source, but people like you don't even know what that is. So you fetch the code, add optimization flags, and compile it for a given target instead of using a generic build made for a generic target.
  • Wilco1 - Monday, February 12, 2018 - link

    Geekbench isn't open source. Neither is it unoptimized for x86, the same compiler and the same options are used across multiple OSes and ISAs.
  • Pdimri - Monday, February 12, 2018 - link

    What are the chances of Google coming up with its own SoC in the next Pixel device to compete with the A11 chip? Snapdragon chipsets are lagging behind year after year, and this gap is going to widen even more.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    Zero in my opinion.
  • Speedfriend - Monday, February 12, 2018 - link

    Andrei - is that zero for them doing their own chip, or zero for it being competitive with Apple's?
  • Lodix - Monday, February 12, 2018 - link

    That they make their own chip.
  • ZolaIII - Monday, February 12, 2018 - link

    Actually the A75 is a bit of a letdown, as it's really a refined A73 with three instructions per clock vs two. I assume that with the larger cache, bigger predictor, and everything else, it's also close to 50% larger, while only able to achieve a 20~25% performance advantage. Nevertheless, compared to the A71, which is a similar 3-instructions-per-clock design, the advantage is a nice 30~35%. Neither is really server material, and you know that pretty well (of all people around here). We will have to wait and see what Austin cooks up next.
  • ZolaIII - Monday, February 12, 2018 - link

    One more thing: FP (VFP & especially NEON) got the most significant boost from A73 to A75; that's actually the only real architectural improvement this generation. FP performance is very important as it scales rather well with SMP while integer doesn't. Still, given the MP scaling factor & relative power efficiency/performance, the A55s are a much better target for such workloads, using 25% of the power & achieving 85% of the performance per MHz. ARM's NEON SIMD was only marginally useful before this gen, as on the previous one VFP had 98% of NEON's performance while being much faster to access, so in many real workloads it was actually faster. ARM boosted NEON performance, but in my opinion not nearly enough to move into a higher tier. I do agree with you that integer performance is actually very good for a small, efficient little OoO core, but ARM must do much more on FP/NEON SIMD if it wants its cores to become more competitive in the HPC segment. Actually I see FP performance as the key. Hopefully the next key architectural element they produce will be a unified SIMD with multiply/divide units added to it, as I see that as the best possible scaling/performance improvement & also a way of avoiding dark silicon in the future. Actually, regarding the use of large NEON SIMD blocks for server/scientific HPC workloads, Fujitsu started working on that a long time ago (two-plus years ago). I just wonder what happened with it.
  • iter - Monday, February 12, 2018 - link

    You are confusing integer and floating point with scalar and vector. SIMD units do vector processing, the vector components can be either integer or floating point. Both are equally useful in HPC, and both get a massive boost from SIMD processing. It is the ALU and the FPU units that do scalar processing, that is one number at a time, of integers and floating point numbers respectively. Those are not used for data crunching, but for managing the program flow, which is beneficial since the lower throughput also means lower latency.

    There is no such thing as a free lunch here. If you want to stay at a lower power target, you have to compromise on the SIMD throughput. There is no way to cheat around that. If ARM chips get SIMD units to match x86 counterparts they will also match their higher power usage.
  • ZolaIII - Monday, February 12, 2018 - link

    Lol, both scalar and vector are FP. I ain't confusing anything, you are... SIMDs are rather efficient, more efficient by an order of magnitude compared to the VFP; that's why SIMD arrays find their way into pretty much any special-purpose or general-purpose computing unit. What I described is a massive unified heterogeneous SIMD array... Now think about it.
  • iter - Tuesday, February 13, 2018 - link

    You are such a dummy. Scalar means "one number", vector means "two or more numbers". The number can be an integer or a floating point number. SIMD instruction sets feature dozens of instructions for processing integer numbers, which are essential to image, video and audio processing, which is all stored using integers.

    In fact, the first SIMD implementation to hit consumer products was intel's MMX, which provided ONLY INTEGER operations.

    As I said - scalar operations involve processing one number at a time, and are executed by the ALU or FP unit for integers and real numbers respectively; vector operations involve processing multiple numbers at once, and are handled by the SIMD units, regardless of whether it's integers or reals.
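    The scalar vs. vector distinction above can be sketched in a few lines of Python. This is a pure-software emulation for illustration only; `simd_add4` and its four-lane width are assumptions, not a real instruction:

```python
def scalar_add(a, b):
    # Scalar: one number at a time, as on an ALU (int) or FPU (float).
    return a + b

def simd_add4(va, vb):
    # Vector: one conceptual instruction over four lanes at once,
    # like an MMX/NEON packed add. Lanes can hold ints or floats.
    assert len(va) == len(vb) == 4
    return [x + y for x, y in zip(va, vb)]

print(scalar_add(3, 4))                           # 7
print(simd_add4([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```

    The point being: "scalar vs. vector" is about how many numbers one operation touches, while "integer vs. floating point" is about what those numbers are; the two axes are independent.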
  • lmcd - Monday, February 12, 2018 - link

    Wouldn't get too excited, as A75 was reported to feature a variant of the "Meltdown" bug also affecting Intel CPUs. Performance hit for a patch could be damaging.
  • SirPerro - Monday, February 12, 2018 - link

    I'm more interested in the mid-range processors to drive devices like the Moto G Plus series

    Right now, an SD845 is extraordinarily excessive for like... 95% of the Android use cases.

    It's like... "OK, that 1080 Ti GPU is really nice, but how good is the 1060 I will actually pay for?"
  • imaheadcase - Monday, February 12, 2018 - link

    The irony of all this is that software is going to make more of a difference than this SoC. You can have the best SoC and put it in a shit phone.
  • yeeeeman - Monday, February 12, 2018 - link

    First of all, nice review Andrei, coming from a Romanian guy like you.
    Related to the SD845, this chip is a nice bump over the 835, but I cannot help but wonder if this yearly cadence is really a necessity or just a yearly money grab.
    I want to change my Z3 Compact, an SD801 phone, for something new, but I feel like the best has yet to come. In 2019 we will have 5G modems, 802.11ax Wi-Fi chips, a new uArch from ARM aaand 7nm. This chip is just an intermediate step to have something to sell this year, but in any case, nice work as usual from Qualcomm.
  • Tigran - Monday, February 12, 2018 - link

    Any idea why the S821 outperforms both the S845 and S835 in Geekbench 4 single-threaded floating point performance?
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    The original Kryo had very robust FP execution pipelines while having weaker integer and memory subsystems. Please keep in mind those are normalized per clock, not absolute performance.
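    The per-clock normalization mentioned above is simply score divided by frequency; a toy sketch with made-up scores (purely illustrative, not measured data):

```python
def score_per_ghz(score, freq_ghz):
    # Normalizing a benchmark score by clock approximates per-clock
    # (IPC-like) performance, removing the frequency advantage.
    return score / freq_ghz

# Hypothetical absolute scores, for illustration only:
print(score_per_ghz(2100.0, 2.15))  # older core, lower clock
print(score_per_ghz(2200.0, 2.8))   # newer core, higher clock
```

    With these made-up numbers the older core wins per clock despite the lower absolute score, which is exactly the S821 effect being asked about.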
  • Tigran - Monday, February 12, 2018 - link

    Does it mean the new Kryo in the S835 & S845 compromises FP performance to obtain better integer performance (per clock)?
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    The new Kryos have nothing to do with the microarchitecture of the original S820 Kryo, as the former are ARM designs while the latter was a fully custom Qualcomm design. You can't call it a compromise as it's not a deliberate choice; those are just the differences between the CPUs.
  • jjj - Monday, February 12, 2018 - link

    And efficiency is not factored in, Kryo was not great at that.
  • gregounech - Monday, February 12, 2018 - link

    First word of System performance page should be "Now we see how" and not "How we see how" :)

    Otherwise great job Andrei & Ryan!
  • tuxRoller - Monday, February 12, 2018 - link

    From fd.o:

    "The a6xx GPU is an iteration of the a5xx family so most of the GPU side code
    looks pretty close to the same except for the usual register differences. The
    big different is in power control. On the a5xx there was a rudimentary device
    called the GMU that did some basic power stuff but left most of the complexity
    to the kernel. On the a6xx the power complexity is being moved to a component
    called the GMU (graphics management unit)."

    https://lists.freedesktop.org/archives/dri-devel/2...
  • GC2:CS - Monday, February 12, 2018 - link

    Seems like Qualcomm is back to its pre-810 days. This is definitely going to be the standard Android SoC this year.
    Regarding the entire mobile landscape, I think Qualcomm dropped the ball with legacy ARM designs, and everyone got somewhat lazy with their own designs and IPC. And it looks kind of poor now that Apple is on its sixth generation of custom cores, each bringing a significant improvement.

    The GPU is more impressive. Obviously they've done a lot of work over the years. It is going to be interesting to see if they can claim the crown from Apple this year. On the other hand, Apple got a bit lazy with the GPU too, with the A9 and A10 being excessively power hungry because they reused the same core clocked higher.
    It's super impressive they managed to match the A11 at peak, but they should not slow down, as that is Apple's first custom GPU and the efficiency gain it pulled over the A10 is just insane. This year is gonna be exciting.

    But OEMs might want to offer something more than device upgrades that rely on a "some % quicker" moniker.
  • generalako - Monday, February 12, 2018 - link

    Andrei Frumusanu has completely lost his legitimacy as a writer on AnandTech. In this post here, he makes the ridiculous claim that the Snapdragon 845 should offer an "expected performance improvement of 39-52%", despite the fact that Qualcomm themselves very clearly state "up to a 25% improvement" at their launch and on their own site: https://www.anandtech.com/show/12114/qualcomm-anno...

    Now, 2 months later, he merely repeats what Qualcomm said all along, based on the results he got from the device: an overall 20-25% performance improvement.
  • StormyParis - Monday, February 12, 2018 - link

    In the article you read, Qualcomm lists performance gains at iso-process and iso-frequency (34% for Geekbench, 48% for Octane), to which you then add the gain from the frequency increase. That makes Andrei's figures cogent, and your comment idiotic.
  • generalako - Monday, February 12, 2018 - link

    No, Qualcomm actually didn't present those charts themselves. Qualcomm, even on their own site, stated during and after the launch that the SD845 would provide "up to a 25% performance improvement" over the SD835. That was with the stated clock speed of 2.8 GHz.
  • Reflex - Monday, February 12, 2018 - link

    Um, you just confirmed Stormy's point. Re-read Stormy's comment and then your reply...
  • tuxRoller - Tuesday, February 13, 2018 - link

    The Qualcomm slide said a 25-30% increase @ 2.8GHz, with the smaller cores increasing their performance by 15% @ 1.8GHz.
    The next slide looks like it's from ARM's A75 announcement (no mention of Qualcomm brand names, and the graph is the same one ARM uses), not the Qualcomm presentation.

    From the article:
    "The Kryo 385 gold/performance cluster runs at up to 2.8GHz, which is a 14% frequency increase over the 2.45GHz of the Snapdragon 835's CPU core. But we also have to remember that given that the new CPU cores are likely based on A75's we should be expecting IPC gains of up to 22-34% based on use-cases, bringing the overall expected performance improvement to 39-52%. Qualcomm promises a 25-30% increase which is at the low-end of ARM's projections."

    The author speculates about performance based on the ARM graphs & the frequency increase Qualcomm announced, but Qualcomm themselves didn't suggest such numbers.
    My guess as to why integer IPC didn't increase by much is the power issue Andrei had alluded to before. In other words, the frequency scaling provided enough performance that Qualcomm didn't have to employ the more expensive changes that would have been required for the IPC gains.
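    For what it's worth, the disputed 39-52% figure is just the 14% frequency uplift compounded with ARM's projected 22-34% A75 IPC range:

```python
freq_gain = 2.8 / 2.45           # Kryo 385 Gold vs. Snapdragon 835 clocks
ipc_low, ipc_high = 1.22, 1.34   # ARM's projected A75 IPC gain range

low = (freq_gain * ipc_low - 1) * 100    # compounded lower bound, in %
high = (freq_gain * ipc_high - 1) * 100  # compounded upper bound, in %
print(f"{low:.0f}% .. {high:.0f}%")
```

    This lands at roughly 39-53% (the article rounds the upper bound to 52%); Qualcomm's own 25-30% claim sits below even the low end of that compounded range.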
  • mfaisalkemal - Monday, February 12, 2018 - link

    @andrei_frumusanu and @ryan_smith
    Any date when we can read the iPhone 8 and iPhone X reviews?
    I'm just curious about long-term Manhattan 3.1 performance compared to the iPhone 7 and Snapdragon 835.

    Any chance to compare the GPUs of Android and iOS devices with a real-world game benchmark like Tainted Keep? NVIDIA used that game at the Tegra X1 launch.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    The A11 generally throttles 35% from its peak performance figures, while the 835 maintains full performance or 90% of it. I'll include the iPhones in the new full review.

    We don't have any way to benchmark iOS devices in real games.
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    > I'll include the iPhones in the new full review.

    And by that, I meant general next full device reviews, not specifically iPhone reviews.
  • mfaisalkemal - Monday, February 12, 2018 - link

    Sorry, I didn't explain in detail: Tainted Keep has an in-game offscreen 1080p benchmark with normal and extreme modes.
    nvida tegra x1 ultra score: http://www.legitreviews.com/wp-content/uploads/201...
    iphone 7+ extreme score: https://i.imgur.com/3v5Tgxt.jpg
    ipad pro(a9x) extreme score: https://i.imgur.com/9pKrQBE.jpg

    thanks andrei, i'm eagerly waiting for the review :)
  • Andrei Frumusanu - Monday, February 12, 2018 - link

    The game doesn't even allow me to use the extreme settings on Qualcomm devices and in the normal benchmark it's just vsync limited at 60 fps - so I don't think we'll do anything with it.
  • mfaisalkemal - Monday, February 12, 2018 - link

    Maybe because of a driver issue or a bug; what a pity!
    How about testing peak GPU GFLOPS with OpenCL in your next benchmark?
    I found a benchmark application named CLPeak on the Play Store.

    https://play.google.com/store/apps/details?id=kr.c...

    Oneplus 5 (Adreno 540)
    Single-precision compute (GFLOPS)
    float : 294.65
    float2 : 285.81
    float4 : 311.02
    float8 : 265.02
    float16 : 308.34

    half-precision compute (GFLOPS)
    half : 570.72
    half2 : 539.62
    half4 : 610.79
    half8 : 314.82
    half16 : 313.73

    source: https://forum.beyond3d.com/posts/2011570/

    With the ~50% ALU improvement from your test, I think the Adreno 630 won't be far from the Tegra X1 (512 GFLOPS) in a smartphone device!
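    As a sanity check on those CLPeak numbers, theoretical peak is just lanes × FLOPs-per-lane-per-cycle × clock. The Adreno 540 lane count and clock below are commonly cited unofficial figures, not Qualcomm-confirmed:

```python
def peak_gflops(fp32_lanes, flops_per_lane_per_cycle, clock_ghz):
    # Theoretical peak throughput; an FMA counts as 2 FLOPs per cycle.
    return fp32_lanes * flops_per_lane_per_cycle * clock_ghz

# Commonly cited (unofficial) Adreno 540 figures: 256 FP32 lanes @ ~0.71 GHz
print(peak_gflops(256, 2, 0.71))  # ~363 GFLOPS theoretical FP32 peak
```

    CLPeak's measured ~310 GFLOPS float4 result would then be roughly 85% of the theoretical peak, a plausible attainable fraction for a real kernel.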
  • darkd - Monday, February 12, 2018 - link

    Note that Ravn is a studio NVIDIA has contracted work like this out to multiple times. As with most NVIDIA contractors, they tend to include features/optimizations that are NVIDIA-only, and tend to be almost pointedly unoptimized for tiling GPUs. You can see something to this effect in the Ravn-supplied benchmark in AnTuTu v7.
  • id4andrei - Monday, February 12, 2018 - link

    Andrei, what can you and the staff at AnandTech do to prevent stuff like Apple's permanent throttling from flying under the radar? I mean, a new device throttles temporarily, but the iPhone 7 was found (by Geekbench) to have already been "permanently" throttled after one year.

    Also, could Apple dropping Dialog - maker of power management chips - have something to do with this?
  • Dr. Swag - Monday, February 12, 2018 - link

    Why have none of the SoC makers besides Qualcomm and Apple used anything other than Mali GPUs? It's obvious that Mali is lacking in perf/watt, so why not use something from, say, Imagination, which seems to have much better perf/watt?
  • frenchy_2001 - Monday, February 12, 2018 - link

    Because integrating a different solution (like PowerVR), or even designing your own (like Adreno), requires more effort (and hence money).
    ARM offers a full turn-key solution.
    Like their CPU cores, their GPU cores are not the best, but for most of the industry they are good enough.
    This lack of appreciation for the graphics component is partly why NVIDIA left the phone chipset market. Few care enough to pay (in effort, silicon area or licensing) for good graphics.
    (Apple and Qualcomm are the obvious exceptions, with their own designs.)
  • warrenk81 - Monday, February 12, 2018 - link

    Any update on reviews of the 2017 flagship devices? iPhones, Pixels, iPads? Anything? I realize it's not a trivial undertaking to review them, but something other than announcements and press releases would be nice.
  • Wardrive86 - Monday, February 12, 2018 - link

    Great article! SD845 performance seems to be right where it was expected to be, and I am glad for that. It looks like even the midrange QC SoCs will be using a mix of 385 Gold and Silver cores at wildly varying clock speeds, various levels of "system cache" and Adreno 6xx series GPUs. I think this year is going to be great from the high end to the low end. Thanks for confirming that the "system cache" acts like an L4. I hate to see that DRAM latency has gone up, but from a memory perspective these new devices are completely different from what we're used to.
  • Yaldabaoth - Monday, February 12, 2018 - link

    So... maybe a Snapdragon 845 netbook-style Windows S device won't suck as badly (or at least won't be as graphically laggy) when paired with suitable other components? Maybe?
  • lucam - Monday, February 12, 2018 - link

    I take this article to mean you have officially tested the iPhone X? :) At long last...
  • Raqia - Monday, February 12, 2018 - link

    It's interesting that they chose the smaller 256KB-per-core L2 cache configuration instead of going for 512KB per core. Perhaps it's die space, or something to do with hitting a sweet spot in simulations against their own custom memory bus and L4 cache.
  • yhselp - Tuesday, February 13, 2018 - link

    By my utterly amateurish calculations, the S845 is the first Android SoC nearing the single-threaded performance of Apple's A9. Can't wait to see what the Exynos 9810 has to offer.
  • Krispytech - Tuesday, February 13, 2018 - link

    Geekbench scores are out for the Exynos S9: ~3700 single-core, much faster than the 845.
  • yhselp - Wednesday, February 14, 2018 - link

    Wow. That's... Fingers crossed!
  • lilmoe - Tuesday, February 13, 2018 - link

    Would it be possible to completely disable the A75 cores and test the A55s alone to see the performance improvements compared to the A53s? That, in my opinion, is a more meaningful comparison, since your phone runs on the small cluster most of the time.
  • Andrei Frumusanu - Tuesday, February 13, 2018 - link

    We've done that in the past and I'll do it once I get my hands on devices.
  • yhselp - Wednesday, February 14, 2018 - link

    Is it possible to update us on the status of a possible iPhone X/8 review - is it in the works, or has it been decided you won't be publishing one? Any interesting, technical insight into why it's taking longer than usual would be much appreciated, and fascinating to read. To be clear: I'm just asking politely, and not complaining.
  • Andrei Frumusanu - Wednesday, February 14, 2018 - link

    Hello, there is no technical reason for the delay; our former mobile editor Matt Humrick has left AT last summer. Ryan has not had the time to work on them alongside his editorial responsibilities. I re-joined AT at the very end of December and there was a lot of mobile back-log work to be done. I've only recently got to integrate data on the iPhones such as in this article.

    At this point in time it's highly unlikely the X/8 will get dedicated reviews, but I will be integrating data from the devices into other reviews. I will try to post an update on battery life and the general situation; camera comparisons will be included in the next device reviews.
  • Andrei Frumusanu - Wednesday, February 14, 2018 - link

    * at the very end of November I meant to write.
  • yhselp - Wednesday, February 14, 2018 - link

    Thank you for taking the time to respond, and shed light on the matter. Unpredictable things happen. Looking forward to seeing the data in other reviews, sustained performance in particular. I wonder whether the glass back is a detriment, thermally, compared to aluminum on older iPhones.
  • serendip - Wednesday, February 14, 2018 - link

    Being a cheapskate techie, I'm more interested to see what future midrange 6xx parts will look like. For me, flagship 8xx SoCs are overkill and I'm not willing to pay to play in that range. A quad-A55 and dual-A75 design could be a big midrange hit like the old Snapdragon 650.
  • Wardrive86 - Thursday, February 15, 2018 - link

    Snapdragon 640: 2 x 2.15 GHz Kryo 360 Gold, 6 x 1.55 GHz Kryo 360 Silver, 1 MB L3, 1 MB system cache, Adreno 610, dual-channel LPDDR4X

    Snapdragon 670: 4 x 2.0 GHz Kryo 360 Gold, 4 x 1.6 GHz Kryo 360 Silver, 1 MB L3, 2 MB system cache, Adreno 620, triple-channel LPDDR4X.
  • Wardrive86 - Thursday, February 15, 2018 - link

    Need an edit button... 128 KB L2 per core for both clusters in the 670; 128 KB L2 per big core, 64 KB per little core in the 640.
  • Wardrive86 - Thursday, February 15, 2018 - link

    Correction: the 670 should have Kryo 385 Silver cores instead of 360 Silver.
    Kryo 385 Gold (A75 / 256 KB L2)
    Kryo 360 Gold (A75 / 128 KB L2)
    Kryo 385 Silver (A55 / 128 KB L2)
    Kryo 360 Silver (A55 / 64 KB L2)

    Snapdragon 460: 4 x 1.8 GHz Kryo 360 Silver, 4 x 1.4 GHz Kryo 360 Silver, no L3, no system cache, Adreno 605, dual-channel LPDDR4X. Sorry for the triple post.
  • serendip - Thursday, February 15, 2018 - link

    So the 640 has 6x A55 and 2x A75 while the 670 has 4x A55 and 4x A75. All this Intel-aping gold and silver nonsense gives me a headache.

    I wonder why the 640 needs so many small and light cores. The 670 looks like an update to the rarely used 652 and its higher cost could also lead to OEMs favoring the 640.
  • Wardrive86 - Thursday, February 15, 2018 - link

    Probably they don't want to lose out on multithreaded performance. The 625, 626 and 630 all use 8 A53s and have remarkable performance. Android 6 and 7 will regularly fire up all 8 cores in typical daily usage.
  • austonia - Thursday, February 15, 2018 - link

    The new CPU looks great, but so did the 835. I'd love to buy it. However, it's gotta come in a phone with a removable battery or I'll keep using a Note 4.
  • phoenix_rizzen - Monday, February 19, 2018 - link

    Missing word in this sentence from the article:
    "For the integer workload results we see a healthy performance across the various tests."

    Guessing the missing word is "increase" after performance.
  • SoC lover - Friday, March 2, 2018 - link

    I like the Adreno GPU; even when I overclock it, it never lags or heats up like a Mali GPU.
  • sonu12345 - Monday, August 6, 2018 - link

    Do you have any reference for the data given above?
  • Gdhsczyanxv - Thursday, November 15, 2018 - link

    The Snapdragon 845 and A11 are without a doubt very impressive. Apple's chips always tend to be faster and achieve higher benchmark scores, but you have to consider that iOS is much lighter than Android, and Apple develops both the software and the hardware, hence the better optimization. If you put this A11 in an Android smartphone it might run the system poorly given how heavy Android is, just as the Snapdragon 845 might run iOS poorly since it wasn't developed for that system.
  • hhashemi - Saturday, January 12, 2019 - link

    When comparing the Snapdragon 835 to the iPhone X in general FMA throughput, we've found the iPhone to be far superior (about 3x, to be specific).
    You can experiment for yourself. The app below uses the same compute shader on the iPhone (Metal) and Samsung (OpenCL). It seems that the iPhone's compute power can be accessed via general compute shaders, but the Snapdragon needs special Hexagon code to tap into its full compute power. That's a major issue for Android developers, as it means even more fragmentation in the Android market.

    iTunes TestFlight:
    https://testflight.apple.com/join/NK6HmGOW
    Android Store (Unreleased):
    https://play.google.com/store/apps/details?id=com....
