45 Comments
prophet001 - Wednesday, March 9, 2022 - link
Miss me with that metaverse.
Hifihedgehog - Friday, March 11, 2022 - link
All I read here is constant bellowing of buzzword advertising fluff. I honestly found Ian's questions significantly more enlightening and informative than this conman's half-hearted responses. I will be so happy when this guy leaves Intel. Raja Koduri is the living embodiment of a corporate parasite. Once a person reaches a high-level executive position, it's a life pass to be a leech regardless of results. He will do terribly here just like he did at AMD, mark my words. I have little confidence in Arc succeeding, and now that the GPU fever is cooling off right as it is set to release, it will depend entirely on merit rather than blind, insatiable demand.
prophet001 - Friday, March 11, 2022 - link
Well you're not wrong lol.
mode_13h - Monday, March 21, 2022 - link
I thought the interview was worthwhile, in terms of what Ian was able to pin down.
> Raja Koduri is the living embodiment of a corporate parasite.
Have you ever heard of the Peter Principle? I'm not saying that's what happened, but it could be. Or, another phenomenon is that sometimes the easiest way to get rid of someone incompetent at their job is to promote them.
> I have little confidence in Arc succeeding
Intel has enough hard-working, competent engineers that I think it'll do alright. I think it's got good potential for GPU compute, so that's my main interest.
ElGus - Wednesday, March 9, 2022 - link
Sorry for my bad English. But reading all these interviews feels like watching an infomercial.
I'm always waiting for more detail or precision.
Like the magazines you find at the doctor's office while you wait.
Not sure how to say it in other words.
whatthe123 - Wednesday, March 9, 2022 - link
Other than leaking company design secrets, he's pretty much laid out what they believe they can do. Their goal is to have a complete interconnected platform that cuts down on power delivery losses. Add that to general architecture and node improvements to reach zettaFLOP performance at the supercomputer level.
Since they've qualified that it is zetta FP64 performance, I honestly don't believe Intel will reach it by 2027. If it were some other metric like mixed FP, then yes, but pure FP64 up to zetta in 5 years? Ehhh, good luck with that.
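As a back-of-the-envelope check on that skepticism, here's a rough sketch. The 2022 baseline of ~1.1 exaFLOPS FP64 is my assumption (roughly Frontier-class):

```python
# Hypothetical sanity check: what annual FP64 growth would a 2027 zettaFLOP imply?
baseline_flops = 1.1e18  # assumed ~Frontier-class FP64 baseline, 2022
target_flops = 1.0e21    # one zettaFLOP/s
years = 2027 - 2022

growth_per_year = (target_flops / baseline_flops) ** (1 / years)
print(f"Required growth: {growth_per_year:.2f}x per year")  # ~3.91x

# Even a historically aggressive ~2x/year pace falls far short:
print(f"2x/year for {years} years: {baseline_flops * 2**years:.2e} FLOP/s")  # ~3.52e+19
```

Roughly 3.9x every single year for five years straight, versus the ~2x/year the industry managed in its best eras.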
JayNor - Wednesday, March 9, 2022 - link
The part about the coming oneAPI software advances is more interesting to me. Maybe they're learning something from the supercomputer users. It's good to see them listening.
mode_13h - Monday, March 21, 2022 - link
I just hope they don't follow AMD's example and simply make a 1:1 CUDA clone.
Calin - Thursday, March 10, 2022 - link
There's also a lot to be gained from lowering the energy to "move bits around" i.e. interconnects. A larger supercomputer that is not a "billion independent nodes" or "Chinese lottery" style might consume as much power in moving data around as in processing it.
JayNor - Thursday, March 10, 2022 - link
Yeah, moving data from the chip package to the front panel requires big heat sinks and retimers if done over copper. This was presented when Intel did their 12.8 Tb/s switch with co-packaged optics demo in 2020. I believe we'll see the 25.6 Tb/s switch demo this year, based on some slides from Robert Blume.
The UCIe announcements mentioned its use for hooking up to optical transceivers. Raja's ServeTheHome interview mentions "lightbender" for the Falcon Shores chip.
SallyInce - Tuesday, March 15, 2022 - link
I agree with you.
Oxford Guy - Wednesday, March 9, 2022 - link
‘The amount of compute needed is as I said back then, literally PetaFLOPs of compute, Petabytes of storage, at less than 10 milliseconds away from every human on the planet. That is the vision mission that we are on, that Intel is still on.’
Someone I know has the AT&T ‘rural plan’. This person lives a few minutes from a fairly major university but cannot get wired broadband service. So, she has a cellular modem and AT&T refuses to provide an update that supports even WPA-2 with AES, let alone WPA-3. To go with the insecurity of the connection, her upload speed is terrible (as has been the case for most ordinary Internet users in America, at least, since the dawn of time).
Her service is capped at a massive 250 GB per month, unlike the satellite options that were twice the price for 30 GB and didn’t get anywhere near 250. Her signal strength also maxes out at two bars out of five (in the most optimal location in her home).
This is the sort of Internet many have as their option. Good luck with stuffing neo-1984 metaverse experiences through it, unless it’s going to be an antique Second Life style incarnation.
nandnandnand - Wednesday, March 9, 2022 - link
Prime candidate for Starlink.
Oxford Guy - Thursday, March 10, 2022 - link
And the latency on that is... ?
nandnandnand - Friday, March 11, 2022 - link
Around 20 to 40 ms, a little higher than that for beta testers, but could dip below 20 ms eventually.
Compare to 500 to 1000 ms for traditional geosynchronous satellite service.
Short of Intel's 10 ms target, but I don't know what that's for. Responsive multi-user VR?
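For what it's worth, the speed-of-light floor under those numbers is easy to sketch (idealized straight-up-and-down links; real constellations add routing hops and processing overhead):

```python
C_KM_S = 299_792  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km: float) -> float:
    # A round trip is user -> satellite -> ground station and back: four legs.
    return 4 * altitude_km / C_KM_S * 1000

print(f"GEO (~35,786 km): {min_rtt_ms(35_786):.0f} ms floor")  # ~477 ms
print(f"LEO (~550 km): {min_rtt_ms(550):.1f} ms floor")        # ~7.3 ms
```

Which is why geosynchronous service can never get near a 10 ms target, while LEO at least has physics on its side.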
Oxford Guy - Wednesday, March 16, 2022 - link
So, 40 ms — with better results maybe sometimes.
mode_13h - Monday, March 21, 2022 - link
> Her signal strength has been maxing at two bars out of five
I take your broader point, but I wonder if your acquaintance could do something like one of those "Pringles can" antenna hacks that were popular in the mid-2000s.
mode_13h - Monday, March 21, 2022 - link
I don't know what this "cellular modem" looks like, but I'm assuming it either has or can accommodate an external antenna.
James5mith - Wednesday, March 9, 2022 - link
How long ago was this interview? Didn't Ian quit already?
erotomania - Wednesday, March 9, 2022 - link
"This interview took place before Intel's Investor Meeting"WaltC - Wednesday, March 9, 2022 - link
The one constant in nearly all of Intel's gobbledygook PR efforts these days is the future--years and decades away. Intel never tires of talking about the future and making grandiose claims it cannot prove but also claims it doesn't have to prove. The company talks far more about pie-in-the-sky than it does about products shipping to customers today. Metaverses and Zettascales seem to mate fantastically with Intel's proclivity to discuss what is coming in the future instead of what they are doing today--right now--this minute. I can only assume that preening about Zettascales and Metaverses is far more interesting for the company...;)
nandnandnand - Wednesday, March 9, 2022 - link
Zettascale is pretty straightforward. You just need around a 1000x performance-per-watt improvement. The secret to getting there is obviously some kind of 3D chip. The 2027 timeframe (quicker than the TOP500 projection) can be explained by a sudden spike in performance from moving to 3D chips. We could see a generation or two of desktop chips that are 1000% faster instead of the typical 10-40%.
Metaverse? That can could be kicked down the road forever.
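For anyone wondering where a figure like 1000x comes from, here's the arithmetic, under the assumption that a zettascale machine has to fit roughly the same ~20 MW facility budget as today's exascale systems:

```python
import math

exa_flops, power_watts = 1e18, 20e6  # assumed exascale: ~1 EF FP64 in ~20 MW
zetta_flops = 1e21

exa_eff = exa_flops / power_watts       # ~50 GFLOPS/W today
needed_eff = zetta_flops / power_watts  # efficiency needed at the same 20 MW

improvement = needed_eff / exa_eff
print(f"Required perf/W improvement: {improvement:.0f}x")      # 1000x
print(f"That's about {math.log2(improvement):.1f} doublings")  # ~10.0
```

Ten doublings of perf/W is what the 3D-chip bet has to deliver.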
Calin - Thursday, March 10, 2022 - link
"You just need around a 1000x performance-per-watt improvement."You also need a lot more performance in connectivity, unified memory, shared data between multiple computation units, ...
mode_13h - Monday, March 21, 2022 - link
> shared data between multiple computation units
Somewhat, but that's also going the *opposite* direction of reducing data movement. For processing big data sets, there's no question accelerators will need large pools of shared memory. However, the way to reduce data movement is going to be bundling compute and some of the memory closer together.
mode_13h - Monday, March 21, 2022 - link
> Zettascale is pretty straightforward. You just need around a 1000x
> performance-per-watt improvement.
LOL.
> The secret to getting there is obviously some kind of 3D chip.
3D isn't just some magic wand you wave and "poof" you get more transistors for free. If anything 3D is going to add overhead beyond the base per-transistor cost you have in 2D chips.
Plus, I don't see 3D as a big perf/W improvement. The main benefit is reducing data movement, but the main drawback is cooling. So, you probably have to clock it lower, and that means probably getting less performance per transistor. So, I definitely don't see it as a "free" performance boost.
I mean, just look at how AMD is charging like an extra $100 to stack a single layer of L3 cache on top of its compute die, in the new 5800X3D!
mode_13h - Monday, March 21, 2022 - link
> The company talks far more about pie-in-the-sky than it does about products
> shipping on the ground to customers today.
Really? In the past few years, they've been very open about their roadmaps and their architecture day presentations have really gone into lots of detail.
I get why they want to paint a longer-term vision, as it's clearly important for keeping investors on board and customers engaged. However, I don't get the sense that it's really out-of-proportion. It is headline-grabbing, so that might be why you tend to notice it more.
Wereweeb - Wednesday, March 9, 2022 - link
A "Metaverse" with closed-source software and closed-design hardware will just be a dystopia. We already have proof that Facebook/Meta deliberately promoted far-right extremist content because it polarized people and kept them interacting in their social network.flyingpants265 - Wednesday, March 9, 2022 - link
No! Don't let them promote content!
Oxford Guy - Thursday, March 10, 2022 - link
Considering how humanity is in the process of using ‘business as usual’ to destroy the only suitable planet for human life — while giving paltry lip service to sustainability, what isn’t ‘extremist content’?
People generally have an extremely warped viewpoint when it comes to what is and isn’t rational. Ecological cannibalism isn’t it and yet that is the basis of our elaborate business behaviours — the very respectable and impressive utterly unsustainable landscapes we create.
Vocabulary like ‘sustainability’ itself is part of the fraud, since nothing involving it in any kind of mainstream discourse has anything truly to do with it. It’s kabuki and puts a comfortable sheen on the spreading plastic ocean.
mode_13h - Monday, March 21, 2022 - link
> Vocabulary like ‘sustainability’ itself is part of the fraud
As usual, you're going overboard. Not everyone talking about sustainability is insincere or unrealistic. Of course, there is "green-washing" by corporations and some politicians, but that's nothing new.
If we followed your logic, nobody could even talk about environmentalism without fear of being called a fraud. You're too blinded by your own cynicism to see any way forward, which leaves you on an ideological island. Please don't lure others to join you there.
mode_13h - Monday, March 21, 2022 - link
A big concern about the metaverse is the power it has to indoctrinate new followers to a movement. Research studies have shown that a VR experience involving deforestation can make people more eco-conscious, but what about experiences made by extremist groups and cults?
We already have a sort-of example in YouTube, where highly produced videos are made to promote a cause. These have even proven a powerful recruitment tool for terrorists. How much more potent will VR be, in their hands?
gcolehour - Wednesday, March 9, 2022 - link
I'm pretty sure the 10 EFLOP chart will be correct, but a few little snippets make me think they will achieve a zettaflop with blockchain. Basically bitcoin, but the work is useful, and consensus verifies the validity of the calculations you do. Whatever "pool" you join will take bids from places needing compute, and you will get rewarded with some new coin that you can exchange for cash at Intel's reserve bank. Trading compute will be the new commodity, with some factors of energy trading (cheaper compute during solar panel generation), some of currency and stock trading (which pool you choose, what coin you get), and traditional exchange of "labour" for currency (the computing).
If this catches on, you'd effectively have fiat currency pegged to compute. High-tech jobs and energy are the main enablers for a compute economy. It sounds better than an oil economy.
mode_13h - Monday, March 21, 2022 - link
> What ever "pool" you join will take bids from places needing compute,I don't believe this. Compute needs data. Sometimes a lot of it, which can make such schemes impractical. Remember the whole part about reducing data movement? You're proposing to amplify it billions of times!
And people are protective of their data and potentially also their algorithms.
We've seen this model for a couple of decades, with projects like SETI@home and protein folding. It's been successful in niches, but there are limits to what sorts of problems make sense to tackle that way.
krumme - Thursday, March 10, 2022 - link
OK. When is Vega's NGG going to work?
mode_13h - Monday, March 21, 2022 - link
I think they did *eventually* get it working?
https://www.phoronix.com/scan.php?page=news_item&a...
It mentions RadeonSI, the GCN driver, so that should mean Vega.
Carmen00 - Thursday, March 10, 2022 - link
Fascinating article. Intel does have the clout to make it happen ... if things like the physics fall into line, of course. The company is doing far more interesting things with Gelsinger at the helm than it's done under the last few lame ducks!
(also: who is the comedic genius responsible for "...but is about as well defined as a PHP variable"? Now THAT is a truly sick burn! 😂)
mode_13h - Monday, March 21, 2022 - link
> by zettascale, do you mean one machine, One zettaFLOP, double-precision, 64-bit compute?
Massive thanks for this! It's really helpful to pin these guys down, when they start making such lofty claims!
> it’s annoying because I asked for this interview before you bumped into him at
> Supercomputing and had this chat!
Exactly when *was* the chat? You guys don't have the best track record of publishing your interviews promptly. If Raja wanted to get some details out to the public, it's hard to fault him for not giving you an exclusive scoop.
> Silicon Photonics or ‘LightBringer’
Hmm... it's a little too on-the-nose. There's got to be some better code name than that.
> I immediately heard it and thought we're going to start getting 800 to 1000 watt GPUs!
Same!
Anyway, thanks for the interview! Some very interesting clues, in there. I guess we'll have to wait and see.
8lec - Tuesday, March 22, 2022 - link
Goodbye Ian. It was nice to see you on AT all these years.
McBoj - Wednesday, April 6, 2022 - link
I'm a little late to this one, but has Anandtech provided a justification for why we get so many Intel fluff pieces? Hardly seems balanced when we don't see anything like this for AMD/Nvidia or other big tech.
Is Intel really that much more interesting? Is 'balanced' not a concern?
kath1mack - Thursday, April 14, 2022 - link
Interesting