37 Comments
Khenglish - Monday, April 9, 2018 - link
I can't shake the feeling that using a 48V bus is a mistake. Nearly all power FETs built today have a 30V Vds max. Using 48V means all new power FETs need to be designed.

24V would have been more reasonable: major efficiency improvements over 12V while not requiring new hardware to be designed.
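As a rough sketch of the trade-off (round numbers, not datasheet values): at a fixed connector or trace current limit, deliverable power scales linearly with bus voltage, while the required FET Vds rating climbs with it:

```python
# Back-of-envelope numbers for the bus-voltage debate.
# All figures are illustrative assumptions, not datasheet values.

def deliverable_power(bus_voltage_v, current_limit_a):
    """Power a given trace/connector can deliver at a fixed current limit."""
    return bus_voltage_v * current_limit_a

# PCIe slot power is effectively current-limited: 75W at 12V
# works out to roughly 6.25A of allowed current.
current_limit = 75 / 12

for bus_v in (12, 24, 48):
    power = deliverable_power(bus_v, current_limit)
    print(f"{bus_v:2d}V bus: {power:3.0f}W through the same copper")
# 12V -> 75W, 24V -> 150W, 48V -> 300W: the same copper carries 4x the
# power at 48V, but the FETs must then be rated well above 30V Vds.
```

The 75W-to-300W PCIe example later in the thread uses this same arithmetic.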
LarsAlereon - Monday, April 9, 2018 - link
The thing is there's already a 48V DC supply available in the data center, so it's worth designing new hardware to avoid a conversion stage.

The_Assimilator - Monday, April 9, 2018 - link
It is way, way, WAY past time for the ATX standard to move to 24V.

edzieba - Tuesday, April 10, 2018 - link
Moving to 48V would be even better: it would effectively mean quadrupling any existing trace design's power-carrying capacity. As an example, the 75W PCIe bus power would jump to 300W without needing new hardware designs, allowing pretty much every GPU on the market to switch to bus power only and eliminating a whole pile of internal wiring (reducing both material and assembly cost).

Samus - Tuesday, April 10, 2018 - link
Isn't PowerPC (Power9) already 48V?

CPU Power "Stamps" have been an IBM trademark since the PowerPC G3s, where the FETs, voltage conversion (and external cache - the norm at the time...) were on the same PCB. This greatly simplified motherboard architecture and allowed for some pretty ridiculous life cycles.
The PowerMacs are a great example of this. Boards that shipped with 300MHz processors could be upgraded a DECADE later to the latest G4 architecture running dual 1.8GHz CPUs - using THE SAME motherboard. It was no surprise these systems held their value for 10-12 years, until the G5 architecture was discontinued and no further upgrade path was possible.
flgt - Monday, April 9, 2018 - link
That doesn’t sound right. There are plenty of high voltage FETs available. There are trades to be made in other FET parameters when you go high voltage though that can degrade efficiency in that stage. There is hope that GaN can overcome a lot of those drawbacks. They’re going to have to get the voltages up in these DC systems so the resistive losses don’t get out of control.

vgray35@hotmail.com - Tuesday, April 10, 2018 - link
They are beating an ancient dead horse with this power solution utilizing Buck converter topology, with PEAK efficiencies of 91% - which means efficiency as low as 82-85% at lower power levels. With 600W to memory and CPU, that is 90W of heat.

The new approach using GaN at 2MHz switching comes with a hope and a prayer that elevated frequencies will suffice. A 48V to 1V reduction means a 2% duty cycle and probably 12 to 16 Buck converter phases. This is a ludicrous solution, because a 2% duty cycle at 2MHz means ON periods of 10ns - oscillator jitter at that level is problematic. But worse than this, ferrite cores at 2MHz operate at 5 mT flux density at best, which means higher frequency will not result in reduced ferrite core size.

It is time to stop beating this dead-horse topology and switch to modern topologies using hybrid PWM-resonant switching, as the Cuk patents from 8 to 10 years ago demonstrated with >98.5 to 99% efficiency. To wit: reduce heat by over 90% and use just one or two phases. A big advantage of fractional resonance and resonance scaling is that the ferrite core is eliminated entirely, even at 50kHz switching, which reduces the power supply footprint by 85%. Chip manufacturers like this dead horse, as they sell lots more chips. It is frustrating to watch power electronics engineers engage in this ludicrous power solution because they believe the marketing hype of the power chip manufacturers. There is a disaster in the making here.

vgray35@hotmail.com - Tuesday, April 10, 2018 - link
For those interested in the refinements of power electronics and the topology issues mentioned above, here is a link to the commentary at the recent Rap Session 1 at APEC 2018. I am just not impressed by Intel's new servers, and Intel's engineers should know better. But apparently they do not. Certainly the acumen of the organizers of the recent APEC 2018 conference leaves a lot to be desired, where profit rules over common sense.

https://www.linkedin.com/pulse/biggest-impact-devi...
iter - Tuesday, April 10, 2018 - link
When it comes to maximum profits, Intel bets on other horses rather than engineering excellence.

vgray35@hotmail.com - Tuesday, April 10, 2018 - link
The real problem is consumers pay for a huge number of ferrite cores, MOSFET power stages and controllers, when a much smaller footprint at a small fraction of the cost would increase manufacturers' profitability as well as eliminating substantial heat, for a significant reduction in motherboard cost. Lackadaisical engineering puts the cost on the consumer big time. Imagine 12-phase Buck VRMs with 12 ferrite cores reduced to zero, where inductors with resonance scaling shrink to 10nH as 5mm copper traces on the motherboard. That is a very big deal in cost and effort, with ceramic capacitors eliminating the larger capacitors. Intel does not bet per se; they are just victims of really bad practices, and Intel is not alone in this.

This issue is compounded with the ATX PSU, GPU VRMs, and motherboard VRMs connected in series. The ATX PSU is 80% efficient in supplying power, then various VRMs convert that output power with another 80% conversion efficiency on top of that, so in the end only about 64% of the input power reaches the load due to cascaded DC-DC conversions. That near-40% waste heat could be reduced to 5%, the ATX power supply could be 20% of its current volume, and individual VRM footprints <10% of their current sizes (without ferrite chokes). Looked at in those terms, we are paying a lot of money for no gain. Power electronics manufacturers are very happy their hype is working in their favor. We just keep losing money in our purchases.

Using single-cycle settling times with modern topologies also eliminates the complexity of the power controllers, which could be integrated into a single chip with the power stage. A Ryzen-like takeover is needed in the realm of motherboard power supplies. I hope AMD is listening. AMD! Please tell me you are listening. As a retired member of the electronics industry, I can say I am most disappointed in where this industry currently stands. OK! I will rest from my rant now, and hope for change.
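The cascaded-loss arithmetic above can be checked in a few lines (the 80%-per-stage figure is the commenter's working assumption, not a measured value):

```python
# Series-connected DC-DC stages multiply their efficiencies.
# Stage figures here are illustrative assumptions from the discussion above.

def cascaded_efficiency(*stage_efficiencies):
    """Overall efficiency of DC-DC conversion stages connected in series."""
    eta = 1.0
    for stage in stage_efficiencies:
        eta *= stage
    return eta

# ATX PSU (~80%) feeding a VRM (~80%)
eta = cascaded_efficiency(0.80, 0.80)
print(f"delivered: {eta:.0%}, lost as heat: {1 - eta:.0%}")  # 64% / 36%

# The hypothetical high-efficiency chain argued for above (~97.5% per stage)
eta_hi = cascaded_efficiency(0.975, 0.975)
print(f"delivered: {eta_hi:.1%}, lost as heat: {1 - eta_hi:.1%}")  # ~95% / ~5%
```

Two 80% stages deliver 0.8 × 0.8 = 64% of input power; two ~97.5% stages deliver ~95%, which is where the "near-40% waste heat reduced to 5%" comparison comes from.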
vgray35@hotmail.com - Tuesday, April 10, 2018 - link
Did I mention that a 50kHz DC-DC converter using hybrid PWM-resonant switching and resonance scaling allows a 48V to 1V conversion ratio at 50% duty cycle, and not the 2% pulse width of a Buck converter? The war drums are rolling.

iter - Tuesday, April 10, 2018 - link
Waste is not an issue if it can be translated into profit. Even when it wastes critically important, finite rare Earth elements.
vgray35@hotmail.com - Wednesday, April 11, 2018 - link
Waste is not the issue - heat is - which translates into a) reduced life of components due to stress, b) increased size and cost of several PSU and VRM elements, c) trouble providing cooling at a reasonable cost, and of course, d) much larger cost of the board overall. Cost, cost, cost, and reduced component life. I would have thought that was obvious in my posting. There is of course noise due to several cooling fans, which constantly annoys you. If I can get both reduced noise and reduced cost, why would I not go for it? Oh yes, and reduced cost also means increased profitability due to better margins, so your point is moot - or at least I fail to see how waste in this case could possibly translate to profitability; it cannot. It has nothing to do with rare Earth elements, just common sense and good business.

flgt - Tuesday, April 10, 2018 - link
I would like to see the total cascaded efficiency in a server farm, from mains to a 1V processor rail, before forming any opinions. Dealing with any individual stage's efficiency can be misleading.

In regard to GaN, I was just saying that IN THEORY it should have better performance at any given switching frequency than silicon. I agree that running DC-DC converters at 2MHz to get size down has diminishing returns that even GaN can't overcome.
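For reference, the 2% duty-cycle and 10ns on-time figures quoted in this thread follow from the ideal buck relationship D = Vout/Vin; a quick sketch, assuming an ideal converter and ignoring losses:

```python
# Ideal buck converter arithmetic for the 48V-to-1V case discussed above.
# Real converters run a slightly higher duty cycle to cover losses.

v_in, v_out = 48.0, 1.0
f_sw = 2e6  # 2 MHz switching frequency

duty = v_out / v_in      # ideal buck duty cycle: Vout/Vin
period = 1 / f_sw        # 500 ns switching period at 2 MHz
t_on = duty * period     # high-side switch on-time per cycle

print(f"duty cycle: {duty:.1%}")           # ~2.1%
print(f"on-time: {t_on * 1e9:.1f} ns")     # ~10.4 ns
```

An on-time of roughly 10ns per 500ns period is why jitter and gate-drive timing become the limiting factors long before the magnetics shrink.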
vgray35@hotmail.com - Wednesday, April 11, 2018 - link
Efficiency varies with load; both lower and higher loads fall below peak efficiency. A good average for Buck converters is 82-85% at best. The ATX power supply yields 12V, then VRMs use that to provide 1V, and the two cascaded easily result in 40% losses overall.

Actually your point about GaN is quite relevant and accurate, as it is superior to silicon MOSFETs at any frequency. With GaN all the devices could be integrated on a single chip with the driver (already being done by TI for a half bridge), and the current flow area then enlarged to reduce Rdson. Integrate the controller onto the chip too (not done yet), and use a topology that does not require an air-gapped ferrite inductor, and the VRM footprint becomes a tiny fraction of a Buck converter's. Not to mention a tiny fraction of the cost. The inductor is a 5mm length of wire giving 10nH of air-cored inductance at 50kHz. The efficiency is in the range of 99% to 99.5%, or as low as 98.5% at load currents approaching 200 to 250 Amps.

Modern CPUs are already exceeding 150A, with server chips up in the 230A range. At 50kHz, switching losses for GaN are moot, and with conduction losses made very low by enlarging the transistor's current flow region, unheard-of efficiencies are possible. The power supply on a single chip is possible today with GaN and the switching methods of the newer topologies - compare that with motherboards today carrying 16 ferrite chokes and as many as 30 chips. A 300 Amp VRM would be very tiny with just a single chip. I would bet VRM cost exceeds 25% of total motherboard cost at the high end, and for servers >30%.

iwod - Monday, April 9, 2018 - link
An 8 Channel Design and still only 1TB Memory support?

Ian Cutress - Monday, April 9, 2018 - link
Intel split its SKL-SP family into SKUs that supported 768GB and M versions that supported 1536GB, for about 30% more. The argument from Intel was that only 5% of the market needed that amount of memory per socket. So following that logic, they might have SKUs supporting 1TB and 2TB, unless they double up all around and go to 2TB/4TB depending on the LRDIMMs available.

The_Assimilator - Monday, April 9, 2018 - link
And, of course, Intel charges at least 100% more for the -M chips... that are probably ordinary chips without one bridge cut. Meanwhile AMD's Epyc lineup supports up to 2TB RAM per CPU without any artificial model segmentation.

jerrytsao - Monday, April 9, 2018 - link
And E7 v4 used to support 3DPC at a max of 3TB per socket (24TB for 8 sockets) via the Jordan Creek scalable memory buffer. The reason Intel ditched 3DPC is that it would operate only at DDR4-1333 for E7 and DDR4-1866 for E5; Skylake-SP can do 2666 in all configs (whether 1DPC or 2DPC) no matter how many ranks each DIMM has. EPYC has a much weaker IMC which can only operate at 2666 with 1R RDIMMs and in 1DPC only; for RDIMMs all other configs would drop to 2400 or 2133. LRDIMMs would fare better but are much more expensive.

duploxxx - Tuesday, April 10, 2018 - link
But EPYC already has 8 DIMMs/CPU, meaning a dual socket already has 512GB of RAM without needing the second DIMM row. And all SKUs drop to 2400 with dual-row DIMMs - a marginal performance difference for such a memory layout.

close - Tuesday, April 10, 2018 - link
Innovation is so hard and expensive... when the competition isn't on your heels.

But here's some not-so-veiled "let's defend Intel because that's why we're here" argument: "The argument from Intel was that only 5% of the market needed that amount of memory per socket". What you're not saying is that it was followed up by "and since those 5% have few alternatives we're going to squeeze them dry".
The market doesn't have much choice in the matter so Intel can milk it. It's not like you'll just quickly switch to AMD based servers.
A couple of years ago Ian Cutress was promoting and supporting Intel's argument that the market doesn't need more than 4 cores and that building such a CPU is prohibitively expensive and too technologically challenging to build which is why they're targeting just the ultra-enthusiast. Ryzen launched shortly after. And lo and behold, Intel found another 2-4 cores in their pocket to tack on to the old CPUs with barely a price change. They also found that the market wanted and/or needed more cores. And whaddayaknow, they even make a profit on them.
But don't let Intel's history of constantly lying about their overcharging and lack of solid progress YoY get in the way of defending them. I'm sure this time it's fo' real.
iwod - Tuesday, April 10, 2018 - link
I don't think only 5% of the market needs that at all; it is simply that memory has become ridiculously expensive. A few years ago we were hoping the $1000 DRAM would drop to $500; instead it is now $2000.

Kevin G - Monday, April 9, 2018 - link
Cascade Lake is due with Optane DIMM support. I fathom the M suffix will be the only SKUs that will support them due to their expected capacity advantage over LR-DIMMs.

iter - Tuesday, April 10, 2018 - link
Oh goodie, can't wait for overpriced memory modules that cr@p out in a week of typical usage.

There are things I would use hypetane for, but under tight scrutiny in terms of usage patterns. Shoehorning it into a RAM socket and role is tremendously stupid, because performance is piss-poor compared to DRAM, and endurance is practically pathetic. If you don't keep tight control over the usage, it will wear out in no time.
Which is good for intel anyway, since they still struggle hard to sell poor hypetane.
lurker22 - Monday, April 9, 2018 - link
Can't include 1 line in the entire article that defines what the heck the Power Stamp Alliance is?

vgray35@hotmail.com - Tuesday, April 10, 2018 - link
It's a collusion of forces engineered to maximize the profitability of the power chip manufacturers at best, but it seems the acumen of its members leaves a lot to be desired, and hence it is likely to damage the industry given enough time to prevail in its agenda.

Kevin G - Monday, April 9, 2018 - link
" If Intel is using the same on-chip network implementation as Skylake, it would also mean that one of the segments that previously used for a tri-memory channel controller actually has enough space for a quad-memory channel controller."

Alternative take 1: Intel is increasing the core size so there is now room for two quad channel memory controllers on-die.
Alternative take 2: Intel decreased the die size and is now using four dual channel memory controllers on-die.
It is a bit too early to tell which one is reality.
Ian Cutress - Monday, April 9, 2018 - link
^^ this

HStewart - Monday, April 9, 2018 - link
To me this looks like they are moving to EMIB technology on Xeon chips - probably on other lines also. This could mean multiple CPUs, or CPU and GPU.

Also, the adapter sounds like a way to make motherboards handle older processors - or possibly means new motherboards will be upgradable to new processors.

One interesting thing is that the Power Stamp Alliance page does not include Intel as a member, when it looks aimed at Intel products.
Kevin G - Monday, April 9, 2018 - link
EMIB seems to be a natural way to scale up core counts and additional logic on package. I would expect the memory controllers and IO to get spun off into their own separate dies, as that would simplify matching the requisite IO for each socket: 2066, BGA 2518 (Xeon D), 3647, 4189, etc.

HStewart - Monday, April 9, 2018 - link
What really makes EMIB so nice is that it can use different processes on the same chip - for example, the more critical parts can be done on smaller processes while less critical parts like IO can be done on less dense nodes. Even, as in the 980xG example, dies from different manufacturers - not just AMD but NVidia also. But I expect the real use will be Arctic Sound GPU implementations in the future. I believe the AMD usage is only temporary, unless GPU or laptop vendors demand it.

But this article sheds light on another possibility: multiple CPUs, with GPU cores included, on single modules. What this could mean is dual-CPU notebooks one day - not just multi-core CPUs.
peterfares - Tuesday, April 10, 2018 - link
Dual CPU notebooks wouldn't make much sense unless they're going for more than 18 cores in a single CPU. Maybe in a very long time though.

patrickjp93 - Tuesday, April 10, 2018 - link
It would make plenty of sense to save power when not under maximum load.

Ian Cutress - Monday, April 9, 2018 - link
As stated, the adapter is likely just for translating equivalent power pins. I doubt there's any functionality transfer.

edzieba - Tuesday, April 10, 2018 - link
Gaining over 500 pins without any change to footprint, and while maintaining the power pin layout? That would rule out both a 'use smaller pins' modification (power pins would relocate) and a 'use a bigger socket' one (would not be drop-in compatible with LGA3647). I wonder if the "Omni Path Stickey-Outey-Bit" will transition from a card-edge protrusion for a separate cable to an additional pinned area.

Casper42 - Wednesday, March 4, 2020 - link
I know I'm late to this party, but why not just switch to VCSEL or some other silicon photonics option? Effectively have the chip spit out light directly (via interposer or EMIB, most likely) and stop messing with copper cables.
But since I'm replying from the future, Intel already killed off OmniPath so they don't need to worry about this anymore.