There are days in this profession when I am surprised, and the longer I stay in the technology industry, the further apart those days become. There are several reasons to be surprised: someone comes out of the blue with a revolutionary product and the ecosystem and infrastructure to back it up, or a company goes above and beyond a recent mediocre pace to take on the incumbents (with or without significant financial backing). Another reason is confusion, as to why such a product would ever be conceived, and another still is seeing how one company reacts to another.

We’ve been expecting the next high-end desktop version of Skylake for almost 18 months now, and fully expected it to be an iterative update over Broadwell-E: a couple more cores, a few more dollars, a new socket, and done. Intel has surprised us with at least two of the reasons above: Skylake-X will increase the core count of Intel’s HEDT platform from 10 to 18.

The Skylake-X announcement is a lot to unpack, and there are several elements to the equation. Let’s start with familiar territory: the first half of the processor launch.

Announcement One: Low Core Count Skylake-X Processors

The last generation, Broadwell-E, offered four processors: two six-core parts, an eight-core part, and a top-tier 10-core processor. The main difference between the two six-core parts was the PCIe lane count, and aside from the hike in pricing for the top-end SKU, these were iterative updates over Haswell-E: two more cores for the top processor.

This strategy from Intel is derived from what it internally calls its 'LCC' silicon, standing for 'low core count'. Intel's enterprise line has three silicon designs: a low core count, a high core count, and an extreme core count, known as LCC, HCC, and XCC respectively. All the processors in the enterprise line are typically cut from these three dies: a 10-core LCC die, for example, can have two cores disabled to make an 8-core part, or a 22-core XCC die can have all but four cores disabled yet retain access to all of the L3 cache, giving an XCC processor with a massive cache structure. For the consumer HEDT platforms, such as Haswell-E and Broadwell-E, the processors made public were all derived from the LCC silicon.
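As a toy illustration of this binning approach, deriving a SKU from a die can be thought of as choosing how many cores to disable and whether the full L3 is retained. The die names and per-core cache figure below are illustrative assumptions for this sketch, not Intel's actual binning rules:

```python
# Toy model of deriving SKUs from a physical die by disabling cores.
# Die definitions and the 2.5 MB-per-core L3 figure are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Die:
    name: str
    max_cores: int
    l3_per_core_mb: float  # L3 physically present per core slice

def derive_sku(die: Die, enabled_cores: int, keep_full_l3: bool = False) -> dict:
    """Disable cores on a die; optionally retain the die's full L3 cache."""
    if not 1 <= enabled_cores <= die.max_cores:
        raise ValueError("cannot enable more cores than the die has")
    l3_cores = die.max_cores if keep_full_l3 else enabled_cores
    return {
        "die": die.name,
        "cores": enabled_cores,
        "l3_mb": round(die.l3_per_core_mb * l3_cores, 2),
    }

lcc = Die("LCC", max_cores=10, l3_per_core_mb=2.5)
xcc = Die("XCC", max_cores=22, l3_per_core_mb=2.5)

print(derive_sku(lcc, 8))                     # an 8-core part cut from LCC silicon
print(derive_sku(xcc, 4, keep_full_l3=True))  # a 4-core XCC part with the full cache
```

The point of the model is simply that one physical design fans out into many SKUs, which is why consumer HEDT parts have historically all traced back to the LCC die.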

The first half of the Skylake-X processor lineup follows this trend. Intel will launch four Skylake-X processors based on the LCC die, which for this platform will have a maximum of 12 cores. All processors will have hyperthreading.

Skylake-X Processors (Low Core Count Chips)

                  Core i7-7800X   Core i7-7820X   Core i9-7900X   Core i9-7920X
Cores/Threads     6/12            8/16            10/20           12/24
Base Clock        3.5 GHz         3.6 GHz         3.3 GHz         TBD
Turbo Clock       4.0 GHz         4.3 GHz         4.3 GHz         TBD
TurboMax Clock    N/A             4.5 GHz         4.5 GHz         TBD
L3 Cache          8.25 MB         11 MB           13.75 MB        TBD (likely 13.75 MB)
PCIe Lanes        28              28              44              TBD (likely 44)
Memory Channels   4               4               4               4
Memory Freq       DDR4-2400       DDR4-2666       DDR4-2666       TBD
Price             $389            $599            $999            $1199

The bottom processor is the Core i7-7800X, running at 3.5 GHz with a 4.0 GHz turbo. This design will not feature Intel’s new ‘favored core’ Turbo 3.0 technology (more on that below), but will have six cores, support quad-channel memory at DDR4-2400, come in at a TDP of 140W, have 28 PCIe lanes, and retail for around $400. This processor will be the entry level model, for any user who needs the benefit of quad-channel memory but perhaps doesn’t need a two-digit number of cores or has a more limited budget.

Next up is the Core i7-7820X, which hits a potential sweet spot in the LCC design. This is an eight-core processor, with the highest LCC base clock of 3.6 GHz and the joint-highest turbo settings: 4.3 GHz for regular turbo and 4.5 GHz for favored core. Unlike the previous processor, this CPU gets support for DDR4-2666 memory.

However, in another break from Intel's regular strategy, this CPU will only support 28 PCIe lanes. Normally only the lowest CPU of the HEDT stack would be adjusted in this way, but Intel is using the PCIe lane allocation as another differentiator as users consider which processor in the stack to go for. This CPU also has a 140W TDP, and comes in at $600. At this price, we would expect it to compete directly against AMD's Ryzen 7 1800X, which will be the equivalent of a generation behind in IPC but $100 cheaper.

Comparison: Core i7-7820X vs. Ryzen 7 1800X

Core i7-7820X                  Features         Ryzen 7 1800X
8 / 16                         Cores/Threads    8 / 16
3.6 / 4.3 GHz (4.5 GHz TMax)   Base/Turbo       3.6 / 4.0 GHz
28                             PCIe 3.0 Lanes   16
11 MB                          L3 Cache         16 MB
140 W                          TDP              95 W
$599                           Price (MSRP)     $499

The third processor is also a change for Intel. Here is the first processor bearing the new Core i9 family. Previously we had Core i3, i5 and i7 for several generations. This time out, Intel deems it necessary to add another layer of differentiation in the naming, so the Core i9 naming scheme was the obvious choice. If we look at what the Core i9 name brings to the table, the obvious improvement is PCIe lanes: Core i7 processors will have 28 PCIe lanes, while Core i9 processors will have 44 PCIe lanes. This makes configuring an X299 motherboard a little difficult: see our piece on X299 to read up on why.

Right now the Core i9-7900X is the only Core i9 with any details: this is a ten-core processor, running with a 3.3 GHz base, a 4.3 GHz turbo and a 4.5 GHz favored core. Like the last processor, it will support DDR4-2666 and has a TDP of 140W. At this level, Intel is now going to charge $100/core, so this 10-core part comes in at a $999 tray price ($1049 retail likely).

One detail that should raise an eyebrow when reading this specification is the price. For Ivy Bridge-E, the top SKU was $999 for six cores. For Haswell-E, the top SKU was $999 for eight cores. For Broadwell-E, we expected the top 10-core SKU to be $999, but Intel pushed the price up to $1721, in line with the way its enterprise processors were priced. For Skylake-X, that pricing scheme has been scrapped again. This 10-core part is now $999, which is what we expected the Broadwell-E based Core i7-6950X to cost. It isn't the top SKU, but the pricing comes back down to reasonable levels.

Meanwhile, for the initial launch of Skylake-X, it is worth noting that this 10-core CPU, the Core i9-7900X, will be the first one available to purchase. More on that later.

Still covering the LCC core designs, the final processor in this stack is the Core i9-7920X. This processor will be coming out later in the year, likely during the summer, but it will be a 12-core processor on the same LGA2066 socket for $1199 (retail ~$1279), being part of the $100/core mantra. We are told that Intel is still validating the frequencies of this CPU to find a good balance of performance and power, although we understand that it might be 165W rather than 140W, as Intel’s pre-briefing explained that the whole X299 motherboard set should be ready to support 165W processors.

In the enterprise space, or at least in previous generations, Intel has always had that processor that consumed more power than the rest. This was usually called the ‘workstation’ processor, designed to be in a single or dual socket design but with a pumped up frequency and price to match. In order for Intel to provide this 12-core processor to customers, as the top end of the LCC silicon, it has to be performant, power efficient, and come in at reasonable yields. There’s a chance that not all the factors are in place yet, especially if they come out with a 12-core part that is clocked high and could potentially absorb some of their enterprise sales.

Given the expected timing of this processor's launch (as mentioned, we were expecting mid-summer), the crosshairs would normally have fallen on Intel's annual IDF conference in mid-August, although that conference has now been cancelled. There are a few gaming events around that time with which Intel may decide to align the launch.

Announcement Two: High Core Count Skylake-X Processors

  • ddriver - Friday, June 2, 2017 - link

    "I would be willing to bet that between 2-4 of those can replace your entire farm and still give you better FLOP/$."

    Not really. Aside from the 3770Ks running at 4.4 GHz, most of the performance actually comes from GPU compute. You can't pack those tiny stock rackmount systems with GPUs. Not that 256 cores @ 2.9 GHz would come anywhere near 256 cores @ 4.4 GHz, even if they had the I/O to accommodate the GPUs.

    And no, Intel is NO LONGER better at FLOPS/$. Actually, it may never have been, considering how cheap AMD processors are. AMD was simply too slow and too power inefficient for me until now.

    And since the launch of Ryzen, AMD offers 50-100% better FLOPS/$, so it is a no-brainer, especially when performance is not only so affordable but actually ample.

    Your whole post narrative basically says "intel fanboy in disguise". I guess it is back to the drawing board for you.
  • Meteor2 - Saturday, June 3, 2017 - link

    Ddriver is our friendly local troll; best ignored and not fed.
  • trivor - Saturday, June 3, 2017 - link

    Whether you're a large corporation with a billion-dollar IT budget and dedicated IT staff, or a SOHO (Small Office Home Office) user with a very limited budget, everyone is looking for bang for the buck. While most people on this site are enthusiasts, we all have some kind of budget to keep. Where do we find the sweet spot for gaming (the intersection of CPU/GPU for the resolution we want)? And more and more, having a fairly quiet system (even more so for an HTPC) is important. While some corporations might be tied to certain vendors (Microsoft, Dell, Lenovo, etc.), they don't necessarily care what components are inside, because it is the vendor that will be warranting the system. For pure home users, not all of these systems are for us. Ryzen 5/7, i5/i7, and maybe i9 are the CPUs and SoCs for us. Anything more than that will not help our gaming performance or even other tasks (video editing/encoding), because even multi-core aware programs (Handbrake) can't necessarily use 16-20 cores. The absolute sweet spot right now is the CPUs around $200 (Ryzen 5 1600/1600X, Core i5), because you can get a very nice system in the $600 range. That will give you good performance in most games and other home-user tasks.
  • swkerr - Wednesday, May 31, 2017 - link

    There may be brand loyalty on the retail side, but it does not exist in the corporate world. Data center managers will look at total cost of ownership. Performance per watt will be key, as well as the cost of the CPU and motherboard. What the corporate world is loyal to is the brand of server, and if Dell/HP etc. make AMD-based servers, then they will add them if the total cost of ownership looks good.

  • Namisecond - Wednesday, May 31, 2017 - link

    Actually, even on the consumer retail side, there isn't brand loyalty at the CPU level (excepting a very vocal subset of the small "enthusiast" community). Brand loyalty is at the PC manufacturer level: Apple, Dell, HP, Lenovo, etc.
  • bcronce - Tuesday, May 30, 2017 - link

    "But at that core count you are already limited by thermal design. So if you have more cores, they will be clocked lower. So it kind of defeats the purpose."

    TDP scales with the square of the voltage. Reduce the voltage 25%, reduce the TDP by almost 50%. Voltage scales non-linearly with frequency. Near the high end of the stock frequency, you're gaining 10% clock for a 30% increase in power consumption because of the large increase in voltage to keep the clock rate stable.
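    The voltage-squared relationship described above can be sketched numerically. This is a minimal illustration assuming the textbook dynamic-power model P ≈ C·V²·f; the voltage and frequency values are made up for the example, not measured Skylake-X figures:

```python
# Minimal numeric sketch of the classic CMOS dynamic-power relation
# P = C * V^2 * f. Input values are illustrative, not real CPU data.

def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Dynamic power estimate in arbitrary units."""
    return capacitance * voltage ** 2 * frequency

base = dynamic_power(1.0, 1.2, 4.0e9)

# Drop voltage 25% at the same clock: power falls by ~44% ("almost 50%").
undervolted = dynamic_power(1.0, 1.2 * 0.75, 4.0e9)
print(f"power reduction: {1 - undervolted / base:.1%}")
```

    Real chips add static leakage on top of this, and the voltage needed for a given frequency varies with process and binning, so treat this purely as the shape of the relationship, not a predictive model.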
  • ddriver - Tuesday, May 30, 2017 - link

    The paragraph next to the one you quoted explicitly states that lower clocks are where you hit the peak of the power/performance ratio curve. Even to an average AT reader it should be obvious that lowered clocks come with lowered voltage.

    There is no "magic formula" like, for example, the quadratic rate of intensity decay for a point light source. TDP vs voltage vs clocks is a function of process scale, maturity, leakage, and operating environment. It is however true that the more you push above the optimal spot, the less performance you will get for every extra watt.
  • boeush - Tuesday, May 30, 2017 - link

    "More cores would be beneficial for servers, where the chips are clocked significantly lower, around 2.5 Ghz, allowing to hit the best power/performance ratio by running defacto underclocked cores.

    But that won't do much good in a HEDT scenario."

    I work on software development projects where one frequently compiles/links huge numbers of files into a very large application. For such workloads, you can never have enough cores.

    Similarly, I imagine any sort of high-resolution (4k, 8k, 16k) raytracing or video processing workloads would benefit tremendously from many-core CPUs.

    Ditto for complex modelling tasks, such as running fluid dynamics, heat transfer, or finite element stress/deformation analysis.

    Ditto for quantum/molecular simulations.

    And so on, and on. Point being, servers are not the only type of system to benefit from high core counts. There are many easily-parallelizable problems in the engineering, research, and general R&D spheres that can benefit hugely.
  • ddriver - Tuesday, May 30, 2017 - link

    The problem is that the industry wants to push HEDT as gaming hardware. They could lower clocks and voltages and add more cores, which would be beneficial to pretty much anything time-consuming like compilation, rendering, encoding, or simulations, as all of those lend themselves very well to multithreading and scale up nicely.

    But that would be too detrimental to gaming performance, so they would lose gamers as potential customers for HEDT. Gamers would go for the significantly cheaper, lower core count, higher clocked CPUs, and so a higher-margin market would be lost.
  • Netmsm - Thursday, June 1, 2017 - link

    "AMD will not and doesn't need to launch anything other than 16 core. Intel is simply playing the core count game, much like it played the Mhz game back in the days of pentium4."
    Exactly ^_^ That's it.
