One of our forum members, Sweepr, posted Intel’s latest pricing list for OEMs dated the 24th of January, and it contained a number of interesting parts worth documenting. The Braswell parts and Skylake Celerons disclosed over the past few months are now available to OEMs, but it’s the parts with Iris Pro that have our attention.

Iris Pro is Intel’s name for their high-end graphics solution. Built on their latest graphics microarchitecture, Gen9, Iris Pro packs in the most execution units (72) as well as a big scoop of eDRAM. For now we assume it’s the 128 MB edition, as Intel’s roadmaps have stated a 4+4e part only on mobile rather than a 4+3e part with 64 MB (only the 2+3e parts are listed as 64 MB), although we are looking for confirmation.

The new parts are listed as:

Xeon E3-1575M v5 (8M cache, 4 Cores, 8 Threads, 3.00 GHz, 14nm) - $1,207
Xeon E3-1545M v5 (8M cache, 4 Cores, 8 Threads, 2.90 GHz, 14nm) - $679
Xeon E3-1515M v5 (8M cache, 4 Cores, 8 Threads, 2.80 GHz, 14nm) - $489

These will compare to the non-Iris Pro counterparts, running P530 graphics (4+2, 24 EUs):

Xeon E3-1535M v5 (8M cache, 4 Cores, 8 Threads, 2.90 GHz, 14nm) - $623
Xeon E3-1505M v5 (8M cache, 4 Cores, 8 Threads, 2.80 GHz, 14nm) - $434

As Sweepr points out, the difference between the 2.8-2.9 GHz parts is only $55-56. That covers both the increase in graphics EUs (from 24 to 72) and the extra on-package eDRAM.
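For the number-crunchers, a quick sanity check of those deltas, using the OEM prices listed above:

```python
# Quick check of the Iris Pro premium, taken from the OEM prices listed above.
prices = {
    "E3-1545M v5 (4+4e)": 679, "E3-1535M v5 (4+2)": 623,  # 2.90 GHz pair
    "E3-1515M v5 (4+4e)": 489, "E3-1505M v5 (4+2)": 434,  # 2.80 GHz pair
}
print(prices["E3-1545M v5 (4+4e)"] - prices["E3-1535M v5 (4+2)"])  # 56
print(prices["E3-1515M v5 (4+4e)"] - prices["E3-1505M v5 (4+2)"])  # 55
```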


The i7-4950HQ with 128 MB eDRAM

We have more reasons to be excited about the eDRAM in Skylake than we did for its earlier outings in Haswell (the i7-4950HQ on mobile) and Broadwell on the desktop (the i7-5775C, i5-5675C and the relevant Xeons). On those older platforms, the eDRAM was not a proper bidirectional cache per se. It acted as a victim cache: data evicted from the L3 cache on the CPU ended up in the eDRAM, but the CPU could not place data from DRAM into the eDRAM without it passing through the L3 first (barring prefetch prediction). This also meant that the eDRAM was invisible to any other devices in the system, and without specific hooks it could not be used by most software or peripherals.
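To illustrate that older arrangement, here is a deliberately oversimplified Python sketch of a victim cache. It is not Intel’s implementation, just a model of the fill policy described above: the only way data enters the eDRAM is by being evicted from the L3, and agents outside the CPU’s cache hierarchy never see it.

```python
# Oversimplified illustration of a victim-cache eDRAM (Haswell/Broadwell style).
# Not Intel's implementation, just the fill policy described above.
from collections import OrderedDict

class VictimCacheEDRAM:
    def __init__(self, capacity_lines):
        self.capacity = capacity_lines
        self.lines = OrderedDict()              # address -> data, in LRU order

    def insert_evicted_line(self, addr, data):
        """The only way data gets in: a line evicted from the CPU's L3."""
        self.lines[addr] = data
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)      # oldest victim falls back toward DRAM

    def lookup(self, addr):
        """Hits only on lines the L3 previously threw out; anything else goes to DRAM."""
        return self.lines.get(addr)

# Note there is no path for a plain DRAM access (by the CPU, GPU or a PCIe device)
# to allocate into this structure directly.
```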

With Skylake this changes: the eDRAM now sits beyond the L3 and the System Agent on the pathway to DRAM, meaning that any request for DRAM space passes through the eDRAM on the way. Rather than acting as a pseudo-L4 cache, the eDRAM becomes a DRAM buffer that is automatically transparent to any software (CPU or IGP) that requires DRAM access. As a result, other hardware that communicates through the System Agent (such as PCIe devices or data from the chipset) and requires information in DRAM no longer needs to navigate through the L3 cache on the processor. Technically graphics workloads still need to circle around the System Agent, perhaps drawing a little more power, but GPU drivers no longer need to worry about the size of the eDRAM now that it behaves like a buffer and is consulted before the memory controller is woken into a higher-power read request.

The underlying message is that the eDRAM is now seen by all DRAM accesses, allowing it to be fully coherent with no need for flushes to maintain that coherency. For display engine tasks, it can also bypass the L3 when required in a standard DRAM access scenario. While the purpose of the eDRAM is to be as seamless as possible, Intel is allowing some level of control at the driver level, so that textures larger than the L3 can be made to reside only in the eDRAM, preventing them from overwriting data in the L3 that would then have to be re-cached for other workloads.
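By contrast with the victim cache above, a buffer sitting on the DRAM path is consulted by every memory request regardless of which agent issued it. Again, a heavily simplified and purely illustrative sketch, not Intel’s implementation:

```python
# Oversimplified illustration of a memory-side DRAM buffer (Skylake-style placement).
# Again, not Intel's implementation, only the routing idea described above.
class DRAM:
    def __init__(self):
        self.cells = {}
    def read(self, addr):
        return self.cells.get(addr, 0)
    def write(self, addr, data):
        self.cells[addr] = data

class MemorySideEDRAM:
    """Sits between the System Agent and the memory controller, so every DRAM
    access (CPU cores, GPU, or PCIe devices) passes through it, keeping it
    coherent without explicit flushes."""
    def __init__(self, dram, capacity_lines):
        self.dram = dram
        self.capacity = capacity_lines
        self.buffer = {}

    def read(self, addr):
        if addr not in self.buffer:             # miss: fetch from DRAM, keep a copy
            if len(self.buffer) >= self.capacity:
                self.buffer.pop(next(iter(self.buffer)))
            self.buffer[addr] = self.dram.read(addr)
        return self.buffer[addr]

    def write(self, addr, data):
        self.buffer[addr] = data                # all writers take the same path,
        self.dram.write(addr, data)             # so no agent can observe stale data
```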

We go into more detail on the changes to Skylake’s eDRAM in our microarchitecture analysis piece from back in September.

It is worth noting that Intel is approaching the mobile Xeon market first, rather than the consumer market as it did with Haswell. eDRAM has always been seen as a play for heavy DRAM workloads, which arguably occur more in professional environments. That still doesn’t stop desktop users from requesting it as well: given that the jump from a 4+2 to a 4+4e package is only $55-$56, applying the same premium to desktop processors would put an i5-6600K with eDRAM at around $299 in retail (vs. the $243 MSRP of the standard i5-6600K).
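That retail estimate is nothing more than the mobile premium applied naively to the desktop MSRP:

```python
# Hypothetical projection only: apply the mobile 4+2 -> 4+4e premium to a desktop part.
i5_6600k_msrp = 243      # standard i5-6600K MSRP quoted above
edram_premium = 56       # delta observed between the mobile Xeon pairs
print(i5_6600k_msrp + edram_premium)   # 299
```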

One of the big tasks this year will be to see how the eDRAM, in its new guise as a DRAM buffer, makes a difference to consumer and enterprise workloads. Now that there are two pairs of CPUs on Intel’s pricing list that are identical aside from the eDRAM, we have to go searching for a source. It seems that HP has already released a datasheet showing the HP ZBook 17 G3 Mobile Workstation as being offered with the E3-1575M v5, which Intel lists at a whopping $1,207. That's certainly not just the extra $55.

Source: AnandTech Forums, Intel


  • nils_ - Wednesday, January 27, 2016 - link

    If it weren't for NVidia's atrocious Linux drivers (although the Intel drivers for Skylake also suck big time) I'd be using my dGPU for everything. As such I'm one of the few people who actually use the iGPU in a high end Skylake...
  • rtho782 - Wednesday, January 27, 2016 - link

    I thought the nvidia binary drivers were supposed to be very good for linux?
  • bug77 - Wednesday, January 27, 2016 - link

    They are. I'm on an intel+nvidia linux setup right now and it works just fine. Dual monitor, too.
  • icrf - Wednesday, January 27, 2016 - link

    My understanding is the open source nvidia drivers suck, but the proprietary binaries are good.
  • BurntMyBacon - Thursday, January 28, 2016 - link

    @icrf: "My understanding is the open source nvidia drivers suck, but the proprietary binaries are good."

    That has been my experience (with the notable exception of the 900 series being a bit buggy).
  • BrokenCrayons - Wednesday, January 27, 2016 - link

    My experience with NV's Linux drivers has been utterly free of problems. I'd happily put them into the "it just works" category.
  • nils_ - Wednesday, January 27, 2016 - link

    They do have problems with the GTX 900 series and tend to pretty much break with every new kernel release. Also they won't work with Wayland.
  • BurntMyBacon - Thursday, January 28, 2016 - link

    @nils_: "They do have problems with the GTX 900 series ..."

    I've noticed this as well. They are workable, but hopefully they get this straightened out soon. It is a speck of dirt on an otherwise (relatively) clean record for their binary drivers.

    @nils_: "... and tend to pretty much break with every new kernel release."

    This is a function of the drivers being closed source binary drivers. nVidia has to recompile the driver for you every time the kernel is updated.
  • nils_ - Wednesday, February 3, 2016 - link

    Yeah, the drivers being closed is part of the problem. But as it stands I can use the iGPU fine for Linux work and dual boot into Windows using the NVidia card for gaming. It's probably more energy efficient as well, if only there were a way to completely disable the Nvidia card in Linux...
  • Fallen Kell - Wednesday, January 27, 2016 - link

    The binary drivers are the best for linux. It is why you don't see a single AMD GPU in any SteamMachine linux system. The only people who think they have atrocious linux drivers are the people using the open source driver or expecting the same performance as on Windows (it won't be until more people use linux).
