Benchmarking Performance: CPU System Tests

Our first set of tests is our general system tests. These tests are meant to emulate what people usually do on a system, like opening large files or processing small stacks of data. This is a bit different from our office testing, which uses more industry-standard benchmarks, and a few of the benchmarks here are relatively new and different.

All of our benchmark results can also be found in our benchmark engine, Bench.

Strategic AI

One of the hot-button topics this year (and for the next few years, no doubt) is how technology is shifting to using artificial intelligence and purpose-built AI hardware to perform better analysis in low-power environments. AI as a concept is not new; we have had it for over 50 years. What is new is the move to neural-network-based training and inference: shifting from ‘if this then that’ rule-based AI to convolutional networks that can perform fractional analysis of all the parameters.

Unfortunately, the neural-network ecosystem is moving at a fast pace right now, especially in software. Every few months or so, announcements are made about new software frameworks, improvements in accuracy, or fundamental paradigm shifts in how these networks should be calculated for accuracy, power, and performance, and what the underlying hardware should support in order to do so. There are no situational AI benchmarking tools using network topologies that will remain relevant for 2-4 months, let alone an 18-24 month processor benchmark cycle. So to that end, our AI test becomes the best of the rest: strategic AI in the latest video games.

For our test we use the in-game Civilization 6 AI benchmark with a few custom modifications. Civilization is one of the most popular strategy video games on the market, heralded for its extended gameplay and its knack for making users suddenly lose 8 hours in a day because they want to play ‘one more turn’. A strenuous setting would involve a large map with 20 AI players on the most difficult settings, leading to a turn time (waiting for all the AI players to move in one turn) that can exceed several minutes on a mid-range system. Note that a Civilization game can easily run for over 500 turns and be played over several months due to the level of engagement and complexity.

Before the benchmark is run, we change the game settings to medium visual complexity at a 1920x1080 resolution while using a GTX 1080 graphics card, such that any rendered graphics do not interfere with the benchmark measurements. Our benchmark run uses a command-line method to call the built-in AI benchmark, which features 8 AI players on a medium-size map but in a late-game scenario with most of the map discovered and each civilization in the throes of modern warfare. We set the benchmark to play for 15 turns and output the per-turn time, which is then read into the script and the geometric mean calculated. This benchmark is newer than most of the others, so we only have a few data points so far:

System: Civilization 6 AI (1080p Medium + GTX 1080)
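
For reference, the reduction from per-turn times to a single score is a simple geometric mean. The sketch below is a minimal, hypothetical version of that step in C++; the input file name and one-value-per-line format are assumptions for illustration, not the actual output of the Civilization 6 benchmark.

```cpp
// Minimal sketch: reduce a list of per-turn AI times (in ms) to a geometric mean.
// "turn_times.txt" and its one-value-per-line format are assumptions, not the
// real benchmark output.
#include <cmath>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::ifstream in("turn_times.txt");
    std::vector<double> times;
    for (double t; in >> t; ) {
        times.push_back(t);
    }
    if (times.empty()) {
        std::cerr << "No turn times found\n";
        return 1;
    }
    // Sum logarithms instead of multiplying raw values to avoid overflow.
    double log_sum = 0.0;
    for (double t : times) {
        log_sum += std::log(t);
    }
    const double geo_mean = std::exp(log_sum / times.size());
    std::cout << "Geometric mean turn time: " << geo_mean << " ms\n";
    return 0;
}
```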

Our Strategic AI test is new to the scene, and it looks like there is at least an asymptotic result when you have a 'good enough' processor.

PDF Opening

First up is a self-penned test using a monstrous PDF we once received in advance of attending an event. While the PDF was only a single page, it had so many high-quality layers embedded that it was taking north of 15 seconds to open and to gain control on the mid-range notebook I was using at the time. This made it a great candidate for our 'let's open an obnoxious PDF' test. Here we use Adobe Reader DC and disable all the update functionality within. The benchmark sets the screen to 1080p, opens the PDF in fit-to-screen mode, and measures the time from sending the command to open the PDF until it is fully displayed and the user can take control of the software again. The test is repeated ten times, and the average time is taken. Results are in milliseconds.
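
The structure of the test itself is a simple repeat-and-average timing loop. Below is a simplified, hypothetical sketch of that loop in C++; open_and_wait_for_pdf() is a placeholder for the real step of launching Adobe Reader DC and detecting when the document is rendered and responsive, which is outside the scope of this sketch.

```cpp
// Simplified sketch of the repeat-and-average timing loop used for the PDF test.
// open_and_wait_for_pdf() is a placeholder: the real harness launches Adobe
// Reader DC and detects when the document is displayed and responsive.
#include <chrono>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

void open_and_wait_for_pdf() {
    // Placeholder for: launch Reader, wait until the page is fully displayed.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
}

int main() {
    constexpr int kRuns = 10;
    std::vector<double> samples_ms;
    for (int i = 0; i < kRuns; ++i) {
        auto start = std::chrono::steady_clock::now();
        open_and_wait_for_pdf();
        auto stop = std::chrono::steady_clock::now();
        samples_ms.push_back(
            std::chrono::duration<double, std::milli>(stop - start).count());
    }
    const double mean_ms =
        std::accumulate(samples_ms.begin(), samples_ms.end(), 0.0) / kRuns;
    std::cout << "Average open time: " << mean_ms << " ms\n";
    return 0;
}
```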

System: PDF Opening with Adobe Reader DC

Single-thread frequency usually works well for PDF Opening, although as we add more high-performance cores it becomes more difficult for the system to pin that individual thread to a single core and get the full turbo boost - if anything flares up on any other core, it brings the frequencies down. I suspect that is what is happening here and in the next couple of tests, where the i7-8700K sits behind the i7-7700K and i7-7740X.

FCAT Processing: link

One of the more interesting workloads that has crossed our desks in recent quarters is FCAT - the tool we use to measure stuttering in gaming due to dropped or runt frames. The FCAT process requires enabling a color-based overlay onto a game, recording the gameplay, and then parsing the video file through the analysis software. The software is mostly single-threaded, however because the video is basically in a raw format, the file size is large and requires moving a lot of data around. For our test, we take a 90-second clip of the Rise of the Tomb Raider benchmark running on a GTX 980 Ti at 1440p, which comes in around 21 GB, and measure the time it takes to process through the visual analysis tool.

System: FCAT Processing ROTR 1440p GTX980Ti Data

Dolphin Benchmark: link

Many emulators are bound by single-thread CPU performance, and general reports tended to suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance in this benchmark is a good proxy for the speed of Dolphin's CPU emulation, which is an intensive single-core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

System: Dolphin 5.0 Render Test

3D Movement Algorithm Test v2.1: link

This is the latest version of the self-penned 3DPM benchmark. The goal of 3DPM is to simulate semi-optimized scientific algorithms taken directly from my doctoral thesis. Version 2.1 improves on 2.0 by passing the main particle structs by reference rather than by value, and by decreasing the number of double->float->double recasts the compiler was adding in. It affords a ~25% speed-up over v2.0, which means new data.
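
As a rough illustration of the kind of change made in v2.1 (this is not the actual 3DPM source), the sketch below contrasts passing a particle struct by value with passing it by const reference, and keeps the arithmetic in a single floating-point type so the compiler has no reason to insert double->float->double conversions.

```cpp
// Rough illustration of the v2.1-style changes, not the actual 3DPM source:
// pass particle structs by const reference instead of by value, and keep all
// the math in one floating-point type to avoid double<->float conversions.
#include <cmath>
#include <iostream>
#include <vector>

struct Particle {
    double x, y, z;
};

// v2.0 style: the struct is copied on every call.
double step_by_value(Particle p, double dt) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) * dt;
}

// v2.1 style: no copy, and all operands stay in double precision.
double step_by_ref(const Particle& p, double dt) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) * dt;
}

int main() {
    std::vector<Particle> particles(1000000, Particle{1.0, 2.0, 3.0});
    double by_value = 0.0, by_ref = 0.0;
    for (const Particle& p : particles) {
        by_value += step_by_value(p, 0.001);  // copies the struct per call
        by_ref   += step_by_ref(p, 0.001);    // no copy
    }
    std::cout << "by value: " << by_value << ", by reference: " << by_ref << "\n";
    return 0;
}
```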

System: 3D Particle Movement v2.1

DigiCortex v1.20: link

Despite being a couple of years old, the DigiCortex software remains a pet project for the visualization of neuron and synapse activity in the brain. The software comes with a variety of benchmark modes, and we take the small benchmark, which runs a 32k neuron / 1.8B synapse simulation. The results are given as a fraction of real-time simulation speed, so anything above a value of one is suitable for real-time work. The benchmark offers a 'no firing synapse' mode, which in essence tests DRAM and bus speed, however we take the firing mode, which adds CPU work with every firing.
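
For clarity, the headline number is effectively a real-time ratio: simulated time divided by wall-clock time, with anything above 1.0 meaning the simulation keeps up with real time. A trivial sketch with placeholder numbers (not DigiCortex output):

```cpp
// Sketch of the real-time metric: simulated seconds per wall-clock second.
// A ratio above 1.0 means the simulation keeps up with real time.
#include <iostream>

int main() {
    // Placeholder values, not actual DigiCortex output.
    double simulated_seconds = 10.0;   // amount of brain time simulated
    double wall_clock_seconds = 12.5;  // time the host CPU actually took
    double realtime_ratio = simulated_seconds / wall_clock_seconds;
    std::cout << "Real-time ratio: " << realtime_ratio
              << (realtime_ratio >= 1.0 ? " (real-time capable)\n"
                                        : " (slower than real time)\n");
    return 0;
}
```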

System: DigiCortex 1.20 (32k Neuron, 1.8B Synapse)

DigiCortex can take advantage of the extra cores, paired with the faster DDR4-2666 memory. The Ryzen 7 chips still sit at the top here, however.

Agisoft Photoscan 1.3.3: link

Photoscan stays in our benchmark suite from the previous version, however now we are running on Windows 10, so features such as Speed Shift on the latest processors come into play. The concept of Photoscan is translating many 2D images into a 3D model - so the more detailed the images, and the more you have, the better the model. The algorithm has four stages, some single-threaded and some multi-threaded, along with some cache/memory dependency in there as well. For the more variably threaded stages, features such as Speed Shift and XFR are able to take advantage of CPU stalls or downtime, giving sizeable speedups on newer microarchitectures. The 1.3.3 test is relatively new, so it has only been run on a few parts so far.
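
To illustrate why a mixed workload like this rewards both single-thread speed and core count, here is a hypothetical sketch of a two-stage pipeline: one serial stage bound by per-core performance, followed by a stage that fans out across all hardware threads. It is not Photoscan's actual algorithm, just the general shape of such a workload.

```cpp
// Hypothetical sketch of a mixed pipeline: a serial stage followed by a
// parallel stage, as a stand-in for a Photoscan-like workload. The stage
// contents are placeholders, not real photogrammetry steps.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

void serial_stage(std::vector<double>& data) {
    // Single-threaded stage: bound by per-core frequency and IPC.
    for (double& v : data) v *= 1.0001;
}

void parallel_stage(std::vector<double>& data) {
    // Multi-threaded stage: bound by core count and memory bandwidth.
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
        workers.emplace_back([&data, begin, end] {
            for (std::size_t i = begin; i < end; ++i) data[i] += 1.0;
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<double> data(1 << 20, 1.0);
    serial_stage(data);
    parallel_stage(data);
    std::cout << "Checksum: "
              << std::accumulate(data.begin(), data.end(), 0.0) << "\n";
    return 0;
}
```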

System: Agisoft Photoscan 1.3.3 (Large) Total Time

Comments

  • DigitalFreak - Thursday, October 5, 2017 - link

    Anyone having an issue with Bench? I'm trying to compare my i7-3770k to the i7-8700k and it comes back with no data. Same with trying the Threadripper 1920x
  • mkaibear - Friday, October 6, 2017 - link

    CPU tests changed so benchmarks weren't comparable. Latest processor tested on the old tests was the 7700K iirc, and not everything is tested on the new tests.

    I'd compare results for the 3770k and the 2600K to get a baseline then you can compare 2600K to the 8700K. It's a bit fiddly, I have to do the same with my 4790K.
  • Ian Cutress - Saturday, October 7, 2017 - link

    We updated our CPU testing suite for Windows 10 in Q1. Regression testing is an ongoing process, though it's been slow because of all the CPU launches this year. Normally we have one or two a year. We're so far at what, 6 or 7 for 2017?
  • mczak - Thursday, October 5, 2017 - link

    Doesn't look to me like the die size actually increased at all due to the increased gate pitch.
    The calculations in the article forgot to account for the increase of the unused area (at the bottom left) - this area is tiny on the 2c die, but increases significantly with each 2 cores added. By the looks of it, that unused area would have grown by about 2 mm^2 or so going from 4 to 6 cores, albeit I'm too lazy to count the pixels...
  • jjj - Thursday, October 5, 2017 - link

    Your conclusion is the weirdest thing ever, you fully ignore the 8350k and AMD.

    In retail, the 8350k will do very very well, and retail is what matters for most readers.
    And ignoring AMD is not ok at all, it's like you think that we are all idiots that buy on brand. You do think that, your system guides make that very clear, but you should not accept, support and endorse such idiotic behavior.
    AMD got hit hard here, Intel takes back the lead and it's important to state that. Sure they might have Pinnacle Ridge in a few months and take back the lead but buyers that can't wait should go with Intel right now, for the most part. AMD could also adjust prices ofc.
  • Tigris - Thursday, October 5, 2017 - link

    Really confused why the pricing listed in this review isn't consistent- for Intel you were posting prices you found online, but for Ryzen you appear to be posting MSRP.

    The truth is- you can find 1700x for $298 right now EASILY (Amazon), yet Microcenter is selling the 8700k for $499.

    If you factor this information in, the AMD solutions are still far more valuable per dollar.
  • wolfemane - Thursday, October 5, 2017 - link

    I really can’t believe the amount of flak Anandtech takes these days. I find it unearned and unwarranted. Out of all the tech sites and forums I manage to read in a given week, Anandtech is the most often quoted and linked to. Hell, I use it as my go-to for reference and comparison (and general reading). My only big complaint is your ads, and I’d gladly pay a sub to completely remove that nonsense and directly support the site!

    Ian, you and your staff deserve far more credit than you get and that’s an injustice. Each piece is pretty thorough and pretty spot on. So for that thank you very much.

    This article is no exception to the rule and is superb. Your graph layouts are a welcome feature!!!!! I look forward to your ever expanding tests as new chips roll in. I think the 8600k is going to be a game changer in the i5 vs i7 performance category for these hexacore cpus. I think that’s why almost all the reviews I’m reading today are with the 8700k and 8400.

    Again, thank you and your staff very much for the work you put into publishing amazing articles!!
  • vanilla_gorilla - Thursday, October 5, 2017 - link

    Personally I buy whatever is best at the time. Right now I'm typing this on a 1700x and I can see a 4770k build on the desk next to me. So it's always funny to see the bias. An Intel review gets posted, and AMD fanboys come out of the woodwork to trash them as paid shills. But it works exactly the same on any positive AMD reviews: Intel fans come in trashing them. It's really odd. Anandtech is one of the most unbiased sites I've found and I trust their reviews implicitly.
  • mkaibear - Saturday, October 7, 2017 - link

    > Anandtech is one of the most unbiased sites I've found and I trust their reviews implicitly.

    Yep. Anyone who looks at AT and sees bias needs to examine their own eyesight.
  • SeannyB - Thursday, October 5, 2017 - link

    For the H.264 encoding tests, you could consider using the "medium" preset or better. The "very fast" preset has a tendency to use fewer cores.
