CPU Benchmark Performance: AI and Inferencing

As technology progresses at a breakneck pace, so too do the demands of modern applications and workloads. With artificial intelligence (AI) and machine learning (ML) becoming increasingly intertwined with our daily computational tasks, it's paramount that our reviews evolve in tandem. Recognizing this, we have incorporated AI and inferencing benchmarks into our CPU test suite for 2024.

Traditionally, CPU benchmarks have focused on various tasks, from arithmetic calculations to multimedia processing. However, with AI algorithms now driving features within some applications, from voice recognition to real-time data analysis, it's crucial to understand how modern processors handle these specific workloads. This is where our newly incorporated benchmarks come into play.

With chip makers such as AMD (Ryzen AI) and Intel (with its Meteor Lake mobile platform) building AI-driven hardware into their silicon, 2024 looks set to bring many applications that use AI-based technologies to market.

We are using DDR5 memory on the Core i9-14900KS, as well as the other Intel 14th Gen Core series processors, including the Core i9-14900K, Core i7-14700K, and Core i5-14600K, and Intel's 13th Gen chips, at their respective JEDEC settings. The same methodology is also used for the AMD Ryzen 7000 series and Intel's 12th Gen (Alder Lake) processors. Below are the settings we have used for each platform:

  • DDR5-5600B CL46 - Intel 14th & 13th Gen
  • DDR5-5200 CL44 - Ryzen 7000
  • DDR5-4800 (B) CL40 - Intel 12th Gen

[Benchmark result charts (graphs not shown):]

  • (6-1) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)
  • (6-1b) ONNX Runtime 1.14: CaffeNet 12-int8 (CPU Only)
  • (6-1c) ONNX Runtime 1.14: Super-Res-10 (CPU Only)
  • (6-1d) ONNX Runtime 1.14: Super-Res-10 (CPU Only)
  • (6-2) DeepSpeech 0.6: Acceleration CPU
  • (6-3) TensorFlow 2.12: VGG-16, Batch Size 16 (CPU)
  • (6-3b) TensorFlow 2.12: VGG-16, Batch Size 64 (CPU)
  • (6-3d) TensorFlow 2.12: GoogLeNet, Batch Size 16 (CPU)
  • (6-3e) TensorFlow 2.12: GoogLeNet, Batch Size 64 (CPU)
  • (6-3f) TensorFlow 2.12: GoogLeNet, Batch Size 256 (CPU)
  • (6-4) UL Procyon Windows AI Inference: MobileNet V3 (float32)
  • (6-4b) UL Procyon Windows AI Inference: ResNet 50 (float32)
  • (6-4c) UL Procyon Windows AI Inference: Inception V4 (float32)

Regarding AI and inferencing workloads, there is virtually no difference or benefit in opting for the Core i9-14900KS over the Core i9-14900K. While Intel takes the win in our TensorFlow-based benchmark, the AMD Ryzen 9 7950X3D and 7950X both seem better suited to the type of AI workloads we've tested.
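For readers curious how a TensorFlow CPU throughput number is generated in principle, here is a rough sketch of a CPU-bound Keras loop. It is not our benchmark code; the batch size, run count, and the use of randomly initialized VGG-16 weights are assumptions for illustration only.

    # Illustrative only: a rough CPU-bound TensorFlow/Keras throughput loop,
    # not the benchmark behind the charts above. Batch size and run count are assumptions.
    import time
    import numpy as np
    import tensorflow as tf

    tf.config.set_visible_devices([], "GPU")   # hide any GPUs so inference stays on the CPU

    model = tf.keras.applications.VGG16(weights=None)   # random weights; throughput only
    batch = np.random.rand(16, 224, 224, 3).astype(np.float32)

    model.predict(batch, verbose=0)            # warm-up pass

    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{runs * 16 / elapsed:.1f} images per second")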

Comments

  • kobblestown - Friday, May 10, 2024 - link

    Right next to https://www.anandtech.com/show/21392/amd-hits-reco... at the front page at the moment.
  • FatFlatulentGit - Friday, May 10, 2024 - link

    Dangit, here I was gonna have a walk down memory lane and the benchmark images are all missing.
  • edwpang - Friday, May 10, 2024 - link

    History repeats itself. 20 years ago, the Intel Pentium 4 Prescott increased clock speed, but with not much performance gain:
    https://www.anandtech.com/show/1230
  • Samus - Saturday, May 11, 2024 - link

    Yeah but Intel has gone full insanity here. If Prescott could heat my dorm room in college, the Raptor Lake on juice should be able to heat an entire house. This is over 3x the heat output of the hottest Prescott!
  • GeoffreyA - Saturday, May 11, 2024 - link

    That's the thing. Prescott has this reputation, rightly so, but I don't think it went over 200 W. Cedar Mill further curtailed the TDP.
  • boozed - Sunday, May 12, 2024 - link

    I'd love to have seen some specific performance results, i.e. performance/watt or work/energy.
  • Samus - Tuesday, May 14, 2024 - link

    AT did some testing on this last year, and the short of it is that, unsurprisingly, AMD scales up AND down better than Intel when it comes to performance per watt, but Intel can hit higher TDPs due to limits AMD has on their package power.

    https://www.anandtech.com/show/17641/lighter-touch...
  • James5mith - Friday, May 10, 2024 - link

    Would have liked to see this review done at the Intel-dictated stock settings rather than the motherboard defaults.

    https://www.anandtech.com/show/21374/intel-issues-...
  • Gavin Bonshor - Friday, May 10, 2024 - link

    Don't worry; I will be testing the Intel Default settings, too. I'm testing over the weekend and adding them in.

    I tested as we normally do, because it keeps the data set consistent. As I state on the first page:

    "This does pose questions when it comes to testing and reviewing Intel's 14th and 13th Gen processors. We have been considering our standpoint on this, as we will typically test at the default motherboard settings with memory set to JEDEC specifications of the specific processor we're testing. For this review, we will be testing how we usually test, as this fits within the realm of keeping things consistent."

    Intel's back-and-forth with motherboard vendors on this issue has raised many questions. We intend to address it as soon as possible. We already test with memory as per JEDEC, and I usually get a lot of criticism about why I don't test with DDR5-6000 or DDR5-7200, etc.

    Don't worry, we will be addressing this in-house.
  • yannigr2 - Friday, May 10, 2024 - link

    I am expecting reviews to start fixing Intel's inaccuracies, not "keeping things consistent". Using an overclocked CPU, in an overclocked state, in a review helps maintain inconsistency, not consistency. Especially when there is a chance the CPU will degrade even within the short warranty period. The fact that this is a pre-overclocked CPU from Intel doesn't mean it should be tested that way, especially after the latest revelations. Whatever data you had should have gone directly to the trash bin, and testing should only be done with whatever Intel believes is "in spec".
