Intel’s Plans for Core M, and the OEMs' Dilemma

When Intel put its plans for Core M on the table, it had one primary target that was repeated to the press almost like a mantra: fanless tablets built on the Core architecture. In terms of physical device considerations and the laws of physics themselves, this meant that for any given chassis temperature, tablet size, and thickness, there was an ideal SoC power to aim for:

Core M is clocked and binned such that an 11.6-inch tablet at 8mm thick will only hit 41°C skin temperature with a 4.5 watt SoC in a fanless design. Intel's conceptual graph shows that moving from an 8mm chassis down to 7mm has a bigger effect than moving down from 10mm to 8mm, and that skin temperature responds nearly linearly to screen size. The graph applies only to a metal chassis at 41°C in a 25°C ambient environment, but this is part of the OEM dilemma.
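As a rough illustration of that relationship, the sketch below models sustainable SoC power as a temperature delta divided by a thermal resistance. The resistance and scaling factors are invented assumptions, tuned only so that Intel's stated reference point (11.6-inch, 8mm, 41°C skin at 25°C ambient, 4.5W) falls out and the trends point the same way as the conceptual graph:

```python
# Illustrative sketch (not Intel's actual model): sustainable SoC power as a
# temperature delta over a thermal resistance. The scaling factors below are
# assumptions chosen only to reproduce Intel's reference point and trends.

def soc_power_budget(skin_limit_c, ambient_c, screen_in, thickness_mm):
    """Approximate sustainable fanless SoC power in watts."""
    # Hypothetical: resistance falls as chassis area grows (~diagonal squared)
    # and rises as the chassis gets thinner.
    area_factor = (screen_in / 11.6) ** 2
    thickness_factor = 8.0 / thickness_mm
    r_thermal = (16.0 / 4.5) * thickness_factor / area_factor  # degC per watt
    return (skin_limit_c - ambient_c) / r_thermal

# Intel's reference point: 11.6", 8mm, 41C skin at 25C ambient -> ~4.5W
print(round(soc_power_budget(41, 25, 11.6, 8.0), 2))  # 4.5
# Thinner chassis or smaller screens leave less power budget:
print(round(soc_power_budget(41, 25, 11.6, 7.0), 2))
print(round(soc_power_budget(41, 25, 10.1, 8.0), 2))
```

The exact exponents are guesswork; the point is only that every knob the OEM turns (size, thickness, skin temperature target) moves the power budget.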

When an OEM designs a device for Core M, or any SoC for that matter, it has to consider construction and industrial design as well as overall performance. The design team has to know the limitations of the hardware, but also has to provide something interesting in that market in order to gain share, all within the budgets set forth by those who control the beans.

This, broadly speaking, gives the OEM control over several variables that are out of the hands of the processor designers. Screen size, thickness, industrial design, and skin temperature all have their limits, and adjusting those knobs opens the door to slower or faster Core M units, depending on what the company decides to target. Despite Intel's aim for fanless designs, some OEMs have gone with fans anyway to help relax those limits, though it is not always that simple.

The OEMs' dilemma, for lack of a better phrase, is heat soak causing the SoC to throttle in frequency and performance.

How an OEM chooses to design its products around power consumption and temperature lies at the heart of the device's performance, and can be controlled at the deepest level by the SoC manufacturer through different power states. The OEM's motherboard firmware then takes advantage of these states, moving between them based on battery level, temperature sensors, and what exactly is plugged in. On top of this sit the operating system and software, which the OEM can also preconfigure with add-ins at the point of sale, and this goes for both Windows and OS X. More often than not, the combination of product design and voltage/frequency response is the ultimate play in performance, and this balance can be difficult to get right when designing an 'ideal' system within a specified price range.
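As a sketch of the kind of policy an OEM's firmware might implement, the snippet below maps sensor readings to a power state. The states, thresholds, and function name here are all hypothetical:

```python
# Minimal sketch of a firmware-style power-state policy: pick a state from
# external inputs (skin temperature, battery, AC). Every state name and
# threshold below is a made-up assumption for illustration.

def select_power_state(skin_temp_c, battery_pct, on_ac):
    """Map sensor readings to a (hypothetical) SoC power state."""
    if skin_temp_c >= 41:           # at the skin-temperature limit: back off
        return "low_power"
    if on_ac and skin_temp_c < 38:  # plugged in, with thermal headroom
        return "full_turbo"
    if battery_pct < 15:            # preserve what battery remains
        return "low_power"
    return "balanced"

print(select_power_state(36, 80, True))   # full_turbo
print(select_power_state(42, 80, True))   # low_power
print(select_power_state(39, 10, False))  # low_power
```

A real implementation lives in firmware and OS drivers rather than application code, but the shape of the decision is the same.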

To say this is a new issue would be to disregard the years of product design up until this point. Intel used to differentiate in this space by defining the Scenario Design Power (SDP) of a processor, meaning that the OEM should aim for a thermal dissipation target equal to the SDP. In some circles this was seen as a diversionary tactic away from the true thermal design power properties of the silicon, and it was seemingly scrapped soon after introduction. That being said, the 5Y10c model of the Core M lineup officially has an SDP of 3.5W, although it otherwise has the same specifications as the 5Y10. Whether this 3.5W SDP is a precautionary measure or not, we are unsure.

For those of us with an interest in the tablet, notebook, and laptop industry, we've seen a large number of oddly designed products that either get very hot due to a combination of factors, or are very loud due to fans compounded by bad design. The key issue at hand is heat soak from the SoC and surrounding components. Heat soak comes down to the ability (or lack thereof) of the chassis to absorb heat and spread it across a large area. This mostly revolves around the heatsink arrangement and whether the device can move heat away from the important areas quickly enough.

The thermal conductivity (measured in watts per meter kelvin) of the heatpipes/heatsinks and the specific heat capacity (measured in joules per kelvin per kilogram) define how much heat the system can hold and how quickly its temperature rises in an environment devoid of airflow. This is obviously important at the fanless end of the spectrum, the tablets and 2-in-1s at which Core M is aimed, but adding headroom against heat soak fundamentally requires adding mass, which is often the opposite of what the OEM wants to do. One would imagine that a sufficiently large device with a fan would have a higher SoC/skin temperature tolerance, but this is where heat soak can play a role: without a sufficient heat movement mechanism, the larger device can end up overheating more quickly than a smaller one.
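The lumped-capacitance view of this can be written down in a few lines: temperature rises according to dT/dt = (P_in − (T − T_amb)/R) / C, where C is the heat capacity in J/K and R the thermal resistance to ambient in K/W. The values below are illustrative assumptions, not measurements of any device:

```python
# Back-of-the-envelope lumped-capacitance model of heat soak: one thermal
# mass heated by the SoC and cooled passively to ambient. All values are
# illustrative assumptions, not measurements of any real device.

def simulate_temp(p_in_w, c_j_per_k, r_k_per_w, t_amb_c=25.0,
                  seconds=600, dt=1.0):
    """Chassis temperature (degC) after `seconds` of constant power input."""
    t = t_amb_c
    for _ in range(int(seconds / dt)):
        # dT/dt = (P_in - (T - T_amb)/R) / C, integrated with Euler steps
        t += (p_in_w - (t - t_amb_c) / r_k_per_w) / c_j_per_k * dt
    return t

# Same 4.5W SoC, same resistance to ambient, different thermal mass:
light = simulate_temp(4.5, c_j_per_k=200.0, r_k_per_w=4.0)  # thin tablet
heavy = simulate_temp(4.5, c_j_per_k=600.0, r_k_per_w=4.0)  # heavier chassis
print(round(light, 1), round(heavy, 1))  # the lighter chassis runs hotter
```

With the same power input and the same resistance, the heavier chassis climbs noticeably more slowly toward the identical steady-state temperature, which is exactly the headroom that extra mass buys.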


Examples of Thermal Design/Skin Temperature in Surface Pro and Surface Pro 2 during 3DMark

Traditionally, either a sufficiently large heatsink (which might include the chassis itself) or a fan is used to provide a temperature delta and drive heat away. In the Core M units we have tested at AnandTech so far this year, we have seen a variety of implementations, with and without fans and in a variety of form factors. But the critical point of all of this comes down to how the OEM defines the SoC/skin temperature limitations of the device, and this ends up being why the low-end Core M-5Y10 can beat the high-end Core M-5Y71, a pertinent part of our tests.

Simply put, if the system with 5Y10 has a higher SoC/skin temperature, it can stay in its turbo mode for longer and can end up outperforming a 5Y71, leading to some of the unusual results we've seen so far.

The skin temperature response by the SoC is also at the mercy of firmware updates, meaning that from BIOS to BIOS, performance may be different. As always, our reviews are a snapshot in time. Typically we test our Windows tablets, 2-in-1s and laptops on the BIOS they are shipped with barring any game-breaking situation which necessarily requires an update. But OEMs can change this at any time, as we experienced in our recent HTC One M9 review, which resulted in a new software update giving a lower skin temperature.

We looped back to Intel to discuss the situation. Ultimately they felt that their guidelines are clear, and that it is up to the OEM to produce a design they feel comfortable shipping with the hardware they want inside it. They did point out, however, that there are two sides to every benchmark, and that performance will depend heavily on the benchmark's length and the design of the cooling solution:

Intel Core M Response

                    Low Skin/SoC Temperature Setting    High Skin/SoC Temperature Setting
Short Benchmark     Full Turbo                          Full Turbo
Medium Benchmark    Depends on Design                   Turbo
Long Benchmark      Low Power State                     Depends on Design

Ultimately, short benchmarks should all follow the turbo mode guidelines. How short is short? That depends on the thermal conductivity of the design, but we might consider light office work to be of the same sort of nature. When longer benchmarks come into play, the SoC/skin temperature limit, the design of the system, and the software controlling the turbo modes can kick in, reducing frequency to bring the CPU temperature down and resulting in a slower system.
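The table above can be turned into a toy model: two hypothetical configurations where the nominally slower chip is allowed a higher skin temperature. All frequencies, heating rates, and limits below are invented for illustration, loosely echoing the 5Y71 (2.9 GHz turbo) and 5Y10 (2.0 GHz turbo):

```python
# Toy model of the response table: accumulate "work" (GHz-seconds) while a
# crude thermal model throttles turbo at a skin-temperature limit. All
# frequencies, rates, and limits are invented assumptions for illustration.

def work_done(turbo_ghz, base_ghz, temp_limit_c, benchmark_s,
              heat_per_s=0.2, cool_per_s=0.03, t_amb=25.0, hysteresis=4.0):
    """Total work over a benchmark of `benchmark_s` seconds."""
    temp, work, in_turbo = t_amb, 0.0, True
    for _ in range(benchmark_s):
        if in_turbo and temp >= temp_limit_c:
            in_turbo = False                      # hit the skin limit: throttle
        elif not in_turbo and temp <= temp_limit_c - hysteresis:
            in_turbo = True                       # cooled off: re-arm turbo
        freq = turbo_ghz if in_turbo else base_ghz
        work += freq
        # crude: heating scales with frequency, cooling with excess temperature
        temp += heat_per_s * freq - cool_per_s * (temp - t_amb)
    return work

for secs in (30, 600):                            # short vs long benchmark
    fast_low = work_done(2.9, 1.2, 38.0, secs)    # "5Y71-like", low skin limit
    slow_high = work_done(2.0, 0.8, 44.0, secs)   # "5Y10-like", high skin limit
    print(secs, round(fast_low), round(slow_high))
```

In this toy model the faster bin wins the short run outright, but over the long benchmark it spends most of its time bouncing off its lower temperature limit, and the slower bin with more thermal headroom pulls ahead.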

What This Means for Devices Like the Apple MacBook

Apple’s latest MacBook launch has caused a lot of fanfare, with much of the talk centering on the very small internal PCB and the extremely thin chassis. Apple is offering a range of configurations, including the highest Core M bin, the 5Y71, which in its standard mode allows a 4.5W part to turbo up to 2.9 GHz. Given the clout Apple has, it would be nearly impossible to determine whether these are normal cores or specially binned low-voltage processors from Intel, but either way the Apple chassis design faces the same issue as other mobile devices, and perhaps even more so. With the PCB being small and the bulk of the design given over to batteries, without a sufficient chassis-based dispersion cooling system there is a potential for heat soak and a reduction in frequencies. It all depends on Apple’s design, and the setting for the skin temperature.

Core M vs. Broadwell-U

The OEMs' dilemma also plays a role higher up in the TDP stack, simply because more power consumed means more energy lost as heat. But because Core M is a premium play in the low-power space, the typical rules are a little more relaxed for Broadwell-U due to its pricing, not to mention that the stringent design restrictions associated with premium products only apply at the super high end. Nonetheless, we are going to see some exceptional Core M devices that at times get very close to Broadwell-U in performance. To that end, we’ve included an i5-5200U data set with our results here today.

Big thanks to Brett for accumulating and analyzing all this data in this review.


  • zodiacfml - Wednesday, April 8, 2015 - link

    A long wait for this look at the performance of Core M. Thanks. Like all nice, popular movies, the ending is pretty expected after the review of the Asus UX305. It's also good that the Dell is there to provide scores with no limitation on cooling for long continuous loads.

    After all this, I don't see any problem. The performance of the Asus is pretty expected as well, as it has a traditional notebook design which is fairly overkill for the SDP/TDP.

    I was a PC overclocker many years ago and then realized that underclocking and overclocking at the same time would be ideal. I believe the race to wider CPU dynamic range has become mainstream.
  • dragonsqrrl - Wednesday, April 8, 2015 - link

    "Each model comes with 4MB of L2 cache" On the first page.

    Shouldn't that be L3 cache?
  • dananski - Wednesday, April 8, 2015 - link

    I love how the Asus tries to draw a piano keyboard in the PCMark 8 Creative graph. Very creative of it.
  • DryAir - Wednesday, April 8, 2015 - link

    The temperature vs. time graphs are all messed up. The lines go "back" on many occasions, indicating two different temperatures at the same time stamp. You should check the settings on whatever program you are using to generate these graphs.
  • be_prime - Wednesday, April 8, 2015 - link

    I just signed up to comment on the same thing -- the graphs are so clearly distorted by some (no doubt well-intentioned) spline/smoothing that much (even most?) of the data we see here may be the product of a spline or interpolation process, and not represent a data measurement. Where the line goes "back", as DryAir pointed out, it implies time travel.

    That's a very big miss for a site that I've considered to be thoughtful and authoritative. The approach you took here presents false and interpolated data and obscures the quality of your research. Don't let the goal of an attractive graph ruin the whole point of the graph: showing the data.

    These graphs are obviously impossible due to the spline/interpolation used, and should be replaced by a scatter plot or normal line graph.
  • Brett Howse - Wednesday, April 8, 2015 - link

    As I mentioned on the Devices and Test page, sometimes the devices were very heavily loaded and they were not able to log consistently. Sometimes they would log twice in the same second, but with slightly different values. One log would be time 0:00:01:05, and another would log 0:00:01:95 (for instance), but both would be truncated to the same second. Unfortunately that's just the limit of the software, since it only logs time to the nearest second. A second can be a lot of time for a CPU.
  • be_prime - Thursday, April 9, 2015 - link

    That's fine because those data points represent measurements.

    The problem here is you've used interpolated splines/curves which, in this case, actually show impossible or false information: the curve leaning "left" implies that the x-axis (time) is decreasing: that's time travel, and it'd be a bigger story than the Core M for sure, right?

    Also recognize that if you're gathering data points, but drawing a line, you're always implicitly creating an interpolation between those points (at least in viewers minds). Usually, it doesn't matter so much. Here, the resulting lines are false, and I think Anandtech is a better publication than that.

    As it stands, the interpolation/smoothing on your graphs implies time travel. Respectfully: please correct this (or, patent the relevant technology and profit!). If you're going to make your graphs look "pretty" and don't care if they're correct, I can't trust your results.
  • DryAir - Friday, April 10, 2015 - link

    Sarcastic time travel jokes aside, I agree that you should change it somehow. Perhaps just connect the data points with straight lines instead of a smoothed curve. Right now it's looking very amateurish, not matching an otherwise great and highly technical review.
  • Brett Howse - Friday, April 10, 2015 - link

    Ice Storm was the worst offender, so I've re-generated the graphs with straight lines. There just were not enough data points on that one because it was so short.
  • gw74 - Wednesday, April 8, 2015 - link

    I am furious that OEMs are using Core M in ultrabooks. It is the solution to a problem which does not exist. The Samsung Series 9 / ATIV 9 Plus use full fat i5 and i7 ULVs and the 2 tiny fans hardly ever come on. when they do, they sound like mice whispering. and huge battery life.

    Core M is not progress when used in the ultrabook factor. it is a step backwards and a ripoff.
