Intel’s Plans for Core M, and the OEMs' Dilemma

When Intel put its plans on the table for Core M, it had one primary target that was repeated almost mantra-like to the press: fanless tablets using the Core architecture. In terms of device design and the laws of physics, this meant that for any given chassis temperature, tablet size, and thickness, there was an ideal SoC power to aim for:

Core M is clocked and binned such that an 11.6-inch tablet at 8mm thick will only hit a 41°C skin temperature with a 4.5 watt SoC in a fanless design. In Intel's conceptual graph we see that moving from 8mm down to a 7mm chassis has a bigger effect than moving from 10mm to 8mm, and that screen dimensions have a near-linear response. The graph applies only to a metal chassis at 41°C in a 25°C ambient environment, but this is part of the OEM dilemma.
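
As a rough illustration of the relationship Intel's graph describes, consider a toy steady-state model that treats the chassis as a flat slab shedding heat by natural convection. The convection coefficient, the dimensions, and the perfect heat spreading are all our own assumptions here, not Intel's thermal model:

```python
# Toy steady-state model of a fanless tablet's skin temperature.
# Assumptions (ours, not Intel's): the chassis spreads SoC heat perfectly
# over both faces and sheds it by natural convection with coefficient h
# (roughly 5-10 W/m^2*K in still air).

def skin_temp_c(soc_power_w, width_m, height_m, ambient_c=25.0, h=7.0):
    area_m2 = 2 * width_m * height_m        # both faces of the slab
    delta_t = soc_power_w / (h * area_m2)   # Newton's cooling: P = h * A * dT
    return ambient_c + delta_t

# An 11.6-inch tablet is roughly 0.26 m x 0.16 m of chassis surface.
print(round(skin_temp_c(4.5, 0.26, 0.16), 1))
```

Real chassis do not spread heat perfectly, so actual skin temperatures run hotter near the SoC than this model suggests; what the sketch does show is why a larger surface area buys thermal headroom, and why shrinking the screen or thinning the chassis pushes the ideal SoC power down.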

When an OEM designs a device for Core M, or any SoC for that matter, they have to consider construction and industrial design as well as overriding performance. The design team has to know the limitations of the hardware, but also has to provide something interesting in that market in order to gain share within the budgets set forth by those that control the beans.

This, broadly speaking, gives the OEM control over several parameters that are out of the hands of the processor designers. Screen size, thickness, industrial design, and skin temperature all have their limits, and adjusting those knobs opens the door to slower or faster Core M units, depending on what the company decides to target. Despite Intel’s aim for fanless designs, some OEMs have gone with fans anyway to help lift those limits; however, it is not always that simple.

The OEMs' dilemma, for lack of a better phrase, is heat soak causing the SoC to throttle in frequency and performance.

How an OEM designs a product around power consumption and temperature lies at the heart of the device's performance. At the deepest level, the SoC manufacturer provides control by implementing different power states. The OEM's firmware then takes advantage of these on the motherboard, moving between states based on battery level, external temperature sensors, and what exactly is plugged in. Above that sit the operating system and software, which the OEM can also predefine with add-ins at the point of sale over the base install – this goes for both Windows and OS X. More often than not, the combination of product design and voltage/frequency response is the ultimate determinant of performance, and this balance can be difficult to get right when designing an ‘ideal’ system within a specified price range.
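
As a sketch of the kind of policy logic involved – with the state names, thresholds, and power limits entirely invented for illustration – a firmware decision between power states might look like this:

```python
# Hypothetical sketch of the policy an OEM's firmware or embedded controller
# might implement when choosing a SoC power limit from sensor inputs.
# All thresholds and wattages here are invented for illustration only.

def pick_power_limit_w(skin_temp_c, battery_pct, on_ac):
    if skin_temp_c >= 41.0:            # at the skin-temperature ceiling
        return 2.0                     # drop to a low power state
    if not on_ac and battery_pct < 15:
        return 2.0                     # on battery and low: preserve runtime
    if skin_temp_c >= 38.0:            # approaching the ceiling
        return 4.5                     # hold nominal TDP, no turbo budget
    return 6.0                         # thermal headroom available: allow turbo

print(pick_power_limit_w(35.0, 80, on_ac=True))
```

Two devices with identical silicon can diverge purely on these thresholds, which is exactly the OEM-controlled balance described above.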

To say this is a new issue would be to disregard the years of product design up until this point. Intel used to differentiate in this space by defining the Scenario Design Power (SDP) of a processor, meaning that the OEM should aim for a thermal dissipation target equal to the SDP. In some circles this was seen as a diversionary tactic away from the silicon's true thermal design power, and it was seemingly scrapped soon after introduction. That being said, the 5Y10c model of the Core M lineup officially has an SDP of 3.5W, although it otherwise has the same specifications as the 5Y10. Whether this 3.5W SDP is a precautionary measure or not, we are unsure.

For those of us with an interest in the tablet, notebook, and laptop industry, we’ve seen a large number of oddly designed products that either get very hot due to a combination of factors, or are very loud due to fans compensating for poor design. The key issue at hand is heat soak from the SoC and surrounding components. Heat soak comes down to the ability (or lack thereof) of the chassis to absorb heat and spread it across a large area. This mostly revolves around the heatsink arrangement and whether the device can move heat away from the important areas quickly enough.

The thermal conductivity (measured in watts per meter-Kelvin) of the heatpipes/heatsinks and the specific heat capacity (measured in joules per Kelvin per kilogram) define how much heat the system can hold and how quickly the temperature rises in an environment devoid of airflow. This is obviously important at the fanless end of the spectrum, the tablets and 2-in-1s that Core M is aimed at, but adding headroom against heat soak fundamentally requires adding mass, which is often the opposite of what the OEM wants to do. One would imagine that a sufficiently large device with a fan would have a higher SoC/skin temperature tolerance, but this is where heat soak can play a role: without a sufficient heat-movement mechanism, the larger device can end up overheating more quickly than a smaller one.
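
The specific heat relationship can be put into numbers. Ignoring airflow and losses entirely, the temperature rise is simply the energy absorbed divided by the thermal mass; the 0.3 kg chassis mass below is an illustrative assumption:

```python
# Back-of-envelope heat soak with no airflow: ignoring losses, the chassis
# temperature rise is dT = P * t / (m * c), where c is the specific heat
# capacity in J/(kg*K). The 0.3 kg mass is an assumption; c = 900 is
# roughly aluminium.

def temp_rise_c(power_w, seconds, mass_kg, c_j_per_kg_k=900.0):
    return power_w * seconds / (mass_kg * c_j_per_kg_k)

# 4.5 W soaked into a 0.3 kg chassis for 10 minutes:
print(round(temp_rise_c(4.5, 600, 0.3), 1))
```

Doubling the mass to 0.6 kg halves the rise, which is precisely the mass-for-headroom trade-off that OEMs chasing thin designs are reluctant to make.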


Examples of Thermal Design/Skin Temperature in Surface Pro and Surface Pro 2 during 3DMark

Traditionally, either a sufficiently large heatsink (which might include the chassis itself) or a fan is used to provide a temperature delta and drive heat away. In the Core M units we have tested at AnandTech so far this year, we have seen a variety of implementations, with and without fans, in a variety of form factors. But the critical point of all of this comes down to how the OEM defines the SoC/skin temperature limits of the device, and this ends up being why the low-end Core M-5Y10 can beat the high-end Core M-5Y71, a pertinent part of our tests.

Simply put, if the system with the 5Y10 has a higher SoC/skin temperature limit, it can stay in its turbo mode for longer and end up outperforming a 5Y71, leading to some of the unusual results we've seen so far.
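
A minimal simulation makes the effect concrete. Every constant below – power levels, temperature limits, thermal mass, loss coefficient – is invented for illustration, but the shape of the result matches what we observe: over a long run, the slower part in a chassis that tolerates a higher temperature does more total work:

```python
# Minimal throttling simulation: each second, run at turbo power while the
# chassis is under its (assumed) temperature limit, otherwise fall back to
# base power. Total work (power * time) is a crude proxy for a long
# benchmark score. All constants are invented for illustration.

def work_done(turbo_w, base_w, temp_limit_c, seconds,
              mass_kg=0.3, c=900.0, loss_w_per_k=0.3, ambient_c=25.0):
    temp, work = ambient_c, 0.0
    for _ in range(seconds):
        power = turbo_w if temp < temp_limit_c else base_w
        # heat in minus convective losses, spread over the thermal mass
        temp += (power - loss_w_per_k * (temp - ambient_c)) / (mass_kg * c)
        work += power
    return work

# Faster SKU in a design capped at a cool 36 C skin temperature...
fast_sku_cool_chassis = work_done(6.5, 3.0, 36.0, seconds=1800)
# ...versus a slower SKU allowed to run the chassis up to 45 C.
slow_sku_warm_chassis = work_done(5.5, 3.0, 45.0, seconds=1800)
print(slow_sku_warm_chassis > fast_sku_cool_chassis)
```

In this toy run the "fast" part spends most of the half hour pinned at its low temperature ceiling, while the "slow" part never throttles at all.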

The skin temperature response by the SoC is also at the mercy of firmware updates, meaning that performance may differ from BIOS to BIOS. As always, our reviews are a snapshot in time. Typically we test our Windows tablets, 2-in-1s, and laptops on the BIOS they ship with, barring any game-breaking situation that requires an update. But OEMs can change this at any time, as we experienced in our recent HTC One M9 review, where a new software update resulted in a lower skin temperature.

We looped back to Intel to discuss the situation. Ultimately they felt that their guidelines are clear, and that it is up to the OEM to produce a design they feel comfortable shipping with the hardware they want inside it. They did, however, point out that there are two sides to every benchmark, and that performance will depend heavily on benchmark length and the thermal design of the solution:

Intel Core M Response
                    Low Skin/SoC Temperature Setting    High Skin/SoC Temperature Setting
Short Benchmark     Full Turbo                          Full Turbo
Medium Benchmark    Depends on Design                   Turbo
Long Benchmark      Low Power State                     Depends on Design

Ultimately, short benchmarks should all follow the turbo mode guidelines. How short is short? That depends on the thermal design of the device, but we might consider light office work to be of the same nature. When longer benchmarks come into play, the SoC/skin temperature limit, the design of the system, and the software controlling the turbo modes can kick in and reduce the CPU frequency to control temperature, resulting in a slower system.
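
"How short is short?" can be bounded with simple thermal-mass arithmetic. Ignoring convective losses (which makes this an underestimate of the real window), the time to soak from ambient to the skin limit is the stored energy divided by the input power; the chassis mass and material here are assumptions:

```python
# Bounding "how short is short": ignoring convective losses (so this
# underestimates the real figure), the time to soak from ambient to the
# skin-temperature limit is the stored energy divided by the input power.
# Chassis mass and material (0.3 kg, aluminium-like c = 900) are assumptions.

def seconds_to_limit(power_w, mass_kg, c_j_per_kg_k, ambient_c, limit_c):
    return mass_kg * c_j_per_kg_k * (limit_c - ambient_c) / power_w

# 4.5 W into a 0.3 kg chassis, 25 C ambient, 41 C skin limit:
print(round(seconds_to_limit(4.5, 0.3, 900.0, 25.0, 41.0)))
```

That works out to roughly a quarter of an hour in this toy design: anything well inside that window completes at full turbo, while longer runs collide with the skin-temperature policy.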

What This Means for Devices Like the Apple MacBook

Apple’s latest MacBook launch has caused a lot of fanfare. There has been a lot of talk about the very small size of the internal PCB, as well as the extremely thin chassis design. Apple is offering a range of configurations, including the highest Core M bin, the 5Y71, which allows a 4.5W part to turbo up to 2.9 GHz. Given Apple's clout, it is difficult to determine from the outside whether these are standard parts or specially binned low-voltage processors from Intel, but either way the chassis design faces the same issue as other mobile devices, and perhaps even more so. With the PCB being small and the bulk of the design given over to batteries, without a sufficient chassis-based cooling system there is a potential for heat soak and a reduction in frequencies. It all depends on Apple’s design, and its setting for the skin temperature.

Core M vs. Broadwell-U

The OEMs' dilemma also plays a role higher up in the TDP stack, specifically because more energy is being dissipated as heat. But because Core M is a premium play in the low-power space, the typical rules are a little more relaxed for Broadwell-U due to its pricing, and the stringent design restrictions associated with premium products apply only at the very high end. Nonetheless, we are going to see some exceptional Core M devices that at times get very close to Broadwell-U in performance. To that end, we’ve included an i5-5200U data set with our results here today.

Big thanks to Brett for accumulating and analyzing all this data in this review.


  • jabber - Thursday, April 9, 2015 - link

    Be intrigued to know how these M chips stack up against my old CULV 1.3GHz SU7300. That benched as good as an old Pentium D 2.8GHz back in 2009.

    Still using it as my main work laptop. Mainly just for configuring routers or downloading stuff on site.
  • fuzzymath10 - Thursday, April 9, 2015 - link

    Probably not great. I'm using the Venue 11 Pro 7140, and the 5Y10 is usually snappier than my old 14" Dell Latitude with a T8300 (2.4GHz 45nm Core 2 Duo). Some of it might also be the awful GM965 IGP.
  • fokka - Thursday, April 9, 2015 - link

    thanks for the analysis, the new charts look very cool!

    it's interesting to see a direct comparison, how the different form factors and cooling solutions affect performance and it's good to have a "bog standard" 5200u thrown in for good measure, too.

    i'm still not a fan of having low power systems burdened with high resolution screens, especially if it screws with graphics benchmark scores as we see on some benchmarks, but maybe that's just me.

    it would be interesting to have the new macbook thrown in for comparison too, as far as that is/will be possible. i'd expect it to perform closer to the asus, albeit with more throttling due to the smaller chassis and higher turbo clocks. but maybe we will see more soon.

    the low temperatures on the lenovo are an interesting and valid design choice, but it would also be good to have an optional high performance mode allowing higher temps for when you simply sit at a desk playing games or such.

    i also heard that there are ux305 variants coming out with different core-m SKUs, so it might even be possible to further investigate the boundaries of asus' cooling solution.

    so all in all, while performance seems adequate for most day to day tasks, the only thing i'm still disappointed in regards to core m is efficiency/battery life. imho, this goes to show that core m is nothing else than a smaller, more constrained core i, with a lower TDP to allow for slimmer, fanless mobile designs.
    for me that means i'm still preferring a "full blown" 15w ULV, simply to keep performance on a slightly higher, more consistent and more future proof level, even if it adds a couple millimeters to the thickness and a couple grams to the weight.
  • Qwertilot - Thursday, April 9, 2015 - link

    The thing with battery life is that the U stuff has such low idle power states etc that there really isn't anything much to gain there. Especially as super thin means less battery.
  • melgross - Thursday, April 9, 2015 - link

    It has to be understood that these are essentially first generation products. Two years from now, they will make the ones tested today seem somewhat pokey. And two years after that...
  • TheJian - Friday, April 10, 2015 - link

    So based on the benchmarks of X1 here (58448 vs. Intel above 49619):
    Intel's 14nm can't catch NV's 20nm X1 on the gpu side, and it's about to go 14nm samsung process in time for xmas devices (should up clocks on gpu, and denver back in perhaps tweaked for cpu side). This isn't good for Intel. I suspect they'll continue to lose 4.1B a year, or give up the portion of the market they bought with that 4.1B loss for the last few years each ;)

    As gpu perf requirements amp up on mobile, I don't see Intel taking down ARM's side (qcom,samsung,nv, arm themselves etc). The cpu side will be good enough rev after rev on arm (A72's coming 1.9x A57's) and at some point have a full PC like box, massive PC style heatsink/fan, 16GB-32GB (google has to polish the 64bit OS more before there is a point to doing this), discrete gpu for top end, and pure amped up soc (with gpu, running ~20-80w or so like Intel's lineup) to cabbage up the low-end laptop/pc market. Intel profits will be going down soon if they don't buy NV to take out the fab/arm march that is coming up the chain slowly but surely. It would seem the only way to gun down arm at this point is to figure out a way to buy NV and produce a better ARM soc than anyone on arm's side can with the help of Intel's process (then their fabs would matter again, at least for a while if not forever far longer). Intel can't count on process to beat the enemy now. As they race to 10nm so is TSMC etc. Even if they always are one behind for a while as you can see above Intel isn't winning. Both i5's gpu and CoreM's gpu get smacked around even on 14nm vs. 20nm.

    The core pro-app market is a different story, but that's the last part that gets assaulted at the top. Games first, then come the apps once a PC like box is out and has massive numbers to be worth making full pc apps, then pro apps over there etc. Google is surely working as fast as they can on the software side (64bit OS polish, more features etc probably coming Nov with devices), but it seems the hardware will already be ready for the next move to a PC type box when google+AEP etc/advanced unreal 4/unity5 etc games get there. We'll see how far NV gets with the 40w console shortly (the first small salvo I guess with semi-good gaming ability). They also have an updated handheld with X1 coming too, and I hope they update it again with 14nm at xmas or just after. I'll wait for 13in or larger 14nm NV chip for my tablet needs (training vids, and a side of games out to tv). I might buy the handheld x1 update though. I have zero interest in vita/3ds stuff.

    One more point, if NV wins the suit against samsung, qcom etc, the rest will fold (or get sued too) and use NV IP which will make everyone have NV like scores on gpu. Again, Intel's best move is to buy NVDA. They'd be suing everyone then and could hold NV's IP back from all the rest or license it at higher fees etc, many ways to do damage owning NV. If win10 is really coming for ARM's side, and brings DX12 with it (kind of have to, to fight off Vulkan/android/iOS/linux/steamos jeez long list) then Intel is even in worse trouble. If they leave out DX 12 (really stupid with fully capable gpus over there, in NV's case maxwell!), I don't see the point for MS as they have to defend against android/vulkan and the rest of the gang I mentioned. MS must embrace ARM fully or Wintel is just headed down as the dominant player (OS share overall already dropping vs. arm's side totals). They'll both survive without the Wintel big stick to push around, but things are definitely changing quickly. Intel losing 4.1B just to sell something on mobile, doesn't lie. Mobile gaming is growing quickly, and it isn't running windows. etc...
  • serendip - Friday, April 10, 2015 - link

    Much as I don't bother with ARM vs Intel debates, I agree with the main points here. Intel can't keep throwing away billions trying to catch up in mobile, especially when desktop and laptop sales are falling. People regularly buy new phones and tablets, PCs not so much. I find that for typical daily computing like web surfing, doing email and handling simple documents, any decent tablet or phablet will do. My laptop has been relegated to a desktop while my Android phablet and cheap Atom Windows tablet travel with me.

    ARM vs Intel now doesn't matter as much as before as long as good apps are available on whatever platform you choose. With the rise of cloud storage and services, your underlying OS and processor architecture matter even less. Not a good time for Intel after being in the lead for so long.
  • serendip - Friday, April 10, 2015 - link

    That pro-app market will be Intel's last refuge, especially when x86 compatibility is needed. As for the rest, Atoms and ARM SOCs are getting good enough for general purpose computing. It'll be a race to the bottom then... I don't think Intel can maintain its current margins and structure in that environment.
  • Brett Howse - Friday, April 10, 2015 - link

    I don't mean to throw a wrench in your whole argument, but your initial numbers are incorrect. The X1 benchmark is showing the Ice Storm Unlimited *Graphics* score, and you are comparing it to the Intel *Overall* score. Easy mistake to make of course since you don't run these benchmarks all day like some.

    Anyway the Yoga 3 Pro (which you are quoting for Intel) achieved a 59405 Graphics score in that benchmark. The overall score combines the Graphics score with the Physics score (which was 31473 on the Yoga 3 Pro). I don't have the Tegra X1 Physics or Overall scores since that was a preview unit. The top ARM score on the Physics test was the NVIDIA Shield Tablet at 20437.

    The NVIDIA tablet is also the highest scoring ARM on 3DMark Ice Storm Unlimited Overall with 36688.

    But that's just one benchmark, and a very short one at that.
  • Xpl1c1t - Friday, April 10, 2015 - link

    IC performance is a function of temperature?!?!?! Blasphemy!
