Introduction

Update: 8/27/04 - The charts have all been revised. Thanks go out to everyone who posted corrections in the comments section or sent them in via email. In addition to the corrections, some further information and commentary has been added to the pages. For anyone who actually comes back to this article for reference information, enjoy the changes!

Foreword by Kristopher Kubicki:
From time to time, we stumble upon some truly gifted and patient people here at AnandTech. Some weeks ago, I wrote a CPU codename cheatsheet as just something to do in an airport terminal to kill time. Very soon after, an extremely diligent Jarred Walton showed me his rendition of the CPU family tree, which he had been keeping just for fun! Knowing I was bested, I offered Jarred a chance at writing a pilot for AT, and here it is. Please enjoy this second, extremely thorough CPU Cheatsheet 2.0.


But loud! what lurks in yonder chassis, hot?
A CPU, my programs it will run!
O Pentium, Pentium! wherefore art thou Pentium?
Obscure thy benchmarks and refuse thy name.
What's in a name? that which we call a chip
By any other name would run as fast.

My sincere apologies to Shakespeare, but that mangled version of Romeo and Juliet is an apt description of the world of computer processors. Once upon a time, we dealt with part numbers and megahertz. Larger numbers meant you had a faster computer. The 80286 was faster than the 8088 and 8086, the 80386 was faster still, and the 80486 was the king of performance. Life was simple, and life was good. But that is the distant past; welcome to the present.

Where once we had a relatively small number of processor parts to choose from, we are now inundated with product names, model numbers, code names, and features. Keeping track of what each one means is becoming a rather daunting task. Sure, you can always try Googling the information, but sometimes you'll get conflicting information, or unrelated web sites, or only small tidbits of what you're trying to find out. So, why not put together a clear, concise document that contains all of the relevant information? Easier said than done; however, that is exactly what is attempted in this article.

In order to keep things even remotely concise, the cutoff line has been arbitrarily set to the Pentium II and later Intel processors, and the Athlon and later AMD processors. Anything before that might be interesting for those looking at the history of processors, but for all practical purposes, CPUs that old are no longer worth using. Also absent will be figures for power draw and heat dissipation, mainly because I'm not overly concerned with those values, not to mention that AMD and Intel have very different ways of reporting this information. Besides, Intel and AMD design and test their CPUs with a variety of heatsinks, motherboards, and other components to ensure that everything runs properly, so if you use the proper components, you should be fine.

So what will be included? For this first installment, details on clock frequencies, bus speeds, cache sizes, transistor counts, code names, and a few other items have been compiled. The use of model numbers is also something people will likely have trouble keeping straight, so details on all Athlon XP and later AMD chips and all Pentium 4 and later Intel chips will follow. The code names and features will be presented first, with individual processor specifics listed on the later pages. As a whole, it should be a useful quick reference - or cheat sheet, if you prefer - for anyone trying to find details on a modern x86 processor.

With that said, on to the AMD processors. Why AMD first? Because someone has to be first, and AMD comes before Intel in the alphabet.

AMD Processors
Comments

  • JarredWalton - Wednesday, September 1, 2004 - link

    Jenand - thanks for the information. There are certainly some errors in the Itanium charts, but very few people seem to know much about the architecture, so I hadn't gotten any corrections until now. Most of the future IA64 chips are highly speculative in terms of features.

    Incidentally, it looks like Tukwila (and Dimona) will be 4-core designs, with motherboards supporting 4 CPUs - thus "16C" - or something like that. As for Fanwood, I really don't know much about it other than the name and some speculation that it *might* be the same as Madison9M. Or it might be a Dual Processor version of Madison, which is multi-processor.

    http://endian.net/details.asp?ItemNo=3835
    http://www.xbitlabs.com/news/cpu/display/200311101...

    At the very least, it's probably safe to say that Fanwood will have more than just a 9 MB cache configuration.
  • JarredWalton - Wednesday, September 1, 2004 - link

    If Prescott and Pentium M both use the exact same branch predictor, then yes, the Prescott would be more accurate than Banias. However, with the doubling of the cache size on Dothan, I can't imagine Intel would leave it with inferior branch prediction. So perhaps it goes something like this in terms of branch prediction accuracy:

    P6 cores
    Willamette/Northwood
    Banias
    Prescott
    Dothan

    Possibly with the last two on the same level.

    I'm still waiting to see if we can get pipeline stage information from Intel, but I have encountered several other sources online that refer to the Willamette/Northwood as having a 28 stage pipeline. Guess there's no use in beating a dead horse, though - either Intel will pass along the information and we can have a definite answer, or it will remain an unknown. Don't hold your breath on Intel, though. :)
  • IntelUser2000 - Wednesday, September 1, 2004 - link

    "Intel claims that the combination of the loop detector and indirect branch predictor gives Centrino a 20% increase in overall branch prediction accuracy, resulting in a 7% real performance increase."

    Sure, but Prescott also has Pentium M's branch predictor enhancements in addition to the enhancements made to Willamette, while Pentium M didn't get Willamette's enhancements, just the indirect branch predictor.

    Yes, it says a 20% increase, but from what? P-III? P4? Prescott?
  • jenand - Tuesday, August 31, 2004 - link

    There are a few errors and some missing information on the IPF sheet:
    1) Fanwood will get 4M(?) L3 or so, not 9M. You probably mixed it up with its bigger brother, Madison9M; both are to be released soon.

    2) Foxton and Pellston are code names for technologies used in Montecito, not CPU code names.

    3) Dimona and Tukwila are "pairs" (just like Madison/Deerfield, Madison9M/Fanwood, and Montecito/Millington); both will be made on the 45nm node and are scheduled for 2007. Montvale is probably a shrink of Montecito or Millington to the 65nm node and will probably be launched in 2006.

    4) Montecito and Millington will be made on 90nm and use the PAC-611 socket. The FSB of Montecito will be 100MHz for compatibility reasons, but it will also be introduced at a higher FSB (166MHz?) late in 2005.

    5) Fanwood will probably get 100MHz and 133MHz FSB, not 166MHz. Same goes for Millington.

    I hope this was helpful. Please note that I don't have any internal information; I only read the rumors.
  • JarredWalton - Tuesday, August 31, 2004 - link

    Heh... one last link. Hannibal discusses why the PM is able to have better branch prediction with a smaller BTB in his article about the PM. At the bottom of the following page is where he specifically discusses the improvements to the P4:

    http://castor.arstechnica.com/cpu/004/pentium-m/pe...

    And his summary: "Intel claims that the combination of the loop detector and indirect branch predictor gives Centrino a 20% increase in overall branch prediction accuracy, resulting in a 7% real performance increase. Of course, the usual caveats apply to these statistics, i.e. the increase in branch prediction accuracy and that increase's effect on real-world performance depends heavily on the type of code being run. Improved branch prediction gives the PM a leg up not only in terms of performance but in terms of power efficiency as well. Because of its improved branch prediction capabilities, the PM wastes less energy speculatively executing code that it will then have to throw away once it learns that it mispredicted a branch."

    He could be wrong, of course, but personally, I trust his research on CPUs more than that of a lot of other sites - after all, he does *all* architectures, not just x86. Hopefully, Intel will provide me (via Kristopher) with some direct answers. :)
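
    PS: If you're wondering what kind of branch pattern that loop detector actually captures, here's a trivial illustration - my own example, not Intel's or Hannibal's:

    #include <stdio.h>

    /* The inner loop's exit branch is taken 63 times, then not taken once,
       over and over. A plain two-bit counter mispredicts that final
       iteration on every pass through the outer loop; a loop detector that
       has learned the 64-iteration trip count predicts the whole pattern
       exactly. */
    int main(void)
    {
        long sum = 0;
        for (int outer = 0; outer < 100000; outer++) {
            for (int inner = 0; inner < 64; inner++)  /* fixed trip count */
                sum += inner;
        }
        printf("%ld\n", sum);  /* print so the loops aren't optimized away */
        return 0;
    }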
  • JarredWalton - Tuesday, August 31, 2004 - link

    In case that last post wasn't clear: I'm not saying the CPU detection is really that blatant, but if the CPU detection is required for accuracy, it *could* be that bad. Rumor, by the way, puts the Banias core at 14 or 15 stages, and Dothan *might* add one more stage.
  • JarredWalton - Tuesday, August 31, 2004 - link

    Regarding the Pentium M, I believe the difference in branch prediction isn't merely a matter of size. It has a new indirect branch predictor, as well as some other features. Basically, the P-M is designed for power efficiency first, so Intel made more elegant design decisions at times, whereas Northwood and Prescott take more of a brute-force approach.

    As for the differences between various AT articles, it's probably worth pointing out that this is the first article I've ever written for Anandtech, so don't be too surprised that it has some differences of opinion. Who's right? It's difficult to say.

    As for the program mentioned in that thread, I downloaded it and ran it on my Athlon 64. You know what the result was? 13.75 to 13.97 cycles. Since a branch miss doesn't actually necessitate a flush of the entire pipeline, that would mean it's estimating the length of the A64 pipeline as probably 15 or 16 stages - off by 33% or so. If it were off by that same amount on Prescott, that would put Prescott at [drumroll...] 23 stages.
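
    For reference, the core technique behind such a program is simple enough to sketch. This is my own rough illustration, not the actual program's code, and it assumes GCC or Clang on x86 for __rdtsc:

    #include <stdio.h>
    #include <stdlib.h>
    #include <x86intrin.h>  /* __rdtsc() */

    #define N 1000000

    static int data[N];
    static volatile unsigned sink;  /* keeps the compiler from deleting the loop */

    /* Time one pass over the data, branching on every element. */
    static unsigned long long time_pass(void)
    {
        unsigned total = 0;
        unsigned long long start = __rdtsc();
        for (int i = 0; i < N; i++)
            if (data[i])
                total += i;  /* unsigned wraparound is fine; we only need a side effect */
        unsigned long long stop = __rdtsc();
        sink = total;
        return stop - start;
    }

    int main(void)
    {
        /* Pass 1: always-taken branch, which any predictor handles. */
        for (int i = 0; i < N; i++) data[i] = 1;
        unsigned long long predictable = time_pass();

        /* Pass 2: random branch, mispredicted roughly half the time. */
        for (int i = 0; i < N; i++) data[i] = rand() & 1;
        unsigned long long random_run = time_pass();

        /* Attribute the extra cycles to the ~N/2 mispredictions. */
        printf("~%.2f cycles per mispredict\n",
               (double)(random_run - predictable) / (N / 2.0));
        return 0;
    }

    Divide the extra cycles by the number of mispredicted branches, make an assumption about how much of the pipeline gets refilled, and out pops a "stage count" - which is exactly why I take the results with a grain of salt.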

    I've passed on some questions for Intel to Kristopher Kubicki, so maybe we can get the real poop. Until then, it's still a case of "nobody knows for sure". Estimating pipeline lengths based on a program that reports accurate results on P6 and Northwood cores is at best a guess, I would say.

    Incidentally, I looked at the source code, and while I haven't really studied it extensively, there is CPU detection in there, so the mispredict penalty is calculated differently on P4, P6, and *other* architectures. Maybe it's okay, maybe it's not, but if accurate results depend on CPU detection, that sort of calls the whole thing into question. In the worst case, it amounts to something like this:

    if (cpu == P6)
        printf("12 stages.\n");
    else if (cpu == P4)
        printf("10 stages.\n");
    else if ....

    Hopefully, it *is* relatively accurate, but as I said, a ~14 cycle mispredict penalty on an Athlon 64 means either the result is incorrect, or AMD actually created a 15 stage pipeline and didn't tell anyone. :)
  • IntelUser2000 - Monday, August 30, 2004 - link

    Okay, I don't know any further than that. But one question: since the old P4 article from AnandTech states a 10 stage pipeline for the P6 core, and Prescott is claimed to have 31 stages while you claim otherwise, that shows there are individual errors on the SAME site. So whether Hannibal's site can be trusted is doubtful for the same reason, no? Also, take a look at this link: http://www.realworldtech.com/forums/index.cfm?acti...

    I asked a guy in the forums about it and that link is about the responses to it.

    One example of where Hannibal's site may be wrong is this: http://arstechnica.com/cpu/004/prescott-future/pre...

    At the end of that link it says: "There's actually another reason why the Pentium M won't benefit as much from hyperthreading. The Pentium M's branch predictor is superior to Prescott's, so the Pentium M is less likely to suffer from instruction-related pipeline stalls than the Prescott. This improved branch prediction, in combination with its shorter pipeline, means improved execution efficiency and less of a need for something like hyperthreading."

    Now, we know the Pentium M has a shorter pipeline than Prescott, but better branch prediction? I really think that's wrong, since one of the major branch prediction improvements in BOTH Prescott and Pentium M is improved indirect branch prediction. PLUS, I believe Prescott and Northwood have a bigger BTB, somewhere on the order of 8x, because the Pentium M used the indirect branch prediction improvements to save die size, and adding more buffer definitely doesn't coincide with that.
  • Fishie - Monday, August 30, 2004 - link

    This is a great summary of the processor cores. I would like to see the same thing done with video cards.
  • JarredWalton - Monday, August 30, 2004 - link

    #49 - Did you even read the links in post #44? Did you read post #44? Let's make it clear: the Willamette and Northwood cores were 20 stage pipelines coupled to an 8 stage prefetch/decode unit (which feeds into the trace cache). This much, we know for sure. The Prescott core appears to be 23 stages with the same (essentially) 8 stage prefetch/decode unit. So, you can call early P4 cores 20 stages, in which case Prescott is 23 stages, or you can call Prescott 31 stages, in which case early P4 cores were 28 stages.
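
    To lay those two counting conventions side by side:

                             main pipeline   + prefetch/decode   = total
    Willamette/Northwood          20                 8              28
    Prescott                      23                 8              31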

    If you look at the chart in the link to Anandtech, notice how the P4 pipeline is lacking in fetch and decode stages? Anyway, there's nothing that says the AT chart you linked from Aug 2000 is the DEFINITIVE chart. People do make errors, and Intel hasn't been super forthcoming about their pipelines. I'll give you a direct link to where Hannibal talks about the P6 and P4 pipelines - take it up with him if you must:

    http://arstechnica.com/cpu/004/pentium-1/pentium-1...

    Synopsis: In the AT picture, the P6 pipeline has 2 fetch and 2 decode stages, while Hannibal describes it as 3.5 BTB/Fetch stages and 2.5 Decode stages.

    http://arstechnica.com/cpu/01q2/p4andg4e/p4andg4e-...

    Here, the P4 and G4e architectures are compared, but if you read this page, it explains the trace cache and how it affects things. Specifically: "Only when there's an L1 cache miss does that top part of the front end kick in in order to fetch and decode instructions from the L2 cache. The decoding and translating steps that are necessitated by a trace cache miss add another eight pipeline stages onto the beginning of the P4's pipeline, so you can see that the trace cache saves quite a few cycles over the course of a program's execution."
    -----------------------
    Further reading:

    http://episteme.arstechnica.com/eve/ubb.x?a=tpc&am...

    The comments in the "Discuss" section of the article contain further elaboration by Hannibal on the Prescott: "The 31 stages came from the fact that if you include the trace cache in the pipeline (which Intel normally doesn't and I didn't here) then the P4's pipeline isn't 20 stages but 28 (at least I think that's the number). So if you add three extra stages to 28 you get 31 total stages."

    The problem is, Intel simply isn't coming out and directly stating what the facts are. It *could* be that Prescott is really 31 stages (as Intel has said) plus another 8 to 10 stages of fetch/decode logic, putting the "total" length at 39 to 41 stages. However, given the clockspeed scaling - rather, the lack thereof - it would not be surprising to have it "only" be 23 stages plus 8 fetch/decode stages. After all, the die shrink to 90 nm should have been able to push the Northwood core to at least 4 GHz, which seems to be what the Prescott is hitting as well.

    Unless you actually work for Intel and can provide a definitive answer? I, personally, would love some charts from Intel documenting all of the stages of both the initial NetBurst pipeline as well as the Prescott pipeline. (Maybe I should mention this to Anand...?)
