Intel's next-generation processor, 'Nehalem', will be made available running at 3.2GHz - if demos of the chip at the Intel Developer Forum this week are anything to go by. Officially, the chip giant won't comment on the clock frequencies at which it will release the initial desktop and server Nehalems - codenamed 'Bloomfield' and ' …
"Nehalem builds on the Core architecture with a native quad-core design; extra, shared L3 cache; and HyperThreading technology to allow each of the four cores to appear as two virtual cores to the host operating system.
While HT doesn't double the performance of a processor, it nonetheless should ensure that Nehalem outperforms a four-core Core 2 at the same clock speed."
Look, fire up your memory cells, OK? HT was useful if your code could meet certain specific requirements qua issuing a load/store or integer operation in concert with a float operation, IIRC. HT did not allow you to run two threads of arbitrary instructions simultaneously.
If you took the time to construct your software to take advantage of HT then HT was fine. Most software was not built that way and did not especially benefit from HT. What HT mostly did was give the punters 2 CPU graphs in Windows Task Manager.
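A toy sketch of the pairing constraint described above (the one-int-plus-one-FP issue rule and the op names are my own simplification, not a real P4 model): a mixed stream of instructions keeps both units busy each cycle, while a uniform stream leaves one unit idle.

```python
# Assumed model: the core can issue at most one integer/load-store op and one
# FP op per cycle. Count the cycles needed to issue a stream of op types.

def cycles_needed(ops):
    """Cycles to issue `ops`, pairing an 'int' op with an 'fp' op when possible."""
    ops = list(ops)
    cycles = 0
    i = 0
    while i < len(ops):
        cycles += 1
        if i + 1 < len(ops) and ops[i] != ops[i + 1]:
            i += 2  # different units: both ops issue this cycle
        else:
            i += 1  # same unit twice: the other unit sits idle
    return cycles

print(cycles_needed(["int", "fp"] * 4))  # 4 cycles for 8 mixed ops
print(cycles_needed(["int"] * 8))        # 8 cycles for 8 all-integer ops
```

Software structured to interleave op types got the pairing for free; software that wasn't saw no benefit - which is the point being made here.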
So... now Intel rolls out the HT hype again... and somehow there is a fresh (or maybe not so fresh) crop of hacks to suck it up like The Truth. I suppose if Intel started spouting off about a new CPU technology called MMX then you lot would stand up and salute?
Not too impressive, bud.
HT might actually work this time.
Actually, while HT on the P4 (at least the earlier ones) wasn't exactly impressive, SMT (simultaneous multithreading, which is what HT is) has been quite successful on properly designed CPUs, like the PowerPC, and was looking very promising on the Alpha 21464 (which was never produced, but is where Intel got the idea from after they took over the Alpha technology). Sun seems to get quite good performance out of their multithread-capable CPUs.
The main reason HT doesn't work well on the P4 is that it doesn't really have many spare execution units most of the time, and even if it does the dispatcher seems to be too small so it can't actually start executing new instructions even if it has room to execute them. The tiny L1 cache probably doesn't help either if you have to start sharing it between two threads.
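The "no spare execution units" point reduces to back-of-envelope arithmetic (all numbers below are invented for illustration): a second thread only helps up to the core's issue width.

```python
# Back-of-envelope model: each thread has a standalone IPC it could sustain
# alone, and the core caps the combined rate at its issue width.

def throughput(threads_ipc, issue_width):
    """Combined instructions/cycle when threads share one core's issue slots."""
    return min(sum(threads_ipc), issue_width)

# A wide core with slack gains a lot from a second thread...
print(throughput([1.5], 3), throughput([1.5, 1.5], 3))   # 1.5 -> 3.0
# ...a narrow, nearly saturated one (the P4 case above) gains almost nothing.
print(throughput([1.8], 2), throughput([1.8, 1.8], 2))   # 1.8 -> 2
```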
Perhaps Intel ought to come up with a new name, since they botched HT the first time around.
Perhaps they should call it "Cool Threads"?
I bet the commenter above is none other than an AMD fanboy.
To comment 1: I am getting the feeling you're just an AMD zealot. I bet if it was AMD who said "we're now introducing hyper-threading in our new CPUs" you would be all hula-hoop about it. I'm not an Intel fanboy, but since Core they've been working really well for me. In the Athlon 64 days AMD was king and I respect them for that; now, in the Core era, Intel is king and I respect them for that too. You should keep in mind that there are specialised apps out there made to take advantage of Intel's HT; the desktop platform isn't the only platform in the world, and it benefits from HT too. The threads might not be going down the pipeline truly simultaneously, but they are going down a half-step apart - "semi-simultaneously", if you like - as HT takes advantage of idle resources in the CPU. I believe this is an advantage, not a disadvantage. HT on the Pentium 4 might not have amounted to much because that was a poor architecture anyway. This time around things might be a bit different, as it will be implemented on an optimised Core-derived architecture. Small changes to the implementation of HT could yield huge benefits to the end user.
I would really like to see how Intel implements it; I'd laugh at them, yet be disappointed, if they actually muck it up on such an excellent architecture.
I bet Apple will take advantage of HT.
Note to you: quit being a fanboy, it's despicable.
After hearing the phonetic fiascos of "industry experts" mispronounce "Willamette" and "Tualatin", I'm looking forward to their pathetic attempts to get "Nehalem" out of their mouths.
These are all place names in Oregon, which most people can't pronounce properly either.
Oregon - OR uh gun
Willamette - Wil LAM met
Tualatin - Too ALL uh ton
Nehalem - Neh HAY lem
There you go, everyone.
And all fanboyisms aside, hyperthreading (like SMT, cache hits, MMX, etc.) will live and die based on the compiler. I don't know how you'd effectively write C++ (Java, C, C#) to manage the instruction ordering of your packaged software. So Intel: help out Microsoft, Apple and the FOSS community with those compiler options, why don't you? I bet if you smartened up those compilers so they exploited all the tricks you packed into your CPUs, and we recompiled the base OSs, drivers and MS Office, you'd speed up both my dual-Xeon Alienware with Vista (1st edition HT!) and my dual-core MacBook Pro considerably!
"HT was useful if your code could meet certain specific requirements qua issuing a load/store or integer operation in concert with an float operation, IIRC."
...or *anything* in concert with an operation that had blocked on a memory access. That was the point. (On *my* code it worked quite well. :)
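The stall-hiding point can be shown with a toy timeline (MISS_CYCLES and the op streams are invented; this is not a pipeline simulator): while one thread is blocked on a cache miss, the other thread's independent ops can fill the otherwise-dead cycles.

```python
# Toy in-order timeline: a 'load_miss' stalls the thread for MISS_CYCLES;
# under SMT the second thread's ops issue during those stall cycles.

MISS_CYCLES = 10

def run_single(a_ops):
    """Cycles for one thread run alone."""
    cycles = 0
    for op in a_ops:
        cycles += MISS_CYCLES if op == "load_miss" else 1
    return cycles

def run_smt(a_ops, b_ops):
    """Simplification: B's ops issue for free during A's stall cycles."""
    stall = sum(MISS_CYCLES - 1 for op in a_ops if op == "load_miss")
    overlapped = min(len(b_ops), stall)
    return run_single(a_ops) + len(b_ops) - overlapped

a = ["add", "load_miss", "add", "load_miss", "add"]
b = ["mul"] * 12
print(run_single(a) + len(b))   # run back-to-back: 23 + 12 = 35 cycles
print(run_smt(a, b))            # overlapped: 23 cycles
```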
Improved Memory Controller
To be honest, the main thing I'm looking forward to on this chip is the improved memory controller. Nehalem has an on-board memory controller which should massively outperform the old front-side bus technology. Many classes of application are limited by memory throughput - and shared memory controllers basically halved the number of usable threads on many recent Intel architectures. Let's hope Nehalem fixes this, as it is supposed to.
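The "halved the number of usable threads" claim is really just bandwidth arithmetic; a hedged sketch (the GB/s figure and thread counts are made up for illustration):

```python
# Per-thread bandwidth when threads share memory controllers.
# All numbers are invented; real throughput depends on access patterns.

def per_thread_bw(total_bw_gbs, threads, controllers=1):
    """GB/s available to each of `threads` threads across `controllers`."""
    return (total_bw_gbs * controllers) / threads

shared     = per_thread_bw(10, 8, controllers=1)  # 8 threads, one shared FSB
integrated = per_thread_bw(10, 8, controllers=2)  # same threads, 2 controllers
print(shared, integrated)  # 1.25 2.5 - doubling controllers doubles per-thread BW
```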
I think you are a little confused; what you are describing is superthreading, not hyperthreading. Superthreading requires that all instructions issued in a given cycle come from the same thread, whereas hyperthreading extends superthreading and allows the core to execute instructions from two different threads in the same cycle.
You can read up about it at http://arstechnica.com/articles/paedia/cpu/hyperthreading.ars/3 if you like.
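The distinction can be sketched as a toy issue model (issue width and op streams are invented): both designs have the same number of issue slots per cycle, but superthreading fills a cycle's slots from one thread only, while SMT may mix threads within a cycle.

```python
# Toy issue model: both designs have `width` issue slots per cycle.
# Superthreading fills a cycle from ONE thread; SMT may mix threads.

def issue_cycles(threads, width, mix_threads):
    """Cycles to drain all thread queues under the chosen policy."""
    queues = [list(t) for t in threads]
    n = 0
    while any(queues):
        n += 1
        if mix_threads:                      # SMT: fill slots from any thread
            slots = width
            for q in queues:
                take = min(slots, len(q))
                del q[:take]
                slots -= take
        else:                                # superthreading: one thread/cycle
            q = next(q for q in queues if q)
            del q[:width]
    return n

t1, t2 = ["a"] * 2, ["b"] * 2    # two threads, 2 ops each, on a 4-wide core
print(issue_cycles([t1, t2], 4, mix_threads=False))  # 2 (half the slots wasted)
print(issue_cycles([t1, t2], 4, mix_threads=True))   # 1 (slots shared)
```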
Phenom B3 can't even beat the old Q6600 - expect more from Nehalem.
We all know what current Intel processors can do and how they kick the butts of current AMD processors, including the new Phenoms, which - despite the TLB bug fix in the B3 revision, the supposedly better HyperTransport 3.0, AMD's integrated-memory-controller advantage, etc. - still can't keep up: the Phenom 9850 cannot beat the good old Q6600 Kentsfield in overall performance. And to remind you guys, the Q6600 is not even a Nehalem.
See "Phenom, part deux: Ars reviews AMD's B3 silicon revision".
Now that Nehalem is coming, expect greater performance, since Nehalems will come with a 3-channel integrated memory controller and a HyperTransport-like bus, the QuickPath Interconnect, etc.
more info on Nehalem in this Hardware Secrets article:
I have been a fan of the 486DX2, the cx686-p166+, the AMD 586 & 686, the Pentium II, the Pentium III, the Athlon, the XEON, the Athlon 64, the Opteron, the XEON in that order.
Lately I am enamoured of those 45nm 1333MHz FSB XEONs.
I guess that makes me a Cyrix fanboy.
Marketing - and HT in this case is marketing - should be identified as such, whether it is Intel or AMD foisting it on the public.
In about 15 years programming and sysadmin-ing I have never heard of 'superthreading'.
Traditional threaded software allows each thread to do whatever the programmer codes the thread to do: it assumes that any given thread runs on a general-purpose processor and that any given pair of threads has no relationship qua the instructions they send to their CPU, modulo synchronisation of data access. Which means if several threads all want to do floating-point ops, that is fine; if they want to do all integer ops, that is fine too; if they want to do some mixture, that is fine as well.
HT was (may be again?) a way to exploit that a P4+ CPU has more execution units than can strictly speaking be used simultaneously by single threaded code. Thus, if a programmer or compiler could conspire to issue instructions that could execute (1) on the FP unit and (2) the integer or load/store unit, then there was no reason not to send both to the processor.
But HT failed if the two above instructions had a dependency between them. HT actually had a long list of what the two instructions could not admit in terms of their relation to each other. So HT was not threading. HT was crafted code, in much the same manner as programming CUDA or Cell is: if your algorithm fit the model then you were in business... but the model was quite restrictive. One did not just pass the compiler a switch telling it to produce HT code and magically get two threads of execution; one really had to structure one's code to meet HT's requirements regardless of the compiler option.
Few people did it. The people who just passed the compiler the option got nothing in particular for their effort; the people who spent the time to understand the HT model and their own code got results directly related to whether their algorithms could be HT-ised. It was not a general fix; it was not even anything like an easy fix.
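A sketch of the kind of pairing check that model implied (the unit/register-set representation and the two rules here are invented for illustration; the real restriction list was much longer): two ops can only co-issue if they target different units and the second doesn't read what the first writes.

```python
# Each op is (unit, register_written, set_of_registers_read).
# Illustrative co-issue rules: different units, and no RAW dependency.

def can_pair(op1, op2):
    """True if op2 may issue in the same cycle as op1 under these toy rules."""
    unit1, writes1, _ = op1
    unit2, _, reads2 = op2
    return unit1 != unit2 and writes1 not in reads2

fp_mul  = ("fp",  "f1", {"f0"})
int_add = ("int", "r1", {"r0"})
fp_add  = ("fp",  "f2", {"f1"})   # reads fp_mul's result

print(can_pair(fp_mul, int_add))  # True: different units, independent
print(can_pair(fp_mul, fp_add))   # False: same unit, and RAW dependency on f1
```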
As for statements like "Superthreading requires all execution instructions be from the same thread", well my friend, that does not even make sense. Who is signing that cheque you get each quarter?