ARM has joined forces with AMD to promote what they call "heterogeneous computing". ARM Fellow and vice president of technology Jem Davies took the stage on Tuesday at AMD's Fusion Developer Summit, insisting that the two chip designers share a vision of the computing future. "It may seem strange that AMD have invited …
The new black is actually the old black.
"....allow chip designers to add special-purpose – "domain-specific" – cores and circuitry..."
So the way of the future is for CPUs to have additional logic and instructions dedicated to specific tasks? If it looks like a CISC processor and quacks like a CISC processor........
Close, but no cigar
If it looks like a co-processor and quacks like a co-processor, it is a co-processor.
8087 and 8089.
It is a pity that the only one who tried to do anything moderately original with the Intel 80x87 architecture was ITT, with their 3x3 vector-matrix version of the 80x87. However, if you asked the software developers old enough to remember co-processors and 80x87 assembler, 100% would say "Vector ops? On 80x87? You gotta be kidding". 99.999% would do so because they have never heard of it. The remaining 0.001% because they would rather not say what kind of code they wrote for it.
Yes, it's nothing new, but it's not about CISC either
If you use a mobile phone then you're already using a heterogeneous computing system. That's what a SoC (System-on-Chip) is: a bunch of special-purpose processors working together. The OMAP4 in RIM's PlayBook, for example, has something like 16 processor cores, most of them specialised (like 1080p video encode/decode). You've got (at least) two big issues when working with these architectures - programming them and getting their processors to work together in harmony; standards can help out there.
No ... you've missed the point I believe.
CISC throws hardware at the "general purpose" problem to make the one processing core as domain non-specific as possible. It provides passable performance for many domains, for a hit on power consumption. Real domain-specific hardware (DSH) is designed to do one thing and do one thing very efficiently. It will run faster, and use less power than the equivalent general purpose processor for that one set of tasks (and probably be unusable for anything else).
From my experience the main advantage of domain-specific hardware is mostly about the ability to turn things off when they are not being used. Geometry is shrinking faster than power consumption per gate, so you can put far more logic on a chip for the same $ cost, but you can't turn it all on at once without killing your power budget. The general concept is to use this "dark silicon" area to contain the DSH hardware. When you have work for it to do you power it up, pass over a job for it to do, run it as fast as possible, turn the silicon off again.
With suitable programming models to allow this you will get far better performance and power consumption than you could ever have gotten off a general purpose CPU. So not very like CISC at all ...
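The power-up / offload / power-down pattern described above can be sketched as a toy model. To be clear, the class, the numbers, and the cost model below are all illustrative inventions, not any real driver API:

```python
# Toy model of power-gated domain-specific hardware (DSH).
# All names and numbers are hypothetical, chosen only to illustrate
# the "race to completion, then go dark" pattern.

class Accelerator:
    """A domain-specific block that sits powered off ("dark") by default."""

    def __init__(self, name, active_power_w, speedup):
        self.name = name
        self.active_power_w = active_power_w  # power draw while running
        self.speedup = speedup                # vs. a general-purpose CPU
        self.powered = False

    def run_job(self, cpu_seconds):
        """Power up, run the job as fast as possible, power straight back down.

        cpu_seconds is how long the same job would take on the CPU.
        Returns (elapsed_seconds, energy_joules) for the accelerator.
        """
        self.powered = True
        elapsed = cpu_seconds / self.speedup
        energy = elapsed * self.active_power_w
        self.powered = False  # back to dark silicon
        return elapsed, energy


# A 1080p-decode style block: 10x faster than the CPU at its one task.
dec = Accelerator("video-decode", active_power_w=0.5, speedup=10.0)
elapsed, energy = dec.run_job(cpu_seconds=2.0)

# The same job kept on a hypothetical 2 W general-purpose core:
cpu_energy = 2.0 * 2.0

print(elapsed, energy, cpu_energy)  # DSH finishes sooner and uses less energy
```

With these made-up figures the accelerator finishes in 0.2 s for 0.1 J against the CPU's 4 J, which is the whole argument: the specialised block wins on both time and power for its one task, and costs nothing while idle.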
Wasn't the Amiga built like this?
A bunch of specialised chips operating in perfect harmony to provide the "CPU"?
Yes, Agnus and Denise (and later the Fatter and Super Fat Agnus, iirc)
Blitter (block image transferrer) and the Copper co-processor (awesome for creating what we called "copper bars")
all in addition, of course, to the MC68000 or MC68020
(How sad was I as a child!)
Re: Wasn't the Amiga...
No, the Amiga was a bunch of specialised chips separated by a slow bus with a segmented memory architecture. No different in a lot of respects to any modern PC.
It's a design that works, obviously. I wouldn't call it optimal though.
"segmented memory architecture"
Maybe yours was - I don't think anybody else's was.
segmented memory on a 68000 ?!
Yes, it was segmented
The memory in the Amiga was segmented - not in the sense of the x86 family's "built-in bank switching", but in the sense that not all memory in an Amiga was equal. There was "chip" RAM that all the various coprocessors could access, and non-chip RAM that only the CPU could access. Also, due to the constant sharing of chip RAM between all the parts, CPU code located in chip RAM ran slower than CPU code in non-chip RAM (due to the wait states caused by sharing).
One Way to Start
One way to start promoting code reuse would be to reduce the number of architectures. ARM? x86? PowerPC? SPARC? Itanium?
Why? They already got it right once, long ago, and now chip technology has caught up. It's not as if we're living in a world of 8-bit or even 16-bit microprocessors any more.
Admittedly, with the existing pool of Windows software, it's probably overoptimistic to think that the x86 architecture could just go away. But if you're not going x86, the other choice should be obvious - the other standard with a huge pool of software. z/Architecture, from IBM - today's 64-bit evolution of the 360 instruction set.
My memory is fading, but...
...doesn't the 360 lack a stack? I seem to recall instructions like BAL, and BALR (Branch And Link, Branch And Link Register). Fast, yes, but doesn't exactly make life easy.
re: One Way to Start
"Admittedly, with the existing pool of Windows software, it's probably overoptimistic to think that the x86 architecture could just go away."
And the existing, even larger, pool of ARM devices is hardly going to go away either!
*Two* arch's to rule them all...
I've long said that IMO there will eventually be two CPU architectures in use: x86 and ARM. x86 for things that are plugged in to mains power (or big batteries); ARM for things on battery (or trivial things that are on 24/7 like the microwave and thermostats). Each will be able to interpret the other well enough for occasional use; either the cloud, real ANDFs (or Java/Flash et al) or Mac-style dual-format executables for very popular apps will deal with apps. Of course they'll have different performance, but only as you'd expect from the mains/battery dichotomy, nothing from the architecture.
"drinks with little parasols in" - go Jem!
If only there was some way to make some products uneconomical!
Oh wait. Market forces already did a pretty good job of killing off broad swathes of architectures over the past 20-30 years. If any one architecture was perfect for all workloads and was cheap to license by anyone it would have already succeeded, right? Well, turns out x86/AMD64 are not the be-all and end-all.
I'll point out that you've missed various colours and flavours of MIPS, incidentally. Plenty more microprocessor architectures out there that aren't dead yet, too.
Nor do many other CPUs
Basic ARM doesn't have a stack either. This doesn't really make life hard.
Exactly - emulating stack is trivial
In my high school days they used to torture us by forcing us to do all our work in BASIC, because that was all you could do with the base-spec Apple II clones in the classroom. No stack. So no recursion. So the teachers got quite a bit of enjoyment from giving us problems which could be trivially solved with it (combinatorics, travelling salesman, etc).
So I sat down one afternoon, wrote stack emulation as two one-liner procedures, shared it with all of my mates, and we all started writing what the teachers described as unintelligible gibberish - Apple II BASIC with an emulated stack. It worked quite well too. Too well, in fact - we had teachers prohibiting its use as it was spoiling their "fun".
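The trick described above - faking recursion with an explicit stack when the language gives you none - still works anywhere. A minimal Python sketch of the idea (the permutations example and the list-as-stack idiom are mine, not the original BASIC code):

```python
# Emulating recursion with an explicit stack, as you'd have to in a
# BASIC with no call stack: the "stack" is just an array plus a pointer.

def permutations(items):
    """Generate all permutations iteratively via an explicit stack."""
    results = []
    # Each "stack frame" is (chosen_so_far, remaining); pushing and
    # popping frames replaces the call/return the language lacks.
    stack = [([], list(items))]
    while stack:
        chosen, remaining = stack.pop()
        if not remaining:
            results.append(chosen)
            continue
        for i in range(len(remaining)):
            stack.append((chosen + [remaining[i]],
                          remaining[:i] + remaining[i + 1:]))
    return results

print(len(permutations([1, 2, 3])))  # 6 permutations, no recursion used
```

The combinatorics problems the teachers set become one-liners over this: any recursive enumeration can be rewritten as a loop that pushes pending subproblems and pops them until the stack is empty.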
"the facts will prove or disprove whether you're right"
Nonsense. Look at how well the x86 has done, probably the worst architecture that had a chance of surviving.
The hairy elephant in the room
"I don't talk about OpenCL because it's a given. It's an obvious. The world of open standards will win in the end."
I would say that the reason he doesn't want to talk about OpenCL is because no manufacturer has full support for OpenCL on their GPUs or APUs at the moment. Not AMD, not nVidia. Which is why, even though nVidia cards do kind-of support OpenCL, pretty much everyone uses CUDA instead, as it's much more feature-complete and you can actually use it in real-world applications.
Not to say that OpenCL is not the future, just that it's not the present.
ARM + AMD = ARMD, and ARM-x86 hybrid chips?
Is this announcement testing the water to see how the market and industry react? Perhaps before a merger is considered?
Also, given that AMD has an x86 product line, what about ARM-x86 cores where the ARM is used for longer battery life while the x86 is used to run legacy and/or more demanding 64-bit apps? That might suit Windows 8 for ARM, allowing legacy x86 apps to run at full or near-full speed alongside native ARM ones, rather than via a software-emulation compatibility mode.
It also aligns well with Apple's apparent strategy of making x86-based Mac OS X more like ARM-based iOS in the Lion release; perhaps a hybrid iOS-Mac OS X device is an idea being considered?
Re: ARM + AMD = ARMD, and ARM-x86 hybrid chips?
"Is this announcement testing the water to see how the market and industry react? Perhaps before a merger is considered?"
That's an interesting rumour you have there!
I think it's more likely that AMD want to see their GPU-related stuff appearing in ARM-based kit. Currently, you've got each ARM vendor stuffing their own things on the silicon, and maybe there's some ARM technology for multimedia-related stuff, and one sees the highly proprietary PowerVR stuff appearing on various SoCs, but AMD could potentially get a bit of exposure at the very low end of the power consumption scale by teaming up with ARM licensees.
Merger - no, ARM getting a hypertransport license... Hmm..
More like ARM getting HT3 and socket licenses so an ARM-based specialised core can be used as a co-processor.
New name........ ARMAMD VAN CHIPPEN
Lord of the Chips
One chip to rule them all?
And in the darkness bind them?
Anyone remember TAOS?