ARM vet: The CPU's future is threatened

ARM's employee number 16 has witnessed a steady stream of technological advances since he joined that chip-design company in 1991, but he now sees major turbulence on the horizon. "I don't think the future is going to be quite like the past," Simon Segars, EVP and head of ARM's Physical IP Division, told his keynote audience …

COMMENTS

This topic is closed for new posts.
  1. Havin_it
    Trollface

    Battery solution:

    1x Dairy Milk Bar

    1x fishing rod

    1x treadmill with dynamo

    1x 30-something single woman

    Not all that portable admittedly, but I've got a patent pending on a nationwide network of charging stations :)

  2. Anonymous Coward
    Anonymous Coward

    Moore's law??

    I'm not sure Moore's law was or is really applicable to mobile processor performance. I'd like to benchmark an Axim X30 with Intel's ARMv5 XScale running at 624MHz from seven years ago against the latest smartphone around.

    If you look at the desktop world and how vast an advancement in architecture and clock speed we've seen since 2004, well, there's no comparison really.

    As an aside, is Intel not kicking itself for selling XScale?

    1. Steve Knox
      Boffin

      Moore's Law:

      "The number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every two years."

      Note there's nothing specifically performance-related there. Yes, in the desktop world those advances were often used to increase performance.

      But in the mobile sector they've been used as much or more for miniaturization, power efficiency and added functionality, which is why today's smartphones are smaller, and run longer, than an Axim X30, even though they have to give up some of their battery life and space to the relative hog of the 3G/3G+ radio (not to mention the Wi-Fi, Bluetooth, GPS, accelerometer, etc.).
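
      Just to put the doubling claim into numbers (a back-of-the-envelope sketch in C; the ~2,300-transistor count for the 4004 is the commonly quoted figure, everything else follows from the two-year doubling quoted above):

        #include <stdio.h>

        /* Illustrative only: project Moore's-Law doubling forward from the
         * ~2,300-transistor Intel 4004 of 1971, doubling every two years. */
        int main(void)
        {
            double transistors = 2300.0;            /* 4004, 1971 */
            for (int year = 1971; year <= 2011; year += 2) {
                printf("%d: ~%.0f transistors\n", year, transistors);
                transistors *= 2.0;
            }
            return 0;
        }

      Run as written it lands at roughly 2.4 billion transistors for 2011, which is in the right ballpark for the biggest chips of that year, so the two-year figure holds up surprisingly well over four decades.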

      Re your aside, I certainly hope so.

    2. Anonymous Coward
      Devil

      The mistake with invoking Moore's Law, if we assume it really works for a moment,

      is for an employee of any one company to assume it applies to their company...

  3. heyrick Silver badge

    Dedicated hardware best suited?

    Isn't this rather obvious? The microprocessor exemplifies the concept of jack of all trades, master of none. Frankly the only reason my netbook is capable of showing me animé is because there is enough grunt power to decode the video data in real time. But then my PVR with a very slow ARM processor can do much the same as it pushes the difficult stuff to the on-chip DSP.

    Likewise the older generation of MP3 players were essentially a Z80 core hooked to a small DSP, all capable of extracting ten hours out of a single AAA cell.

    Go back even further, the Psion 3a was practically built upon this concept. Bloody great ASIC held an x86 clone (V30) and sound, display, interfacing, etc. Things were only powered up as they were actually required. In this way, a handheld device not unlike an original XT in spec could run for ages on a pair of double-As.

    As the guy said, batteries are crap. Anybody who uses their smartphone like it's their new best friend will know that daily recharging is the norm, plus a car charger if using sat-nav. So with this in mind, it makes sense to have the main processor "capable" without being stunning, and push the complicated stuff off to dedicated hardware better suited to the task, which can be turned off when not needed. Well, at least until we can run our shiny goodness on chocolatey goodness!

    1. Annihilator
      Boffin

      re: dedicated hardware

      The problem with dedicated hardware for task X is where to draw the line. Having purpose-built chips for every task soon stacks up to a lot of chips in one device, and ramps up the costs too. Not to mention the design cost of a hardware solution, plus the inability to upgrade it later.

      Besides, the same problem still applies - he's comparing the cost of a 2G modem with a 4G modem as an example. Even specialised hardware is still going to be more energy intensive - the scale still exists.

    2. sam 16

      But an ARM is specialized...

      ARM chips are RISK processors, specialized towards flow control operations and simple arithmetic. They use pipelining to push a lot more operations through per cycle than the CISC chip you get in your desktop.

      But they are rubbish at the kind of high-throughput mathematics that is required for video decode, and even wireless networking these days. CISC chips have massive instruction sets, giving access to a combination of DSP hardware and optimised microcode for vector math. It's not as extreme as a vector engine, but it's there.

      For me, this proposal that packages should contain a range of semi-specialized hardware to carry out different types of generic computing task is a migration back to CISC, a surrender of the RISK concept that has dominated mobile devices.

      Backwards compatibility has crippled desktop CISC, and I hope that the new specialist CISCs will be a bit more pragmatic rather than being shaped by the migration from previous hardware. A nice way to achieve this would be for SoC vendors to offer a huge base of C++ libraries, with the proviso that the instruction set was prone to change between devices and using it directly was asking for trouble...

      1. Monty Burns
        Facepalm

        @SAM 16

        Er.... do you mean RISC and not "RISK"?

        RISC = Reduced Instruction Set Computing/ers

        CISC = Complex Instruction Set Computing/ers

      2. Ru
        FAIL

        RISC vs CISC? Really?

        It isn't the nineties anymore. Get with the times, granddad. Whilst you're there, learn about the difference between a DSP and a general purpose microprocessor. Compare and contrast with the sort of highly parallel simple processing units used in modern graphics cards. The semiconductor world is not a simple place, especially when it comes to mobile device SOC cores such as those designed by ARM.

        When you're done, I invite you to take a look at the 'crippled' processors of today, and have a quick think about how monstrously powerful they are. A new non-backwards compatible instruction set would make everything sweetness and light, you say? Hello x64! You're not suggesting anything new, or even useful.

  4. proto-robbie
    Pirate

    y.a.f.t.

    Is there actually a continuing market for slightly faster kit at higher cost in the current climate? IMHO most kit has been running fast enough for the last couple of years, despite constant efforts to force us to buy more CPU to support the same functionality.

    Extreme gamers can link a few GPUs together, data warehousers can add terabytes of SD disk, and the rest of us can upgrade to Linux or Windows XP running Libre Office ;-)

    This article suggests it's time for software to catch up with the hardware.

    1. Refugee from Windows
      Linux

      Back to the 70's then?

      Maybe the way to make these devices run faster is to tighten up the code. After all, we've been getting rather a lot of bloat while Moore's Law has applied. In the 70s, when processor time cost money, shaving time off your code had a distinct advantage, and they didn't have cut'n'paste coders in that era.

      I'd predict a trimming back of all those functions that don't get used unless it's the 5th Tuesday in February, to make what does get used rather a lot quicker.

      Tux - possibly the home of better software.

    2. Kristian Walsh Silver badge

      Warning: Implicit car analogy

      There's not much of a market for faster kit, but there is certainly a market for kit that is as fast, but consumes less power in the process. Moore's law benefits this too: smaller features require less power to switch on or off. This is why you can run them faster, but it also means that, speed-for-speed, the smaller part consumes less power than a larger one.

      From mobile phones to data centres, power consumption is now the number one enemy. It's only really the desktop market that gets a free ride on this; but even there, large corporate buyers are waking up to just how much of their annual electricity bill is spent generating 3D images of pipes through the night, and it's having an effect on buying decisions.

    3. Arctic fox
      Thumb Up

      @proto-robbie Good points, I have to agree.

      If in addition they concentrated on battery life (we after all are talking about *mobile* computing here are we not?) instead of "my dick is bigger than your dick" "improvements" we would all be better off.

  5. Mikel
    Go

    Dimension Z

    He spins a likely tale. But we are beginning to explore the potential of the vertical dimension. Moore's law is safe enough for a good while.

    I like the heterogeneous cores idea, as I said here long ago.

    1. Asgard
      Happy

      3D Chips

      The future must surely be 3D, where layers of a chip are sandwiched together. That also allows cheaper production, as faulty layers can be checked for and rejected before the final sandwich is assembled. So it becomes a question of finding ways to increase mass production of layers (which can be improved), not a question of reducing the geometries of layers (which cannot continue).

      That would even work with older, larger geometries, so older fabs would still be very useful. (Plus older fabs are still very useful for a lot of smaller, more dedicated chips, which is a very big market.)
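
      The yield argument is easy to sketch (illustrative numbers only; the 90% per-layer yield below is an assumption, not anything from the article):

        #include <stdio.h>
        #include <math.h>

        /* Illustrative yield arithmetic: if each layer works with probability y,
         * a blindly assembled n-layer stack only works if every layer does (y^n),
         * whereas testing layers first and stacking only known-good ones keeps
         * the yield of the assembled stack near y (ignoring assembly losses). */
        int main(void)
        {
            const double y = 0.90;    /* assumed per-layer yield */
            for (int n = 1; n <= 8; n *= 2)
                printf("%d layers: blind stack %4.1f%%, known-good layers ~%4.1f%%\n",
                       n, 100.0 * pow(y, n), 100.0 * y);
            return 0;
        }

      With those made-up numbers an eight-high blind stack yields about 43%, while pre-tested layers stay near 90%, which is the whole point of checking before assembly.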

      @"since the Intel 4004 appeared 40 years ago this November"

      I hope that historic anniversary is recognised by the world's press. Technology has after all totally changed the world in the past 40 years and we all owe a lot to that historically important work.

      1. lpopman
        Go

        titular thingy

        Layered chips are already here. Apple use it and have a patent on their particular process, and flash memory cards have an ARM processor for wear levelling layered with the flash die.

  6. Neil 7
    Go

    Battery Technology

    It's interesting to think that if there was any *major* advancement in battery technology in the near future (and we hear about new "breakthroughs" every other month), ARM could be wiped out in the mobile space as there would be no need (or at least, far less of a need) for their power efficient hardware...

    A real-world 10-fold increase in battery storage density (naturally involving nanotechnology of some kind) is probably the kind of breakthrough that Intel dreams of, and ARM has nightmares about.

    Until then, go ARM!

    1. annodomini2
      FAIL

      Power efficiency is still critical

      1. Portable computing devices (smartphones, tablets, laptops, etc.): even with a 10-fold increase in battery capacity, this allows for either:

      a) An increase in performance with no degradation in battery life.

      b) A huge increase in battery life (becoming increasingly important for many users).

      c) A balance of the two.

      If you can halve the power consumption of the chip you can use a smaller battery for the same job, making the device cheaper and lighter.

      2. Data and processing centres are among the largest consumers of CPUs, and their operating costs are mainly power consumption. Batteries are not going to affect this, and many studies have recently shown that a 10% reduction in CPU and memory power consumption has huge implications for operating costs, because much less heat is generated and cooling requirements fall accordingly.

      You may see as much as a 30-40% operating cost reduction for an equivalent setup.
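
      For what it's worth, here's roughly how a chip-level saving compounds once cooling and distribution are included (a sketch with made-up numbers; the PUE of 2.0 and the 1MW IT load are assumptions, only the 10% figure comes from the comment above):

        #include <stdio.h>

        /* Sketch: every watt trimmed from the IT load also trims the cooling
         * and distribution overhead needed to support it.  PUE here means
         * total facility power divided by IT power; 2.0 is an assumed value. */
        int main(void)
        {
            const double it_load_kw = 1000.0;   /* assumed IT load */
            const double pue        = 2.0;      /* assumed facility overhead ratio */
            const double saving     = 0.10;     /* 10% cut in CPU+memory power */

            double it_saved_kw       = it_load_kw * saving;
            double facility_saved_kw = it_saved_kw * pue;   /* saving scales with PUE */

            printf("IT power saved: %.0f kW -> facility power saved: ~%.0f kW\n",
                   it_saved_kw, facility_saved_kw);
            return 0;
        }

      On those assumed numbers the 100kW trimmed at the chips is worth about 200kW at the meter; whether that ever adds up to the 30-40% figure depends entirely on the facility.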

    2. SadBloke
      Flame

      If you can't take the heat

      It might be nice to have a mega battery in your pocket, but how do you use a phone with a 50W processor without resorting to oven gloves?

      Power is an issue for desktop and server machines as well, mainly because of the cooling problem.

    3. Anonymous Coward
      Anonymous Coward

      title

      I'd say ARM would be quite safe in the event of a breakthrough in battery technology, because they'd extend that life too. Which would you rather have: an Intel-based device that needs charging every 2 days, or an ARM-based one that needs recharging every 5 days? I know which one I'd pick ;)

      (Note: Numbers plucked out of rectal sphinctor as a means of giving an example, any resemblence to reality is purely coincedental. And yes, my spelling sucks :P )

    4. mark l 2 Silver badge

      new battery tech

      Even if a major breakthrough came out tomorrow that made batteries 10x more efficient, that would make my current mobile with an ARM chip need charging about every 10 days. With a current Intel mobile CPU I'd probably only get half of that run time and no other real benefit (other than being able to run full-fat Windows on my phone, which I would have no interest in doing).

      Also, most phone manufacturers have no interest in putting an Intel CPU in their phones, as they can get ARM chips from several sources much more cheaply than Intel CPUs. Look at the recent article about how much Intel wants to charge for Ultrabook CPUs to see how expensive Intel is compared to ARM.

      It looks like the ARM vet is saying we need to go back to the design days of the Amiga 500, which had a relatively low-powered CPU but lots of custom chips for handling other tasks, and which, coupled with well-written software, made it seem much faster than PCs costing the same price. Maybe if Commodore's management hadn't been so useless, running the company into the ground, they might have been a major player in today's mobile scene.

    5. Zack Mollusc
      Meh

      Meh!

      If battery storage density increased tenfold, the smartphone manufacturers would just fit smaller batteries of the same capacity.

      My first phone had six AA NiCads which took up 40% of the volume of the case. My current phone is powered by what appears to be an After-Eight mint which takes up less than 10% of the case.

      1. Trygve Henriksen

        My first wasn't THAT big, but...

        Yeah, that's the way it goes.

        Remember back when the phones had keypads where you could press keys with your finger instead of a toothpick?

        These days, I have to pat my pockets to find where my phone is. A few years ago, you could feel at least a little bit of weight.

      2. Magnus_Pym
        Flame

        Not only the heat

        Being able to store ten times the electrical energy would also mean having ten times the chemical energy in the battery when fully charged. Flames indeed if you trip over with one of those in your pocket.

    6. Goat Jam

      Sorry to burst your bubble

      but batteries could be ten times better and it would still matter that your CPU burns as few watts as possible.

      It's like when guns and stuff get lighter: it doesn't follow that grunts get to lug lighter backpacks around.

    7. Naughtyhorse

      naturally involving nanotechnology....

      I'd hazard a guess that strong hallucinogenics would be a more productive route!

  7. Arctic fox
    Happy

    What on earth has happened here?

    A thoughtful, intelligent, fascinating and well written article from which I learnt rather a lot. Without any jokes, satire or the faintest smell of clickbait in it. Have I logged on to the wrong site?

    1. jai

      sometimes

      sometimes, you need a break from the trolling and the Playmobil and the fanboy wars, and it's nice to chill out with a cuppa tea, a digestive biscuit and a nice, geeky tech article on a Saturday morning.

      i'm sure normal service will resume soon though :)

      1. Arctic fox

        @jai Re "Sometimes". I will admit that that nirvana you describe (although the.......

        ........beverage in my case is coffee) is extraordinarily attractive. That was in fact my reaction to the article. It was indeed interesting and instructive, and I felt refreshed after having read it instead of totally wound up and ready to bite someone's head off. Obviously El Reg made a big mistake and it won't happen again. No doubt somebody will be disciplined so that this type of error is not repeated.

    2. Ralthor
      Joke

      There is always the possibility....

      ... that your clickbait sensor is malfunctioning. :)

    3. Khephren

      agreed

      That's what I thought. For a minute I wondered if I'd opened Ars Technica by mistake ;)

      Great article, and Simon Segars knows his stuff.

      1. introiboad
        Go

        Re: Ars Technica

        Thumbs up for mentioning Ars.

        Also, the CPU experts at Ars have been telling their readership for ages that ARM does not have any magic dust they can sprinkle over their chip designs to make them more efficient. The only reason they consume less power is that they are _way_ less powerful than x86 chips. The day they start approaching them in computing power, they will consume pretty much the same. Also, to the CISC vs. RISC debate people: Ars again mentions that instruction decoding into micro-ops today comprises a very, very small percentage of the CPU time, and therefore the instruction set is mostly irrelevant these days when talking of powerful chippery.

        Again, just quoting Jon Stokes and the other Ars experts, but they do seem to know their stuff.

  8. Neil Barnes Silver badge
    Thumb Up

    Chocolate? Chips?

    What's not to like?

  9. annodomini2
    Boffin

    He missed something...

    FPGAs

    He talks about a move to dedicated hardware; if designers can achieve this balancing act between CPUs and GPUs as he suggests, moving to a device like an FPGA may also be possible.

    For dedicated tasks that overload CPUs and GPUs, ASICs can't be beaten; not even FPGAs can match them for size and power usage.

    However, ASICs are by their very nature inflexible. He mentions putting encryption hardware on the chip, yet how many times a week do we see on this site that "some team, somewhere has cracked X algorithm or technique"?

    An FPGA would give the majority of the performance of an ASIC, but allow an update to a new algorithm or technique.

    As stated in the article, one of the major costs is verification, and once the silicon is set it can't be reworked; in an FPGA it can be, and certain issues can be fixed with firmware updates.

    Also, not all features are used at once, so the device can be reconfigured on the fly to serve the purpose in use at a specific time, reducing silicon area and therefore power consumption, without compromising functionality.

    1. introiboad

      re FPGAs

      FPGAs are not power efficient, as far as I know. In my company we develop ICs and do all the software development on FPGAs, but then you need the real chips to test all the power modes and the current consumption.

      1. Anonymous Coward
        Anonymous Coward

        Don't know about power consumption ..

        .. I assume the main problem for FPGAs is the cost

        1. Anonymous Coward
          Thumb Up

          "the main problem for FPGAs is the cost"

          When compared with pure custom silicon, yes and no. Pure custom silicon in a high-volume product is in principle cheaper per unit than an FPGA doing the same job, but the custom silicon has a much bigger up-front cost (and in the event of a hiccup, reworking it costs a lot more than changing the FPGA program). For low volumes the FPGA wins. In the middle, there's a discussion to be had.
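
          The crossover is easy to put numbers on (every figure below is invented purely for illustration, none of it comes from the comment above):

            #include <stdio.h>

            /* Illustrative NRE-vs-unit-cost crossover between an FPGA and an ASIC.
             * All of these numbers are made up for the sake of the example. */
            int main(void)
            {
                const double asic_nre  = 2000000.0;  /* assumed mask + verification cost */
                const double asic_unit = 5.0;        /* assumed per-chip cost */
                const double fpga_nre  = 50000.0;    /* assumed tooling cost */
                const double fpga_unit = 25.0;       /* assumed per-chip cost */

                /* Total cost lines cross where the extra NRE of the ASIC equals
                 * the accumulated per-unit saving. */
                double crossover = (asic_nre - fpga_nre) / (fpga_unit - asic_unit);
                printf("The ASIC only becomes cheaper above ~%.0f units\n", crossover);
                return 0;
            }

          With those made-up figures the break-even point is just under 100,000 units, which is why the honest answer really is "it depends on volume".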

          You can get ARM-capable FPGAs too.

  10. Russell Howe
    Flame

    Power problem

    "We want bigger batteries so we can burn more power"

    Power => heat. How hot do you want your phone to be? I wouldn't fancy holding a running POWER7 CPU in my hand, even if it had a dirty great heatsink and fan.

    Dedicated hardware (yes, this is quite expensive) and highly optimised, clever software (yes, this is also expensive and difficult to get right). Good luck with that.

    1. Anonymous Coward
      FAIL

      I think he was saying...

      ....that the problem is run time rather than an inability to increase instantaneous power consumption.

      Batteries are a problem; imagine how things would be without Li-ion and Li-poly....

    2. nyelvmark
      Boffin

      Power => heat

      Uh, wrong - unless you have some vague meaning of "power" in mind. By the simplest analysis, heat and power are simply two words for the same thing. Perhaps you meant to say that heat <> temperature, except that that would allow you to say, as I can, that I wouldn't mind holding any running CPU in my hand, provided that it was attached to a large enough heatsink.

      1. 401kill
        WTF?

        Heat != Power

        In the simplest layman's terms they may equate, but let's try some simple analysis:

        Power = rate of energy conversion, i.e. energy / time.

        Heat = a form of energy.

        Over time, the energy stored in the battery is converted to heat.

        The more power the processor requires, the more heat is produced in a shorter period of time.

        Therefore increased processor power consumption = more heat to dissipate over the area of the phone = a hotter phone.
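
        To put some hedged numbers on that (the 5Wh battery below is an assumption; the 50W figure is SadBloke's from further up):

          #include <stdio.h>

          /* Sketch: power is energy per unit time, so for a fixed battery the
           * runtime scales inversely with the average draw, and every joule
           * drawn ends up as heat in the handset.  The 5Wh pack is assumed. */
          int main(void)
          {
              const double battery_wh = 5.0;                 /* assumed phone battery */
              const double draws_w[]  = { 1.0, 5.0, 50.0 };  /* 50W per the comment above */

              for (int i = 0; i < 3; i++)
                  printf("%5.1fW average draw: %5.2f h runtime, %5.1fW dissipated as heat\n",
                         draws_w[i], battery_wh / draws_w[i], draws_w[i]);
              return 0;
          }

        At 50W the assumed pack is flat in six minutes, and the handset has to shed 50W continuously while it lasts, hence the oven gloves.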

        I really would prefer increased run time over higher performance. My phone does everything required as long as I remember to charge it every day.

  11. Anonymous Coward
    Thumb Up

    "wiped out in the mobile space"?

    "ARM could be wiped out in the mobile space as there would be no need (or at least, far less of a need) for their power efficient hardware..."

    It would take a while.

    ARM don't build chips, ARM licensees do. These companies/people have years of experience in designing and building system-on-chip designs for specific markets: SoC designs that have all the important components a system needs (often including a DSP or two if the ARM's existing DSP capabilities aren't appropriate for the task at hand).

    ARM-specific features not found on alleged competitors also bring excellent code density (less memory for the same workload, i.e. cheaper) and other benefits along similar lines.

    Yer man from ARM raises an interesting point re the economics of chip manufacture at ever smaller geometries with an ever smaller number of customers (one solution to which is presumably for Intel to buy outfits like ASML), but I'd have been interested in some hard info on chip and wafer volumes currently being built at specific geometries. I'd be amazed if something tried tested proven and relatively cheap around 100nm didn't dominate the market - but I could be wrong. No one except a select few *needs* ~20nm technology.

    Today's Intel have nothing in their bag of tricks once they run out of process enhancements; they've not had a technical success outside of x86 enhancements for decades.

    Obviously the legacy commercial muscle of the Wintel empire is not to be sneezed at, but the future is in System on Chip design.

    It's what your right ARM's for.

  12. Cihatari

    African or European Swallow?

    1x Cadburys Dairy Milk has 35x the energy of a phone battery.

    Are they referring to a 50g bar or the 200g fat b*stard special?

    (Well, you have to know these things when you're a king, you know. )

    1. Anonymous Coward
      Facepalm

      Energy *density*

      You go flying off the bridge, screaming, into the abyss for not noticing that.

    2. Dave Murray
      Boffin

      Either

      35x the energy density, so either.

    3. Steve Evans

      Density is density...

      You think the 200g bar is the fat bastard special?

      So who eats the 1kg bar? Apart from me of course?

      Could run a laptop for a week on one of those, just long enough for my tummy to stop feeling bad!

      Omnomnomnomnom

      1. Chemist

        Chocolate

        Assuming that chocolate is ~100% fat & completely oxidized

        1g = ~37kJ

        So the chocolate equivalent of my 300g laptop battery would be ~11MJ

        That should keep it going a while !
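
        The arithmetic checks out; here it is again with an assumed 56Wh laptop pack thrown in for scale (the battery figure is mine, not Chemist's):

          #include <stdio.h>

          /* Re-run of the chocolate sum above, next to an assumed 56Wh laptop
           * battery for comparison. */
          int main(void)
          {
              const double choc_grams    = 300.0;
              const double choc_kj_per_g = 37.0;    /* ~pure fat, fully oxidised */
              const double battery_wh    = 56.0;    /* assumed typical laptop pack */

              double choc_mj    = choc_grams * choc_kj_per_g / 1000.0;
              double battery_mj = battery_wh * 3600.0 / 1.0e6;

              printf("300g of chocolate: ~%.1f MJ\n", choc_mj);
              printf("Laptop battery:    ~%.2f MJ (roughly %.0fx less)\n",
                     battery_mj, choc_mj / battery_mj);
              return 0;
          }

        On those assumptions the chocolate holds about 11MJ against roughly 0.2MJ for the pack; the exact ratio depends on how generous you are with the pure-fat assumption.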

        1. Anonymous Coward
          Flame

          You could just as easily produce energy for your laptop

          By setting fire to the box it came in. But like the chocolate bar, the only way that works is to consume large amounts of oxygen and produce waste products. It takes a little longer to 'recharge' your chocolate bar if you consider the complete carbon cycle. So it's a little facetious to compare its energy density with a sealed battery.

          1. Chemist

            Plenty of people are thinking about ...

            Fuel cells (although AFAIK not chocolate powered ones)

            Seriously, it's just a device to show how poor the energy density of batteries is compared with, say, the equivalent weight of hydrocarbon. It's the same story as electric car range: 50 miles against 600 miles for a diesel.

            1. Anonymous Coward
              Thumb Up

              Terminator

              Heh, that just reminded me of that scene in Terminator 3 where one of Arnie's "fuel cells" gets damaged.

              Imagine having one of those in your laptop on the train and dropping it on the floor. It would take out 2 or 3 of the train carriages you were travelling in :D

  13. Adam T

    Good article

    Particularly the foundry side of the business.

    Makes me wonder: where are all these miracle technologies, fabrication methods, new materials et al. we read about that are going to revolutionise computing? Are they all dead in the water, or are there just no takers because everyone's enjoying silicon?

    1. Ru

      everyone's enjoying silicon?

      Give it time. Semiconductors take a very, very long time to trickle through from the drawing board to consumer devices. The stuff in your shiny brand new smartphone started life several years ago, and that was based on tried and tested technology.

  14. John Sanders
    Boffin

    Moore's law

    It's not the end of anything. Pretty much, this article summarises what I have been observing for years: that CPUs are farted out of the fab full of bugs, and with barely any more optimisation than what the smaller geometry inherently gives the processor.

    Intel/AMD and the like just spit things out.

    Well, I for one welcome the end of Moore's law and the beginning of the build-a-good-product law.

    What this means is that if they cannot cram more transistors into less space, they will have to start thinking of ways to make the same number of transistors do more, more efficiently.

    Perhaps that will also put an end to the huge cooling monsters modern CPUs require.

  15. Neal 5

    a new way of thinking

    yes, quote from your story. Hits the mark exactly.

  16. Anonymous Coward
    Happy

    Edible Communications

    The holy grail of mobile power:

    Insert bar of chocolate into phone (with 35x the energy density, that's about one a month at a quid apiece - who needs chargers?)

    Extract and consume energy (=calories) from chocolate - leaving just the flavour.

    Lick phone (OK, so this bit needs work).

    Organic and fully renewable.

    1. Darryl

      One problem

      What happens when you accidentally leave your phone on the dashboard of your car in the hot sun?

      1. Francis Boyle Silver badge

        Telephonic diarrhea?

        A letter ->k

      2. Steve Evans

        @Darryl

        You lick the dashboard of the car obviously!

        1. Vic

          @Steve

          > You lick the dashboard of the car obviously!

          Don't use a suction cup to fix it to the window, then :-)

          Vic.

  17. someone up north
    Megaphone

    what is needed is a "Lesse's law" to replace Moore's law

    Moore's law -> Lesse's law

    what is Lesse's law:

    let's divide and share out,

    just like open source

    1. Magnus_Pym

      We don't need no stinking laws.

      Divide Lesse's law by Moore's law and the laws cancel out. We would just be left with less is more.

  18. sueme2

    mmm chocolate

    Better batteries? Ask the Chocolate Factory.

  19. nyelvmark
    Boffin

    Apples and oranges, chalk and cheese, ARM and x86

    So many commentards seem to be falling into the trap of comparing the ARM with the x86 that I feel moved to write an educational piece.

    The problem revolves around the term "CPU". Both an ARM and an x86 can be classified as Central Processing Units, but what exactly do we mean by "processing" and where exactly is "central"? I blame the OED, which should have a ruthless and notoriously effective hit-squad, silently removing from the gene pool all those who wilfully introduce ambiguity into the language.

    Some history: The RISC versus CISC debate came to a head when the world's most successful microprocessors were the Z80 (about 500 different machine instructions) and the 6502 (about 150), but petered out when Intel introduced the 486, which used a RISC-like core with microcode to implement the CISC instruction set, with simple instructions being passed straight through to the core for rapid execution. This made the simplest instructions execute at near-RISC speed, whilst retaining all the advantages of CISC, such as hardware iteration.

    Almost all the ARMs sold today are not physical devices, but licences to use the ARM architecture in a custom SOC (system-on-a-chip). The x86 is _already_ an SOC - one that Intel have been developing for 20 years. People design their own SOCs around the ARM core because their applications are nothing like a PC, so the x86 would be a poor choice. If your application _is_ similar to a PC - particularly, if you want it to run PC software, or to port PC software to it - then an x86 is the obvious choice, unless you think you can design something better, given that Intel have a 20 year head start.

    The point is that to compete with an x86 you need to bolt a lot of stuff like multi-parallel pipelining, speculative pre-processing etc. onto the ARM, and to compete with the ARM you need to cut the x86 open, extract the RISC-like core, and then bolt your own bits onto it. To draw an analogy or two, an ARM CPU is like C, and an x86 is like C++. We'd much rather have the simple elegance of the former, but sometimes tractors are better than Ferraris.

    I hope this helps somebody.

    1. druck Silver badge

      @nyelvmark

      The 486 was a classic CISC design like the 386 before it and the Pentium after it. The Pentium Pro was the first to use the P6 architecture, which translated x86 instructions into more RISC-like micro-ops, which the core executed.

      1. nyelvmark

        @druck

        I stand corrected. Thank you.

    2. BristolBachelor Gold badge

      x86 SOC ??

      I must admit to being a little out of date with the recent Intel x86s, but do they now have SoCs?

      No need for a "chipset"? The processor connects directly to the disk, Ethernet, USB, sound, PCI, etc.? The processor includes a real-time clock and interrupt controllers, perhaps DMA too?

      If you look at the design of a system using an ARM SoC, there is normally very little else needed, whereas a design using an x86 still needs at least a southbridge chip plus several others.

    3. Peter Gathercole Silver badge
      Meh

      @nyelvmark - "20 year head start"

      I'm interested in what you are comparing with what.

      ARM silicon started appearing at about the same time (give or take a year) as the 80386, and IIRC, ARM systems actually fared relatively well in benchmarks against the i386, even though they ran at much lower clock speeds.

      So although Intel had all those years of 8086 development under their belt (which, incidentally, was less than 10 years), as 32-bit architectures you can consider the ARM and the first 32-bit x86 processors to be of the same generation, which actually makes the ARM a more 'mature' processor than the 'great leap forward' of the i486.

      1. Anonymous Coward
        Coat

        @nyelvmark

        It's interesting you mention the Z80 vs 6502 years... I remember looking at the instruction sets for each, and the clock cycles required for their instructions. The 6502 could do most things way faster than the Z80, even when the Z80 was running at a higher clock speed. So it came as no surprise that ARM, a company that spawned out of Acorn (makers of 6502-based machines such as the Atom, BBC Micro and Electron), was focused on small but efficient instruction sets.

        I was a 6502 man... In my book you can't call yourself a machine code programmer if you can't do multiplication with nothing more than bit shifts and add instructions!

        Mine's the one with several Rodnay Zaks books in the pockets.

        1. nyelvmark
          Happy

          Nostalgia

          Ah, yes - Rodnay Zaks of the improbable name, author of "Programming the 6502" and "Programming the Z80" which duplicated about 25% of their content and dedicated an entire page of mostly white space to each instruction. The NOP pages were especially helpful.

          Zaks' explanation of multiplication (8-bit processors didn't have hardware multiply) however, stood me in good stead many years later when I needed to multiply a 17-bit number and a 23-bit number (from 2 A to D converters) on an 8051. I could do it with a 32-bit C maths library, but that involved lots of redundant shifts and adds because of all those leading zeroes, and I was running out of realtime. So I did the multiplication using shifts and adds in C and halved the run time. I checked the assembly-code output, and it was exactly the same as I would have written myself, so I left it in C.
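
          For anyone who never had to do it by hand, this is the shape of the shift-and-add trick being described (a generic sketch in portable C, not the original 8051 routine):

            #include <stdint.h>
            #include <stdio.h>

            /* Generic shift-and-add multiply: long multiplication in base 2.
             * Each set bit of b adds a correspondingly shifted copy of a. */
            static uint64_t shift_add_mul(uint32_t a, uint32_t b)
            {
                uint64_t product = 0;
                uint64_t addend  = a;

                while (b != 0) {
                    if (b & 1u)        /* low bit set: add the shifted multiplicand */
                        product += addend;
                    addend <<= 1;      /* next bit is worth twice as much */
                    b >>= 1;
                }
                return product;
            }

            int main(void)
            {
                uint32_t a = 0x1FFFF;    /* largest 17-bit value, as in the anecdote */
                uint32_t b = 0x7FFFFF;   /* largest 23-bit value */
                printf("%u * %u = %llu\n", (unsigned)a, (unsigned)b,
                       (unsigned long long)shift_add_mul(a, b));
                return 0;
            }

          The win on a small micro comes from stopping as soon as b runs out of set bits, which is exactly the redundant work on leading zeroes that a full 32-bit library multiply can't avoid.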

          I too preferred the 6502 over the Z80, but in the days before instruction caching, pipelining etc, the Z80's better code density amounted to faster execution speed for many routines. And how could you not like DJNZ, for example?

          1. Peter Gathercole Silver badge
            Meh

            DJNZ e

            Z80 DJNZ e - 13 T-states if the branch is taken, or 3.25 microseconds at 4MHz

            6502 DEY ; BNE e - 5 clock cycles if the branch is taken, or 2.5 microseconds at 2MHz

            OK, it's one more byte (3 rather than 2), but your assertion that code density == speed is completely wrong when considering 8-bit microprocessors, because there was no overlap between instruction fetch, decode and execution. The time of any instruction on either a Z80 or a 6502 is exactly what it says, from fetching the instruction and its arguments through to completion. From the end of the last instruction to the end of the next is an absolute time, and is easy to determine.

            Many Z80 instructions run to 15-20 T-states, meaning there are some situations where it is quicker to run several simple instructions than one complex one, even in Z80 machine code.
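
            Plugging those figures in (instruction time is just cycles divided by clock frequency; both clock rates are the ones quoted above):

              #include <stdio.h>

              /* Instruction time = cycles / clock, using the figures quoted above:
               * Z80 DJNZ taken = 13 T-states at 4MHz, 6502 DEY;BNE taken = 5 cycles at 2MHz. */
              int main(void)
              {
                  printf("Z80  DJNZ     : %.2f us\n", 13.0 / 4.0e6 * 1.0e6);
                  printf("6502 DEY; BNE : %.2f us\n",  5.0 / 2.0e6 * 1.0e6);
                  return 0;
              }

            Which gives the 3.25us and 2.5us above: one extra byte, but the 6502 pair still finishes first at half the clock rate.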

  20. SkepticalOptimist
    Happy

    Exactly what you'd expect to hear from ARM

    ARM being a design company, of course one would expect them to say that the future of the microprocessor will depend on design innovations, not on continued process technology innovation.

    I'm sure technology-focused Intel will make the case (if not at Hot Chips, then at IEDM, at ISSCC, or at their developer forum) that Moore's law is alive and well.

    The death of CMOS technology scaling has been predicted by many in the past, and all of them have had to eat their words. Segars makes many good points, but calling the scaling 'brick wall' is a risky business.

    1. Naughtyhorse

      errrrr...

      Q: didn't I read here that the great Moore hisself, no less, admitted that the 'law' only ever really worked for the 70s and 80s a bit, and was basically bollocks anyway?

      (Ans: yes I did read it here... just can't remember where!)

  21. Erin in Rowayton

    Erin in Rowayton

    Small correction: it was not TJ Rodgers but Jerry Sanders of AMD who once said that "real men have fabs." All of this author's comments about lithography challenges are well known. Intel has stated plans down to about 10nm with lots of conviction. If one shifts the Moore's Law rule to one of simply system speed (rather than transistor doubling every two years, with implied speed), then things like interconnects, stacking, architectural heterogeneity and new materials such as those related to III-V can offer very large functional and processing-speed benefits. But only the largest companies will be best positioned to capitalize on such inventions. And just for fun: suppose a "wild duck" use for some of those billions of transistors was some clever energy harvesting, then what happens to the battery life issue? So, which companies are best positioned?

  22. Anonymous Coward
    Holmes

    "x86 is _already_ an SOC"

    Where can I find an SoC with a decent-performance core with DSP extensions, hardware-accelerated cryptography, two ready-to-go Ethernet interfaces, a PCI interface, controllers for DRAM and flash ROM, and power consumption well under 2W? And more.

    Intel, that's where.

    It's their IXP42x.

    Or rather, it was, back in 2006 or so, because it's an Intel Xscale (StrongARM) SoC, not an Intel x86 SoC, and it dates back to when Intel actually had a StrongARM business (which they sold off to Marvell around that time).

    If anyone wanted such an SoC today they could pick from a number of ARM licensees.

    If Intel have such an SoC product today based on x86, they've kept it very quiet. What Intel do have today is no more an SoC than stuff from the Motorola 68K era twenty years ago. A handful of companies may have genuine x86 SoCs, but they have no unique selling point (well, one, see below) that would make anyone pick them over an ARM SoC.

    Meanwhile, in the years since 2006, the various ARM licensees have introduced various other capabilities to the ARM range, capabilities which leave any likely x86 SoC literally years behind.

    "If your application _is_ similar to a PC - particularly, if you want it to run PC software, or to port PC software to it - then an x86 is the obvious choice"

    Watson, I do believe he's got it! If you're Microsoft-dependent you're x86-dependent too. Otherwise, you're not, as any mobile phone, router, TV, printer, or other mass-market cost-sensitive SoHo equipment will prove.

    Still, if you want to call a random x86 chip an SoC, despite the amount of motherboard glue it needs around it which an ARM licencee puts **on the chip itself**, feel free. But the industry in general has a rather more useful definition.

    1. Tom 38

      x86 SoC

      They do exist: http://www.vortex86sx.com/

      but like you, on the whole I think I'd be happier with an ARM designed SoC.

      1. Steve Evans

        Vortex...

        Damn, beat me to it!

        That page is a bit out of date now. The latest is the Vortex86mx+ which integrates the RAM and video memory, so you don't need to provide both externally, just RAM, and then the onboard VGA interface can "borrow" some of it.

        Got the slightly older 86MX here; it runs at a smidge under 1GHz, with onboard sound, VGA, two serial ports, Ethernet, USB etc. The box I have also has an SD card slot. Power consumption? Well, I can run it off a USB port, so total system consumption is under 2.5 watts. Obviously I can't do this if I start hanging hard drives off the USB ports though.

        Quite happily running Linux, complete with Gnome desktop with an SD friendly zero swap file.

  23. s. pam Silver badge
    Gimp

    What a funny contradiction ...

    This fortnight's Economist has a fascinating article on Moore's "Law" and where things may be going. You may want to read http://www.economist.com/node/21526322, which covers how architectural changes at the wafer level may well keep Moore's "Law" going for the next few years.

    Even though Moore is dead, the assertion certainly is NOT!

  24. Eddie Edwards
    Thumb Down

    Coherency is not a requirement

    As the Transputer guy said just the other day, when will Americans get over the fact that you can't have a hundred cores all with the same view of a single shared memory system? ARM, you're going to screw your architecture if you kow-tow to this nonsense. Cell does just fine without coherent memory. Just put in fine-grained partial cache flushes and async notification of when they complete, and you're golden. Coherency costs - you spend silicon on it, you decimate your design space to achieve it, and then you use Watts to maintain it. Spend that silicon on something useful, and save the power.

    That goes double for hardware that's going to be specialized to radio transmission using a given protocol anyway, and can be programmed once for all users.

    1. asdf
      FAIL

      but the CELL BE blows

      >. Cell does just fine without coherent memory

      Funny, the marketplace said otherwise. And as anybody who has run any kind of science app on a CELL for any length of time knows, the SPUs often become unresponsive and require a reboot. You can make your point, but using a Sony/IBM implementation as your example probably wasn't the best choice.

      1. Magnus_Pym

        I agree/disagree

        Yes, CELL isn't great, but referencing the marketplace as an arbiter of what is good fundamentally misses the point.

  25. SoftFox
    Holmes

    Re: Apples and oranges, chalk and cheese, ARM and x86

    You make some good points; however, I wouldn't say the x86 was an SoC in the conventional sense. It's just that everybody expects the CPU to provide more features through the years as part of the basic 'brain'. For example, early on a hardware co-processor was developed and offered for floating-point maths. Now it's a standard part of x86 and most CPUs.

    The ARM is well advanced on this due to its small size and the variety of developers all wanting different features. An ARM SoC is likely to have a GPU, custom DSPs, and most of the associated buswork etc. - basically a full chipset, which Intel/x86 has only just started fabbing.

    On a related note, and in response to some other comments: as Intel has noted to its great cost (in dollars), it is far, far easier to build up from a very efficient chip, adding stuff as required and keeping power consumption down by optimising along the way, than to do the reverse. This is constantly keeping ARM ahead with faster, cleaner development etc., and I suspect it will do for some time. Secondly, variety is the spice of life, as they say. With Nvidia, Qualcomm, Samsung etc. all competing now better than ever (dual core/quad core etc.), speeds are being pushed along even faster.

  26. asdf
    Boffin

    what will break Moore's law

    Quantum computers, much like the earliest digital computers, will break Moore's law, but until then it's all digital band-aids.

  27. John Savard

    Already Invented

    The power source of the future he is looking for is called the "fuel cell". They used them on the Apollo moon missions: electrical power at chemical-fuel energy densities.

    Admittedly, they're more suitable to luggables than pocketables... but if people can refuel cigarette lighters, then I suppose a hydrogen port on a cell phone is also possible.

    Or that's it - it might not be great for global warming, but why not a cigarette lighter that supplies heat to run a tiny steam engine? Powering a little generator, naturally.
