HP hooks up with Calxeda to form server ARMy

HP is partnering with ARM licensee Calxeda to build energy-efficient micro-servers for large data centres, the WSJ reports. Calxeda is producing quad-core, 32-bit, ARM-based system-on-chip (SoC) designs developed from ARM's Cortex-A9. It says it can deliver a server node with a thermal envelope of less than 5 watts. In the summer it …

COMMENTS

This topic is closed for new posts.
  1. Flocke Kroes Silver badge

    Power, performance and the thing Intel doesn't talk about

    So in a year or two the next generation of Intel chips will be able to compete on power requirements with the last generation of ARMs. Some tasks are sequential and benefit from a big, high-performance core, but servers mostly run parallel tasks that can be run more cost-effectively on several small cores. Intel's real challenge would be to compete on price, but I cannot see that happening this decade.
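
    To make the point concrete, here is a toy sketch (the workload is invented; it just shows independent requests spreading across however many small cores you have):

        # Toy sketch: throughput-bound server work parallelises across
        # many small cores. The workload here is invented for illustration.
        from multiprocessing import Pool
        import hashlib

        def handle_request(payload: bytes) -> str:
            # Stand-in for an independent, per-request task (no shared state).
            return hashlib.sha256(payload).hexdigest()

        if __name__ == "__main__":
            requests = [f"request-{i}".encode() for i in range(10_000)]
            # Each worker can sit on a slow, low-power core; total throughput
            # scales with core count, which is the ARM many-core pitch.
            with Pool(processes=4) as pool:
                results = pool.map(handle_request, requests)
            print(len(results), "requests served")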

    1. Anonymous Coward

      If I'm reading this right

      ...Intel still has a ways to go on power because this seems to imply that the 5 watts includes memory:

      "Calxeda's initial reference design will be based on a quad-core Cortex A9 SoC that consumes 5W including associated DRAM. "

      http://www.eetimes.com/electronics-news/4213963/Calxeda-describes-ARM-server-SoC
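
      Some back-of-envelope maths on what 5W per node buys you (only the 5W figure comes from the article; the x86 numbers below are my own guesses):

          # Back-of-envelope rack power comparison. The 5 W/node figure is
          # from the EE Times piece above; the x86 figures are guesses.
          arm_node_w = 5            # quad-core Cortex-A9 SoC including DRAM
          x86_node_w = 150          # assumed: typical 1U x86 server under load
          rack_budget_w = 10_000    # assumed: ~10 kW usable per rack

          print("ARM nodes per rack:", rack_budget_w // arm_node_w)  # 2000
          print("x86 nodes per rack:", rack_budget_w // x86_node_w)  # 66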

  2. Anonymous Coward

    So what would these servers be used for?

    The general understanding, from what I have read, is that Windows 8 will support ARM processors, but existing x86 software would not be able to run on such a box, so what would ARM-based servers be used for?

    1. Matt Bryant Silver badge
      Meh

      Webserving?

      After all, for real enterprise apps like databases or virtualisation you need the extra memory addressability of x64. If it was 64-bit ARM that HP was working on, then Chipzilla might be worried; otherwise this is more of a threat to webserver CPUs like Niagara.

      1. Peter Gathercole Silver badge

        @Matt

        I seriously consider the push to virtualisation to be a hardware-vendor-led campaign to justify the purchase of ever larger and more expensive single servers.

        It would be just as easy to have a server-farm mentality using very small-footprint, high-density, separate-address-space individual systems with networked shared filesystems. You know, network-booted devices with common OS images. Maybe dozens of them per 2U server, like BlueGene or blade centres, running much simpler OSes than Windows has become.

        In fact, if the cost and power consumption is right, why go down the distant-processing model at all? Put ARMs in the display devices (oops, they are already there!) with decent network connectivity (which is already required for RDP/VNC/Citrix) and a lightweight network-based OS, and dispense with the huge server-based processing complex completely. Move back to a file-server model with much more modest systems with lots of storage in the data centre.

        You would have to be careful about management issues, but I'm not advocating a return to every-PC-has-its-own-OS-and-applications; it's more like Sun's "The Network IS the Computer" diskless-boot model, so that the device on every desk is identical. One fails, replace it with another. All user data is on the fileservers, and the device on your desk is just a way of accessing it.

        This is where low-power SoC ARM systems can really go, and they could probably meet at least 70% of the requirements of the business community with huge cost and energy savings.
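
        As a toy illustration of the common-image idea (a stand-in for a real PXE/TFTP netboot setup; the server and image name are made up):

            # Toy illustration of the common-image model: every diskless
            # client fetches the same OS image from a central server at
            # boot. A real deployment would use PXE/TFTP; the image name
            # here is made up.
            from http.server import HTTPServer, SimpleHTTPRequestHandler

            class ImageHandler(SimpleHTTPRequestHandler):
                def do_GET(self):
                    # Whatever path a client asks for, serve the one golden
                    # image, so every device on every desk boots identically.
                    self.path = "/common-os-image.img"
                    return super().do_GET()

            if __name__ == "__main__":
                HTTPServer(("0.0.0.0", 8080), ImageHandler).serve_forever()

        One golden image, identical devices: replacing a failed box is a swap, not a rebuild.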

        1. Matt Bryant Silver badge
          WTF?

          RE: @Matt

          "....hardware vendor led campaign...." What have you been smoking? Virtualisation allows much more efficient use of servers, the vendors were much happier back in the days of 20% average server useage before virtualisation came along and upset the applecart. And the majority of virtualised environments (even the UNIX ones) I hear of are going in as consolidation exercises on blades or 2- or 4-socket rack servers, not big servers.

          As for ARM-based thin clients, the article specifically says these are datacenter servers, not desktop VDU boxes. And please don't mention Sun's thin-client farce, not unless you're doing so with a suitably sarcastic tone. You'll find tech like VMware's VDI offers much more flexible options.

          Maybe hp are aiming for the telecoms market or maybe they see the opportunity for some form of high-speed, massively-parallel switching device. Or just a webserver consolidation platform.

          1. Peter Gathercole Silver badge

            Ahh yes. The vaunted efficiency argument....

            I'm not sure I agree about the vendors preferring the days of separate servers, because the rack-mounted server market became very cut-throat and the vendors were not making much money per server, even if they were selling a lot of them.

            What virtualisation has allowed is for vendors to tell customers that they are justified in replacing perfectly serviceable datacentre servers, with years of life left in them, with brand-new, high-margin, expensive servers. For the vendors, high margin and small volume is preferable to low margin and high volume. That's why IBM's mainframe business is still very profitable.

            I'm sure that the vendors can produce spreadsheets and charts to prove that they will save money on power, space, infrastructure and support costs by doing this, but that is what marketing people do. It will be interesting to look back in a few years' time, though I'm not sure anybody will publish figures showing whether the savings were actually realised.

            I was working on introducing virtualised systems six years ago in the UNIX space, and whenever we suggested combining workloads so that average usage approached 90%+, we got tripped up by the customers (separate departments in a large UK bank buying computer services from a central IT department), who wondered loudly what would happen to their workloads if unscheduled peaks in separate workloads coincided. They never liked the fact that in those situations they might get less predictable batch timings than if they paid for guaranteed capacity. The result was that we put hard limits on each of the LPARs, effectively the same as giving them systems of fixed size. They could not afford to miss critical deadlines because of uncertainty over job run times.

            I admit that this was before it became easy to shuffle partitions live between different physical systems, but it became clear that end customers were not prepared to compromise in order to make more efficient use of the installed capacity.

            I'm not involved in such work at the moment, so maybe 'education' or 'marketing presentations' are better at convincing customers nowadays.
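
            P.S. That coinciding-peaks worry is easy to demonstrate with a toy simulation (every figure below is invented; nothing like the real bank workloads):

                # Toy simulation of the coinciding-peaks problem: workloads
                # sized for high average utilisation will occasionally peak
                # together and blow the capacity. All figures are invented.
                import random

                random.seed(1)
                capacity = 110        # total machine capacity (arbitrary units)
                departments = 4       # workloads sharing the box
                base, peak = 20, 40   # steady load vs occasional peak
                p_peak = 0.2          # chance a department peaks in any hour

                overruns = 0
                for hour in range(1_000):
                    demand = sum(peak if random.random() < p_peak else base
                                 for _ in range(departments))
                    if demand > capacity:   # somebody's batch run slips
                        overruns += 1
                print(f"capacity exceeded in {overruns} of 1000 hours")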

  3. Anonymous Coward

    "what would these servers be used for?"

    I'm not sure which planet you're on, but the planet I'm on has software available out there that doesn't need Windows. I believe some of it even comes without a cost to buy, and you can get the source if you want. But please don't tell the usual Microsoft-dependent suspects in IT, as they may see these things as a threat to their "estate".

    1. bazza Silver badge
      Thumb Up

      Quite right

      But it is highly likely that MS have realised this, and that is why there are all sorts of ARM-ish noises coming from them. Of course, Linux is already there, but it has to cope with the wide variety of ARM SoCs. If MS can standardise ARM machines in the way PCs currently are standardised, that would help Linux too.

      Watch out, Intel.

  4. Torben Mogensen

    Cloud

    In addition to web servers, cloud servers are also a possible application.

  5. Anonymous Coward

    "all sorts of ARM-ish noises coming from [MS]"

    Except there is so much confusion amongst the noise that it's hard to tell fact from friction.

    "the wide variety of ARM SOCs. But if MS can standardise ARM machines in a way similar to PCs currently are then that would help Linux too."

    Absolutely spot on. The law of unintended consequences may work quite nicely in this case. We can but hope.

    Hey Matt.

    "for real enterprise apps like ... virtualisation you need the extra memory addressability of x64. "

    Wake up, Matt. Why would I want to virtualise when the real hardware is going to be around the size of a mobile-phone motherboard, using a bit less power than a mobile phone (no RF, no display, no keyboard) and rather less expensive too (less hardware, plus no RF licences, no battery, etc.)? Maybe that's not quite happening this week, but why not soon?

    Obviously I realise virtualisation is de rigueur amongst Wintel-dependent IT departments, but outside the asylum...

    1. Matt Bryant Silver badge
      FAIL

      RE: Wake up Matt

      "....de rigeur amongst Wintel-dependent IT departments...." Actually, as well as Windows, we have RHEL, AIX and hp-ux all virtualised on different platforms. Apart from some VDI test instances, it's all 64-bit, and about 95% of the images are using much more than 2GB of RAM. Maybe you should get some experience in an enterprise environment. Good luck trying to run an enterprise instance of Oracle RAC or DB2 with less than 2GB of RAM for your hot tables, even with SSDs!

      1. An ominous cow heard

        Maybe you should get some experience in reading and logic, Matt.

        You say you have "RHEL, AIX and hp-ux" virtualised.

        Does that sound like the behaviour of a Wintel-dependent IT department?

        Just because your IT department is not Wintel-dependent doesn't mean there isn't a huge number of IT departments that are.

        Don't forget also that, if appropriate, you can have >2GB of physical memory without a >2GB logical address space, just like Xeons used to.
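
        For reference, the arithmetic behind that trick (PAE on the old 32-bit Xeons, if memory serves: 36-bit physical addresses behind 32-bit virtual ones):

            # The arithmetic behind ">2GB physical without >2GB logical":
            # PAE-era Xeons used 36-bit physical addresses while each
            # process kept a 32-bit virtual address space.
            virtual_bits, physical_bits = 32, 36
            print(f"per-process address space: {2**virtual_bits // 2**30} GB")   # 4 GB
            print(f"addressable physical RAM:  {2**physical_bits // 2**30} GB")  # 64 GB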

        1. Matt Bryant Silver badge
          Stop

          RE: An ominous cow heard

          "Maybe you should get some experience in reading and logic, Matt....." Maybe you should actually stop for a second and get an adult to help you think about why we're using expensive platforms like 64-bit UNIX on Power and Itanium rather than just all Wintel/Lintel? That's because x64 doesn't (yet) meet our business requirements, so there is no way an even less-capable ARM CPU, 32-bit or 64-bit, is going to do the job. And seeing as I really don't think IBM and hp are making kit just for us, it would seem there is a large market out there that also won't be able to do what they need to do on x64, and definately not on ARM.

          "....you can have >2GB of physical memory without a >2GB logical address space, just like Xeons used to...." Yes, and that was a software trick, which gave us all types of fun with memory holes! I remember working with hp Netservers many years ago that were prone to the issue - your memory useage would go above 2GB and suddenly the server crashed! We switched to Compaq Proliants because they had a clever trick for pushing the memory hole higher up the memory, so you only hit it if you went over 3.6GB, but it was still there. Thanks, but I think I'll avoid that again, especially as Atom doesn't have that issue.

          Yes, a low-powered option to x64 will do well in such low-performance tasks as webserving, fileserving, NAS and printserving, which are pretty much OS-agnostic, but it won't even hold a candle to current low-end Nehalems in terms of performance, let alone Itanium, and performance becomes more and more of a factor as you get into departmental servers and higher up the enterprise.

          Low CPU current draw alone is not the only factor in making server buying decisions, especially for generic business servers, and especially if it means you have to recompile and re-test all your applications. Atom won't require a recompile or waiting for my app vendors to re-accredit and support a new core, as ARM will.

          Webserving has a low barrier of entry for a new CPU design: costs are usually a prime concern, so low current draw is a big advantage; massive memory footprints are not required; lots of small threads can be handled in parallel, each on a separate core; and webservers will probably perform just as well on ARM as on x64, since the limiting factor is usually the bandwidth available to the remote user accessing the website.

          Now, you try and string together a coherent post about why you disagree and we can call it a discussion; otherwise you're just spouting laughable wishful thinking.
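
          P.S. The "lots of small threads" point is easy to picture with a minimal sketch (a toy, not any particular product):

              # Minimal sketch of why webserving suits many small cores:
              # each connection is a cheap, independent, I/O-bound task,
              # usually waiting on the client's bandwidth, not the CPU.
              import socket
              from threading import Thread

              def serve(conn: socket.socket) -> None:
                  with conn:
                      conn.recv(1024)  # read the request; contents ignored
                      conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

              if __name__ == "__main__":
                  srv = socket.socket()
                  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                  srv.bind(("0.0.0.0", 8080))
                  srv.listen(128)
                  while True:  # one lightweight thread per connection
                      conn, _ = srv.accept()
                      Thread(target=serve, args=(conn,), daemon=True).start()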

  6. nanohest

    Re: Webserving?

    "for real enterprise apps like databases or virtualisation you need the extra memory addressability of x64"

    no

    for real enterprise apps like databases or virtualisation, you need the extra memory addressability of System Z

This topic is closed for new posts.
