IBM's OpenPower gang touts first proper non-Big Blue-badged server

The OpenPower Foundation unveiled new hardware – including its first commercially available server – at its first OpenPower Summit on Wednesday, a mini-event nestled within Nvidia's GPU Technology Conference in California. The foundation has also forged a relationship with the Facebook-led Open Compute Project, which operates …

  1. Nate Amsden

    might this

    OpenPower thing, with Tyan and the like making parts for it, end up like Itanium? For a while a few "white box" companies were making Itanium kit, but the market just wasn't there for them (I think HP is the last shop making Itanium systems). I expect the same to happen to Power.

    I'm sure it will continue to do fine in its IBM niche on IBM hardware.

    1. A Non e-mouse Silver badge

      Re: might this

      I think there's a real difference between Itanic and this OpenPower initiative.

      Itanic was a plan by HP and Intel to design a brand new processor architecture to replace PA-RISC and x86. It failed because it was late, slow and expensive.

      The OpenPower initiative is to get more people using and buying Power-based systems. I suspect IBM's idea is that if people start with cheaper OEM Power-based systems, IBM will be able to up-sell them to its own (more expensive) systems. By increasing the availability of Power-based systems, it keeps the architecture relevant and people buying Power-based products and services.

      Will IBM's plan work? I have no idea. But there are two massive mountains in front of them, x86-64 and ARM, and IBM is going to have to work to keep the Power architecture in the frame.

  2. CheesyTheClown

    What's the advantage?

    ARM is already an alternative architecture to x86, and ten years after first hearing about ARM in the server space, enough companies have invested that something finally looks like it might start happening. GPU acceleration of most code sounds really nifty, but unless compilers come out which can pretty much handle accelerating code as part of the normal compiler chain, its benefits will remain untapped by most. I think database developers, more than most, are generally quite happy just adding more capacity to compensate for the performance lost in favor of massive amounts of clean code.

    Sure, I can get PPC Linux and OpenStack... great! But I can also get that on ARM... so why would I consider adding PPC blades to my infrastructure when, in reality, I can't justify the cost difference to begin with? ARM and x86 are CHEAP and power efficient. I don't even care what CPU I'm using so long as the applications just work and the power bill is small. POWER almost certainly is not cheap. The CPU they're discussing is a massive powerhouse, but I would be absolutely shocked if it turned out to have an equal or lower TCO after three years.

    What do I do when people grow bored of this in a year? What do I do when I can no longer buy new versions of the chip?

    1. PowerMan@thinksis

      Re: What's the advantage?

      Power8 technology offers roughly 2X the performance of x86. Factor in the hypervisor options (no hypervisor, PowerKVM or PowerVM), then look at the cost of the Linux-only servers, and your assumption that they are more expensive no longer holds. Look at http://www-03.ibm.com/systems/power/hardware/linux.html to see for yourself. Remember, those prices are list. Also note the performance, scalability and reliability Power is known for, and you can see Power8 has redefined itself.

    2. Michael Wojcik Silver badge

      Re: What's the advantage?

      GPU acceleration of most code sounds really nifty, but unless compilers come out which can pretty much handle accelerating code as part of the normal compiler chain, its benefits will remain untapped by most

      It doesn't have to be done by the compiler; developers just need to adopt better techniques. There's a good article in the June 2014 CACM on using an embedded DSL to produce highly optimized CUDA and other sorts of SIMD code from simple high-level abstractions. Sometimes these DSLs (particularly "deep" EDSLs, which output parse trees rather than source in the embedded language) offer optimizations that can't safely be done on code in C and similar languages, such as map fusion.
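
      By way of illustration, here's a minimal toy sketch in Haskell (my own invented example, not the actual DSL from the CACM article): because a deep EDSL builds the program as plain data, map fusion becomes a one-line rewrite rule applied to the tree before any backend sees it.

      -- A toy deep EDSL: programs are an AST an optimiser can rewrite
      -- before a backend (CUDA, SIMD, or here a plain interpreter) runs them.
      data Vec
        = Input [Double]              -- a literal input vector
        | Map (Double -> Double) Vec  -- elementwise map over a vector

      -- Map fusion: collapse nested maps into a single traversal. Easy on
      -- the AST, but rarely safe for a C compiler to do blindly on loops.
      fuse :: Vec -> Vec
      fuse (Map f (Map g v)) = fuse (Map (f . g) v)
      fuse (Map f v)         = Map f (fuse v)
      fuse v                 = v

      -- Reference interpreter; a real backend would emit GPU kernels instead.
      eval :: Vec -> [Double]
      eval (Input xs) = xs
      eval (Map f v)  = map f (eval v)

      main :: IO ()
      main = print (eval (fuse (Map (+ 1) (Map (* 2) (Input [1, 2, 3])))))

      That prints [3.0,5.0,7.0] from one fused pass rather than two traversals; swap eval for a code generator and you have roughly the shape of a deep EDSL pipeline.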

      Creating those EDSLs is not particularly difficult, and learning them ranges from easy to trivial, depending on the application domain.

      Then you have newer general-purpose languages which are designed to work well with SIMD architectures, such as Julia. They've mostly been developed, and taken up, by the number-crunching community, but then that's mostly where the vectorizable applications are.

  3. Matt Piechota

    Linux on POWER

    I had some Linux on POWER stuff a bunch of years ago (p595 and p570). They were cool boxes (the 595 especially), but Linux on POWER was always just odd enough that it was tough to put to good use. Mostly, our internal customers would come along wanting to use one, then later mention that they had some commercial software (like MATLAB) that only comes in x86 binaries. Sorry, not going to work.

    Maybe there's a market, since ARM has somewhat pried open the de facto x86-ness of Linux, but realistically I don't see many commercial vendors supporting POWER (if they even know it exists). After that you're looking only at the FOSS stuff, and it's tough to make a case for going in this direction when x86 is so common and well supported. The only place I can really see this working would be something that lends itself to POWER's advantages, and only (as someone above mentions) if the compilers actually optimize well.
