IBM gets into server transit business

Put a Big Blue wrapper around your legacy apps and cut data centre operational expenses, floor space and energy costs. IBM has bought a company so it can migrate applications from competitors' boxes onto its own mainframes, PowerPC and Intel servers. The company is Transitive and its QuickTransit technology can dynamically …

COMMENTS

This topic is closed for new posts.
  1. Matt Bryant Silver badge
    Boffin

    Very smart move.

    I bet the deal is causing a few nervous looks at Apple. A much smarter deal than Sun's buys of StorageTek and MySQL, and probably at a fraction of the cost too! We've used Transitive to migrate old SPARC apps onto HP Itanium and it just works, so I'm hoping IBM don't stifle the Itanium side of the code-tree, but IBM have now probably made the Power series the first choice as a simple migration platform with punch.

  2. Anonymous Coward
    Thumb Up

    Hurry! We need this.

    In the new world of virtualization, it only makes sense to use this technology to consolidate on the best hardware available: mainframes and POWER6.

    Sun/Fujitsu really missed the boat by only offering OS container technology, and LDoms are only available on the very low-end systems, which can only partition threads to specific OSes.

    This will be huge here on Wall Street, as all the IT shops have efficiency among their top five priorities.

  3. Anonymous Coward
    Stop

    Efficiency

    Strange...

    just when I thought containers were the most efficient way to virtualize/consolidate things...

    All other technologies (LDoms, ESX, Transitive) have much more overhead...

  4. David Lester
    Coat

    A former Labour Minister does Nominet's job for them...

    Alisdair was a lecturer here, not a research student.

    And as I recall it, the initial development of the idea was undertaken by final year undergraduates.

  5. Matt Bryant Silver badge
    Happy

    RE: Efficiency

    So, never heard of HP's hardware partitioning, then? It allows you to split their cell-based Integrity servers and Superdomes into multiple, hardware-independent servers inside the same chassis, with absolutely ZERO software overheads for the individual partitions, no shared resources, and no chance of one partition taking down another (as there is with the Sun partitioning/virtualising technologies). Just a bit more efficient than anything Sun has, even in the vapourware category.

  6. This post has been deleted by its author

  7. zvonr

    Time to migrate away from Power :-)

    Maybe IBM wants to migrate away from the Power architecture? It worked really well for Apple :-).

    x86 is getting faster and more scalable with every generation, and at the volumes x86 ships, no other platform can really compete, including Intel's own Itanium....

    Either way, emulation is always slow... and less reliable...

  8. Anonymous Coward
    Happy

    Re: RE: Efficiency

    ...smugging myself up so I can sound as much like Matt as possible.... OK.... Feeling quite smug now...

    Matt, perhaps you have not heard of Sun's Dynamic System Domains, which have absolutely zero overhead and always have had. I know you really only know what your HP marketing folks tell you, but you could step out from what you think you know and learn something new for once. By the way, even HP acknowledges overhead in their domaining technologies, though they won't say how much. You don't have to lie to make a point.

    Nice FUD piece HP Sales Grunt.

  9. Jesper Frimann

    RE: Re: RE: Efficiency

    Well... overhead can be calculated in many ways.

    Well, if you take Sun domains, HP's v/nPars and IBM's LPARs, what they basically do is divide the server into smaller pieces.

    This is good if you do not want to pay for running Oracle on all 8 CPUs your server has when you only need 4: you can make a partition that runs on only 4 CPUs, and install another partition on the remaining 4.

    Now the bad thing is that all the CPU time that isn't used is still wasted, so if you have an average utilization of 20% on both your 4-CPU partitions, then you actually have an overhead of... 80%.

    In comes SPLPAR, the shared-pool LPAR, which you can run on the POWER5(+) and POWER6 hypervisor. Here you truly virtualize your CPUs: CPU resources are seen as a pool from which you get what you are entitled to, set up by fairly simple rules. This means that I can keep on making virtual machines on the server and tap into that 80% idle CPU time from the example above, simply because if one virtual machine isn't using the CPU resources, another virtual machine can. Now this is virtualization, unlike LPARs, v/nPars or domains.
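    Roughly, the idea in a few lines of Python (a toy model, nothing like the real hypervisor; the partition sizes and demand figures are invented for illustration):

    ```python
    # Toy model: fixed partitions vs a shared CPU pool on the same 8 CPUs.
    CPUS = 8
    dedicated = {"lpar1": 4, "lpar2": 4}   # two fixed 4-CPU partitions
    demand = {"lpar1": 0.8, "lpar2": 6.0}  # CPUs' worth of work this interval

    # Fixed partitions: each one is clipped at its own CPU count,
    # even while its neighbour sits idle.
    done_fixed = sum(min(demand[p], dedicated[p]) for p in dedicated)

    # Shared pool: idle cycles are redistributed, so work is only clipped
    # by the physical machine as a whole.
    done_pool = min(sum(demand.values()), CPUS)

    print(f"fixed partitions: {done_fixed:.1f} of {sum(demand.values()):.1f} done")
    print(f"shared pool:      {done_pool:.1f} of {sum(demand.values()):.1f} done")
    ```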

    // Jesper

  10. Bill

    Re: RE: Re: RE: Efficiency

    You are quite correct, Jesper. IBM's use of SPLPARs is really why Sun went to Containers and LDoms (especially Containers). The problem with IBM's SPLPARs is that they have horrendous overhead. I've heard that 10% is not uncommon, and even more has been seen. Containers, on the other hand, hover around 0-2% at most. I'm not an expert on SPLPARs, but my understanding is that all of the issues, such as single points of failure, that could be attributed to Containers also affect SPLPARs. The nice thing is that, unlike HP, IBM actually tells you the actual amount of overhead (even if it takes a little digging and complaining).

  11. Jesper Frimann

    Re: Re: RE: Re: RE: Efficiency

    Hmm... Bill, you don't get it.

    Let me try to explain: it is not SPLPARs versus containers. AIX has its own version of containers, called WPARs, which is basically a clever extension of the workload manager. Furthermore, WPARs have the ability to move between machines, much like VMware's VMotion.

    So there is nothing wrong with the 'container/jail/WPAR/IVM' concept of running semi-isolated containers inside an OS image, but one size doesn't fit all.

    For example, I would never run an SAP test environment in a container/WPAR inside the same OS image that also held the production environment. It would be madness... not that I haven't seen it done.

    Now, the overhead: in my 4 years of working with SPLPARs I have, by the way, never seen anything near 10% (4-5% I've seen), and it is really not an issue. You have to compare it to LPARs/domains/v/nPars, where you have the same problem as with smaller rack-stack-and-pack'em machines: if you aren't using the CPU capacity inside a partition, then nobody else can use it and it goes to waste. On a machine running SPLPAR you can reuse the CPU resources if they aren't being used by the virtual machine they are allocated to.

    On a machine that runs SPLPAR you can easily reach 60-70% average utilization of the physical machine, compared to perhaps 15-25% on a machine that runs partitioned. And the 4-5% you are paying in overhead you get back tenfold.
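    Back-of-envelope, with Python standing in for a calculator (the utilization figures are just the round numbers from above, not measurements):

    ```python
    # Useful work out of the same 16 physical CPUs under the two schemes.
    cpus = 16
    partitioned_util = 0.20  # typical partitioned average, per the above
    splpar_util = 0.65       # achievable on a shared-pool machine
    overhead = 0.05          # the 4-5% SPLPAR cost

    useful_partitioned = cpus * partitioned_util          # 3.2 CPUs of work
    useful_splpar = cpus * splpar_util * (1 - overhead)   # ~9.9 CPUs of work
    print(f"{useful_partitioned:.1f} vs {useful_splpar:.1f} CPUs of useful work")
    ```

    Roughly three times the useful work out of the same iron; that's how the few percent pays itself back.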

    Let me give you an example from the real world: I'm currently in the process of analysing what we need to buy to replace 8 p690s with around 192 POWER4+ cores. Some of those machines will still do 30K+ TPMC per core, so even by today's standards they are fast. But I figure I can squeeze those 8 refrigerators into 2x16-, 3x12- or 4x8-way Power 570s, and still have room left for other consolidations.
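    For the curious, the sizing arithmetic goes something like this (the per-core speedup and utilization targets below are my own round assumptions for illustration, not the actual sizing data):

    ```python
    # Rough consolidation sizing in the spirit of the p690 example above.
    old_cores = 192      # eight p690s, POWER4+ (from above)
    old_util = 0.20      # assumed average utilization when partitioned
    speedup = 3.0        # assumed POWER6 vs POWER4+ per-core ratio
    target_util = 0.65   # assumed SPLPAR target on the new 570s

    needed = old_cores * old_util / (speedup * target_util)
    print(f"~{needed:.0f} POWER6 cores")  # ~20 cores
    ```

    So a 2x16-way (32 cores) carries the load with headroom to spare.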

    And if you really, really don't want to lose those few percent, then run your virtual machine in LPAR mode. Then you have the same limitations as domains and vPars.

    But I've never experienced a 2+ socket Power server, POWER5 or newer, that has run out of CPU resources. They will always run out of memory or IO first, because people size those resources too low.

    I would say, depending on your application (if, for example, you are running SAP with an Oracle database, plus test, education, production and development on the same machine), having around 32-48 GB of RAM per CPU (or core) is not a bad idea on, say, a Power 570.

    Now, is the PowerVM hypervisor a single point of failure? Yes, sure it is, but it is a lot, lot more stable than an OS. It's not a program or OS-like thingie that executes under the virtual machines; I would compare it more to a read-only shared library. Sure, an error in the library could bring down a virtual machine, but bringing down the whole machine is very unlikely.

    Also, I would have no problem having virtual machines on the same physical machine sitting in all four different zones of a firewall, apart from having to neuter the HMC network :)

    // Jesper

  12. Anonymous Coward
    Stop

    OUT OF THE FRYING PAN, INTO THE FIRE

    i.e. a migration from Unix hardware to Unix hardware

  13. Anonymous Coward
    Dead Vulture

    Virtualization Overhead? Sun FUD

    Don't forget that Sun's SPARC chips deliver only 20 to 30% of the performance of POWER6 chips.

    Unfortunately, most customers will gladly take 5% overhead on a chip which has 3X the performance.

    The Sun world is being RIF'd

    SPARC IV+ is end-of-life in January

    SPARC64 VII (a Fujitsu chip) has less performance per core than SPARC64 VI, so the performance-per-core comparison gets worse.

    Basically, Sun is on a more-cores-per-chip strategy that ignores performance per core and per-core software pricing.

    Give me a package and get me off this ship.

    allsopp

  14. Anonymous Coward
    Pirate

    Transitive - interesting but irrelevant...

    I went to Transitive, and speculated either Intel or IBM had their hand in. It's clever stuff, but it really falls far short of buying new Sun kit (and let's face it, a low-end Niagara box is cheaper than a QuickTransit license to run some old app on an E4500).

    Run it natively, you loser, and if your app isn't available for AIX, remind yourself it's because you're running a second-tier OS in the eyes of most vendors! If you desperately wanted off Sun, I guess it's an option - although the idea of running Solaris apps on IBM's gimp-masked AIX really makes my skin crawl.

    @allsopp 'Don't forget that Sun's Sparc chips are only 20 to 30% of the performance of Power6 chips'

    That's a single USIIIi. I've gone through as many equivalent benchmarks as IBM and Sun forget to play silly buggers with, and one POWER6 core is about 4x a USIIIi in real-world general use (i.e. slowed down by disk, RAM, etc.).

    It really, really, really depends on what you are doing and how you measure it; there are lies, damned lies and benchmarks... POWER6 @ 4GHz is certainly not faster than a 3GHz Xeon, and is substantially more expensive and power-hungry.

  15. Jesper Frimann

    What ?

    > I went to Transitive, and speculated either Intel or IBM had their hand in. It's clever stuff, but it really falls far short of buying new Sun kit (and let's face it, a low-end Niagara box is cheaper than a QuickTransit license to run some old app on an E4500).

    Well, I would clearly rather run things natively on Solaris than have them emulated on some other platform; that should be a no-brainer. But with Solaris and AIX both being POSIX 'nice guys', a port might be easy enough.

    And if we are talking about something like running SAP, Oracle, DB2 etc., then the problem shouldn't be that big.

    > Run it natively, you loser, and if your app isn't available for AIX, remind yourself it's because you're running a second-tier OS in the eyes of most vendors! If you desperately wanted off Sun, I guess it's an option - although the idea of running Solaris apps on IBM's gimp-masked AIX really makes my skin crawl.

    Well, calling AIX a second-tier OS is bull, and you should know it.

    If you want to know why many software vendors will try to sweet-talk you into running their app on Solaris, check out what they will be charging you in license fees. This includes IBM's own software sales people. They all love Solaris, even more than Windows.

    I mean, just running Oracle on a T5140 is a bloody 285K USD for the Enterprise Edition at list price.

    > It really, really, really depends on what you are doing and how you measure it; there are lies, damned lies and benchmarks... POWER6 @ 4GHz is certainly not faster than a 3GHz Xeon, and is substantially more expensive and power-hungry.

    What are you talking about? When you point a finger at someone, there are always three fingers pointing back at yourself. Let's take a benchmark whose results are actually used as input for real-life sizing: SAP.

    http://www.sap.com/solutions/benchmark/sd3tier.epx

    JS12, 1 chip with 2 POWER6 cores at 3.8GHz: 35,160 SAPS per core.

    BL680c, 4 chips with 4 Xeon cores each @ 2.4GHz: 10,638 SAPS per core.

    Now, if we correct for going to 6 cores per chip on SAP, we get:

    BL680c, 4 chips with 6 Xeon cores each @ 2.4GHz: 8,963 SAPS per core.

    Now, if I want to do 70K SAPS, I need a JS12 with one chip, or a two-chip BL680c.

    Now, the BL680c costs somewhere between $10,319 and $11,839:

    http://h71016.www7.hp.com/dstore/ctoBases.asp?oi=E9CED&BEID=19701&SBLID=&ProductLineId=431&FamilyId=2063&LowBaseId=&LowPrice=&familyviewgroup=1454&viewtype=Matrix&Matrix=

    The JS12 in a similar configuration is $4,687:

    http://www-03.ibm.com/systems/bladecenter/hardware/servers/js12/browse.html

    So it's cheaper. And no matter how you try to do your calculations, one POWER6 chip is not producing more heat than two Xeon chips.
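    If you want to check the arithmetic, here it is as a few lines of Python (SAPS-per-core figures and list prices exactly as quoted above; whether the configurations are otherwise comparable is another matter):

    ```python
    # Price-per-SAPS check using the numbers quoted in this thread.
    js12_saps_per_core, js12_cores, js12_price = 35160, 2, 4687
    bl_saps_per_core, bl_cores_per_chip = 8963, 6   # 6-core-corrected figure
    bl_price = 11839                                # upper quoted price
    target = 70000

    js12_total = js12_saps_per_core * js12_cores    # 70,320 SAPS on one chip
    bl_chips = -(-target // (bl_saps_per_core * bl_cores_per_chip))  # ceil -> 2
    bl_total = bl_chips * bl_saps_per_core * bl_cores_per_chip       # 107,556

    print(f"JS12:   {js12_total} SAPS, ${1000 * js12_price / js12_total:.0f} per 1000 SAPS")
    print(f"BL680c: {bl_total} SAPS, ${1000 * bl_price / bl_total:.0f} per 1000 SAPS")
    ```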

    // Jesper

This topic is closed for new posts.
