There is a number 2, but it's a long way down there; it's my wake that's cooling them off.
"And when we slice up Power servers we do not get 100% of the CPU power available to the instances, so please tell me where it's going if not in the virtualisation?"
You still don't get it, do you? Let me try with a little picture/example.
We have an older machine. It could be an E25K, an SD or a p690, it really doesn't matter. On all of these machines we can statically carve the machine (or parts of it) up into 8 chunks of, let's say, 4 processor cores each.
We then have 8 applications that happily crunch along inside each little partition at their usual lousy, let's say, 20% average utilization.
Now we replace the machines with an SD-2 running one 32-core IVM nPar with 8 guests of 4 processors each, and a p780 with 8 virtual machines of 4 virtual processors each. But hey, why not exploit the fact that we can overcommit the machine? On IVM we quickly follow HP best practice (or so my HP-UX guys call it, and they might be wrong) and do a 50% overcommitment, adding 4 more 4-core guests and raising the average utilization of the machine to 30%.
On the Power 780 I quickly add 10 more virtual machines with 4 virtual cores each (that is my standard overcommitment factor for that machine), raising the utilization to 44%.
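For the bean counters, the consolidation arithmetic can be sketched in a few lines of Python. This is my own back-of-the-envelope model, assuming every guest keeps averaging 20% busy on its 4 virtual cores after the move; the 18-guest case comes out at 45%, close enough to the 44% quoted above given rounding and accounting differences.

```python
# Back-of-the-envelope consolidation model. Assumption (mine): every guest
# keeps averaging 20% busy on its 4 virtual cores after consolidation.
GUEST_VCPUS = 4
AVG_GUEST_UTIL = 0.20
PHYS_CORES = 32

def machine_util_pct(n_guests):
    """Average physical-core utilization with n_guests overcommitted guests."""
    busy_cores = n_guests * GUEST_VCPUS * AVG_GUEST_UTIL
    return 100.0 * busy_cores / PHYS_CORES

print(round(machine_util_pct(8)))   # the original 8 static partitions
print(round(machine_util_pct(12)))  # IVM with 50% overcommit
print(round(machine_util_pct(18)))  # p780 with 10 extra guests
```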
And there was much rejoicing in both the HP-UX group and the AIX group, and they all went to drink the Wintel guys under the table, because that is what they did on a Friday afternoon.
And you ask what the overhead is? Ehh... it's huge in both cases. Negative overhead, that is, as I get much more work done.
Is there a penalty? Sure there is, just as there is a kernel penalty for running more than one process on a multiprocessing kernel. But hey, you do sound like that punch-card mainframe guy from the '60s who insists on running a single task on a single machine. Wake up, dude.
"Oh, you mean 8192 shared in one OS instance - nice isolation of any software fault there! So, one memory error and you lose 8192 applications at once - great design! "
"Memory errors have always been a problem on HP Unix machines", says one of my friends who used to work in HP's support org. I don't agree. But since you keep talking about it, perhaps there is something to it?
WPARs give pretty good isolation. Sure, it's not full OS software-stack isolation, but it is pretty good. The isolation ladder we normally work with looks like this: same OS, rsets isolation, WPARs, virtual machine, physical machine. The further down the ladder you go, the better the isolation, but the price goes up too.
And still, HPVM is an HP-UX instance with guests running inside it. Kind of like VMware in the old days, right?
And overhead, let's see... on a 2 TB Power 780 the memory overhead will be... 41-77 GB for a fully loaded machine. The latter number is with VIO and all, and with every partition able to grow to twice its memory capacity (max_mem = 2 x des_mem).
For an SD-2 with 2 TB inside one IVM with max memory used, the memory overhead is... 321 GB. Wooohh... man, I understand why you want to talk about overhead. First 8% overhead, then 8.3% again... man... sure is a good solution. So a factor of 4-8 in overhead... sure... IVM rulez. *CACKLE*
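And just so nobody accuses me of hand-waving, here is the arithmetic behind that factor of 4-8, a quick sketch using the GB figures above against a 2 TB (2048 GB) machine:

```python
# Memory overhead as a share of a 2 TB (2048 GB) machine, using the
# figures quoted above.
TOTAL_GB = 2048
P780_LOW_GB, P780_HIGH_GB = 41, 77   # Power 780 overhead range
SD2_GB = 321                         # SD-2 / IVM overhead

for name, gb in (("p780 low", P780_LOW_GB),
                 ("p780 high", P780_HIGH_GB),
                 ("SD-2", SD2_GB)):
    print(f"{name}: {100 * gb / TOTAL_GB:.1f}% of the machine")

# Ratio between the two platforms: roughly the factor of 4-8 claimed above.
print(round(SD2_GB / P780_HIGH_GB, 1), "to", round(SD2_GB / P780_LOW_GB, 1))
```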
"Well, seeing as Pseries partitioning has just about caught up with where the Integrity range were eight years ago (and still hasn't matched Integrity on true hardware partitioning), "
No, they haven't caught up on the overhead thing. And you don't get it: we don't want hardware partitioning. We have no need for it; it's a waste of resources. Why would we want to carve your server up into what could just as well be cheaper machines?
"And then hp's new Integrity designs fit into those hp blade chassis that have been caning the IBM blades for years"
Yeeesss... let's order a high-end server that uses the same components as the cheapest blade system around... Yeah, right... *cough* *cough* Hopefully customers aren't that stupid.
"when will IBM catch up with hp and offer the advantages of embedded switches and tools like Virtual Connect for anything above the bottom end of the pSeries range?"
Eh, an embedded switch? What for? I use virtual networks (not to be read as VLANs) inside the machine. You know, LAN-in-a-can style. If I want to go outside I'll use a SEA adapter (a software virtual switch) or an HEA (a hardware virtual switch).
"No, I don't want to use a punch-card reader... I have a removable hard drive"... "What, are hard drives not secure because you cannot read the bits manually?"...
"do you think they'll be able to do it with a better CPU than the crippled one they had to put in the P7 blades becasue they can't make the current blade chassis handle the cooling and power required for the real P7 chips?"
*CACKLE*
It's like shooting a fish in a barrel... at point-blank range, with a shotgun.
Let's see...
IBM PS702: 2 sockets and 16 cores, takes up 2 slots, does 520 specINTrate2006.
HP i860c i2: 2 sockets and 8 cores, takes up 1 slot, does 134 specINTrate2006.
HP i870c i2: 4 sockets and 16 cores, takes up 2 slots, does 269 specINTrate2006.
HP i890c i2: 8 sockets and 32 cores, takes up 4 slots, does 531 specINTrate2006.
Yeah... the i890c i2 wins! But wait... you can only fit 2 of those in a 10U c7000 chassis, while you can fit 7 PS702s in a 9U BladeCenter H. That is a compute density of 106 specINTrate2006 per U for the i890c i2 versus 404 per U for the PS702... ohh... ohh...
Power usage then... HP is good at that, right? Let's see, the i890c i2 uses... 3184 watts max power... the PS702 only 700 watts... ARGH. What about the i870c i2 then? 1592 watts? And the i860c i2? 796 watts? How can this be?
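Do the math yourself if you don't believe me. A small Python sketch over the quoted specINTrate2006 scores, chassis sizes and wattages (figures as quoted above; only the arithmetic is mine):

```python
# Compute density (specint per rack unit) and efficiency (specint per watt)
# from the blade figures quoted above.
blades = {
    "PS702":    {"specint": 520, "per_chassis": 7, "chassis_u": 9,  "watts": 700},
    "i890c i2": {"specint": 531, "per_chassis": 2, "chassis_u": 10, "watts": 3184},
}
for name, b in blades.items():
    density = b["specint"] * b["per_chassis"] / b["chassis_u"]
    efficiency = b["specint"] / b["watts"]
    print(f"{name}: {density:.0f} specint/U, {efficiency:.2f} specint/W")
```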
Price then... HP blade products are cheap!!!!!
Yeah... a PS702 with 16 cores, AIX, 32 GB RAM and 2 disks is 196K DKK. Woo, that is expensive...
Let's see... hmm, there it is: the i890c i2 with HP-UX, 32 GB RAM, 2 disks and 32 1.73 GHz cores is 809K DKK. WHAT? Wait, let's take cheaper cores... 1.33 GHz, that's gotta be cheaper... what, 527K DKK? Basically you need the i860c i2 with 8 cores to beat the PS702 price, and then only by 30K DKK. But that is 2 Tukwilas versus 2 POWER7s, and we all know who is faster there.
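And per unit of performance it only gets worse for HP. A sketch over the quoted list prices, with two assumptions flagged: the i860c i2 price is derived from "beats the PS702 by 30K", i.e. roughly 166K DKK, and I reuse the 531 score for the cheaper 1.33 GHz i890 cores, which is generous to HP:

```python
# DKK per specINTrate2006 point, from the list prices quoted above.
# Assumptions: the 166K i860c i2 price is derived (196K - 30K), and the
# 1.33 GHz i890 score is taken as equal to the 1.73 GHz one (generous).
configs = {
    "PS702 (16 x POWER7)":       (196_000, 520),
    "i890c i2 (32 x 1.73 GHz)":  (809_000, 531),
    "i890c i2 (32 x 1.33 GHz)":  (527_000, 531),
    "i860c i2 (8 x Tukwila)":    (166_000, 134),
}
for name, (price_dkk, specint) in configs.items():
    print(f"{name}: {price_dkk / specint:.0f} DKK per specint point")
```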
Although you'll just cook up some witches' brew about benchmarks to cloud the issue.
Again, when your competition is so far in front of you that you can't even see what is going on...
// Jesper says have a nice weekend