Server workloads to go '70% virtual' by 2014

By 2014, more than 70 per cent of all server workloads installed that year will be plunked down not on a bare metal piece of iron, but on a virtual or logical machine of some kind, according to the (virtual) box counters at IDC. But don't get the wrong idea. There are going to be plenty of old-fashioned physical boxes still …

COMMENTS

This topic is closed for new posts.
  1. Steven Jones

    Not that ambitious

    If 70% of workloads are going to be run in virtual machines, then that still implies the great majority of physical servers will be hosting dedicated, non-virtualised workloads. If we work on the basis of (say) seven VMs per physical machine (as that's reasonable and makes the maths easy), then out of every 100 system images, 30 will be on dedicated physical servers whilst the remaining 70 will be virtualised on 10 physical machines. That ratio would mean 75% of physical servers would be for non-virtualised use.
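    To put numbers on it, here's a quick back-of-the-envelope check (the 70% figure, the 7:1 consolidation ratio and the 100-image population are just the assumptions above, worked through in a throwaway Python snippet):

      # Back-of-the-envelope check of the ratio described above.
      total_images = 100        # example population of OS images
      virtual_share = 0.70      # IDC's 70% figure
      vms_per_host = 7          # assumed consolidation ratio

      virtual_images = total_images * virtual_share         # 70 VMs
      dedicated_physical = total_images - virtual_images    # 30 bare-metal images
      vm_hosts = virtual_images / vms_per_host               # 10 hypervisor hosts

      physical_boxes = dedicated_physical + vm_hosts         # 40 physical servers
      print(f"Non-virtualised share of physical servers: "
            f"{dedicated_physical / physical_boxes:.0%}")    # -> 75%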

    One issue with virtualising (at least on x86/x64 hardware) is that the relatively small capacity of VMs can lead to "VM sprawl" as more instances are required for the same workload. That can lead to a big increase in support effort with many more OS images to maintain, configure and support.

    1. Random_Walk
      Coat

      Yes, no, maybe... :)

      Currently, @work we're about 85% virtualized (not counting the ESX/i boxes themselves obviously), and so far, no major problems.

      VM sprawl (IMHO, and in my experience) often happens not because workloads have to be spread out, but more often than not because vendors, other departments, and even other sections within IT always want "Yet Another VM", yet treat them as semi-disposable (or worse, forget they already have a near-dozen that they never/rarely use, or in the most incompetent of cases, try to use VMs as some sort of clunky unofficial code versioning system... urgh).

      ...Miss? Mine's the one with a backup copy of the vCenter DB in it.

    2. Lusty

      RE: Not that ambitious

      "the relatively small capacity of VMs"

      If you consider 8 cores and 255GB of memory to be a small footprint for a server, then either you are unaware of what monitoring software is telling you, or you are working with quite specialised servers.

      Not directed at you, Steven, but it may be useful to those who are curious: based on the many, many capacity planning exercises I've done, the average workload these days is happy with 2 cores and 4GB of memory. If a machine seems sluggish with this configuration, have a look and see whether your average disk queue length is larger than 4 - that is usually the bottleneck :)
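      If you want a quick way to eyeball that, here's a rough sketch (Linux-only, sampling the in-flight I/O count from /proc/diskstats; on Windows the rough equivalent is the "Avg. Disk Queue Length" perfmon counter, and the threshold of 4 is just the rule of thumb above):

        #!/usr/bin/env python3
        # Rough sketch: sample the "I/Os currently in progress" field of
        # /proc/diskstats a few times and report the average per device.
        import time
        from collections import defaultdict

        SAMPLES = 10
        totals = defaultdict(int)

        for _ in range(SAMPLES):
            with open("/proc/diskstats") as f:
                for line in f:
                    parts = line.split()
                    if len(parts) < 12:
                        continue                # skip short/old-format lines
                    dev, in_flight = parts[2], int(parts[11])  # stats field 9
                    if not dev.startswith(("loop", "ram")):
                        totals[dev] += in_flight
            time.sleep(1)

        for dev, total in sorted(totals.items()):
            avg = total / SAMPLES
            flag = "  <-- possible disk bottleneck" if avg > 4 else ""
            print(f"{dev:10s} average queue ~{avg:.1f}{flag}")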

      1. JEDIDIAH
        Linux

        Small frys and whatnot...

        2 cores and 4GB barely qualifies as a DESKTOP machine these days.

        If your "servers" are so puny, then it is little wonder you virtualize them.

      2. Steven Jones

        Big VMs

        Large VMs can cause big issues with load balancing in a typical farm. Also, a vCPU is not a real CPU: you can't expect the same sort of throughput, especially on a large VM, given the issues involved with core scheduling and so on. It's a lot better than it was when cores were strictly co-scheduled, but if you can effectively get the throughput of a 4-physical-core machine out of an 8 vCPU guest in a mixed workload environment, then you are doing well, unless you run things with very low contention levels.

        As we have some physical workloads running on 48 x64 cores, and a large number of 24-core x86 workloads, it is more efficient to go physical in many cases. Environments like J2EE are already virtual machines in their own right, so creating more OS images is not always the best thing to do. Also, the cost of ESX can't be ignored.

        VMs have their place. However, they are not the answer to all things, and if the penalty is generating many more operating system images to manage, they can be positively bad.

  2. Mike Shepherd
    FAIL

    Tell me

    Tell me why I'm supposed to care.

  3. Anonymous Coward
    Pint

    "kill tape backup once and for all"

    "If anything could possibly kill tape backup once and for all, virtual server failover just might be the thing to do it."

    Do readers understand the difference between backups and RAID? I'm not sure the author does.

    You have RAID for continuity of data availability when a disk fails.

    Backups cater for something different.

    Same for server failover by virtualisation.

    If a misbehaving piece of software screws up the order processing or payroll or even the new content for the website, neither RAID nor server virtualisation is any use at all. When that happens (not if), backups (tape or otherwise) are what you need. RAID is probably important too. But probably not virtualisation, and definitely not paid-for virtualisation at VMware prices.

    In the small office market it's probably just as effective to buy a 2nd server and have it as an on-site spare which can be used for expansion (or testing) when needed.

    Merry Christmas.

    PS: I do realise one could keep a selection of archive copies of the server image, once a day or whatever, which is basically equivalent to tape backup.
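    Something as simple as this would do it (a minimal sketch only - the paths and the 14-day retention are made-up placeholders, not a recommendation):

      #!/usr/bin/env python3
      # Minimal sketch of rotating daily archive copies of a server image.
      # SOURCE, DEST and KEEP are illustrative placeholders.
      import shutil
      from datetime import date
      from pathlib import Path

      SOURCE = Path("/exports/server.img")   # hypothetical image to protect
      DEST = Path("/mnt/archive")            # hypothetical archive location
      KEEP = 14                              # days of history to retain

      DEST.mkdir(parents=True, exist_ok=True)
      target = DEST / f"server-{date.today():%Y%m%d}.img"
      shutil.copy2(SOURCE, target)           # today's copy

      # Drop the oldest copies beyond the retention window.
      for old in sorted(DEST.glob("server-*.img"))[:-KEEP]:
          old.unlink()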

    1. Random_Walk
      Thumb Up

      Agreed.

      I was kind of wondering that too... especially if/when the SAN where the datastores live ever goes titsup.

      Now if he was talking about VTLs, I could understand... but anyone who relies on (SAN-defined) snapshots or RAID for their backup (*snort*) should be frogmarched to the tape library and educated on its use.

    2. Wize

      Well said.

      Delete something off a RAID drive and it's gone.

      Users want back a file they deleted (or accidentally overwrote with junk) three months ago. No virtual server or RAID array will ever solve that one.

  4. Anonymous Coward
    Coat

    Software Licensing will have to change

    Several of the top commercial software companies currently charge licences based upon the TOTAL capacity of the system. Their reasoning: "You might run our app on all cores, so just in case, you are going to pay for capacity you may never use - but you want to use our crap, so give us the money."

    So who in their right mind would walk headlong into this mess?

    The majority of PHBs who want to be 'in the cloud' because it sounds cool.

    AFAIK, only IBM with its LPARs and the underlying H/W comes close to getting it right.

    So rock on, Mainframe Clouds.

    Otherwise, why bother with all this fluff? (Fluffy clouds...)

    Ok, I'm outahere.

    1. Joshua Goodall
      Troll

      why not come out and just say

      "screw you larry"

  5. Henry Wertz 1 Gold badge

    Virtualization...

    Makes sense in cases where departments or customers want their own OS images, and for cases *cough*Windows*cough* where they keep using these apps that want one app per server and don't have the concept of chroot jails and the like, but where the application is not going to load the server down enough to really need its own server.

    Of course virtualizing everything doesn't make sense -- if a single VM is going to be heavy enough that you'd only put one VM on a server, you might as well run it on the bare metal and avoid the overhead of virtualization.

    Agreed w/ AC -- IBM got this right with LPARs like 30+ years ago. I have to laugh about those who think virtualization and "cloud computing" are anything new.

    1. Anonymous Coward
      Anonymous Coward

      Re: Virtualization...

      "Of course virtualizing everything doesn't make sense -- if a single VM is going to be heavy enough that you'd only put one VM on a server, you might as well run it on the bare metal and avoid the overhead of virtualization."

      For the most part, you're dead-on right. However, I have seen environments where it makes more sense to place a single VM on a given server rather than use bare metal, as it gives flexibility for migrating those servers. If I want to move a physical server between boxes, it's a pain and will involve an outage. If I virtualize that server on the bare metal, though, I can move it from one server to another relatively painlessly, provided the VM uses shared storage.

      Well, painlessly other than the license cost for the *Motion capability in your hypervisor of choice.

  6. Anonymous Coward
    FAIL

    It's about the software stupid

    Until the likes of Oracle change their licence model, VMs won't take off for those systems.

    If I buy a 6-proc box and want to run a few web servers with dedicated procs and (say) 2 procs for Oracle, I still have to pay for every core in the box on the Oracle licence, which makes the VM route far more expensive than individual boxes.
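    To put rough numbers on it (the per-core price and the cores-per-proc count are purely hypothetical, and real Oracle terms involve per-processor licences and core factors, but the shape of the problem is the same):

      # Hypothetical illustration only: the licence price per core is invented.
      PRICE_PER_CORE = 10_000          # made-up licence cost per core

      cores_in_shared_box = 24         # the whole 6-proc box, assuming 4 cores per proc
      cores_oracle_actually_uses = 8   # the 2 procs you'd dedicate to Oracle

      cost_if_virtualised = cores_in_shared_box * PRICE_PER_CORE           # 240,000
      cost_on_dedicated_box = cores_oracle_actually_uses * PRICE_PER_CORE  #  80,000

      print(f"Licensing the whole shared box:   {cost_if_virtualised:,}")
      print(f"Licensing a dedicated 2-proc box: {cost_on_dedicated_box:,}")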

    Mine's the one with loads of real tin.

  7. Ammaross Danan
    Dead Vulture

    70%

    I would assume that the 70% VMs are /actual/ VMs, and thus the remaining 30% would be a mix of single-OS installs and the more fringe "cloud" setups? Or would a server joined into a distributed cluster count toward the "VM" figure?

    /reg, where's my "?" icon?

  8. Pete 2 Silver badge

    Welcome back to the Mainframe

    > more than half of all installed application instances will run inside a virtual machine. This has profound implications ...

    All it really means is that applications will run under a 21st-century version of MVS (or z/OS, as it has evolved into). Nothing much changes - at least not for those of us familiar with BIG computers (physically, that is). All we can hope for is that the disciplines, training and processes needed to keep these virtual environments running with six-nines uptime can be relearned by the next generation of operators.

  9. Lusty
    Boffin

    curious

    The use of the word iron rather than tin in the article got me wondering: what percentage of machines globally was virtualised before VMware came along? I'd be surprised if it wasn't higher than now, considering that mainframes and the like made up a higher percentage of servers then than they do now. In fact, I suspect the first few years of VMware marked an all-time low for virtualisation, given the boom in Wintel at the time.

  10. Mikel
    Thumb Up

    Too many topics, too many options

    I'm seeing the "If it won't run in a VM, you can't have it" policy being implemented in more places these days in larger organizations. Running one service per VM simplifies software lifecycle management and reduces potential conflicts. I'm still getting the question "Why shouldn't I also run services on my VM host's native OS?" - particularly from folks new to this model.

    Oracle... what are we going to do about Oracle... Well, eventually they'll price themselves out of the market. They'll milk their existing accounts to death forever of course, like Novell did, but nobody in his right mind is going to sign up for that abuse as a new account.

    Back-end storage is critical, and the migration from Fibre Channel to iSCSI is more of a stampede at this point. Actual performance metrics matter here. A lot of folks are going to take offense at this, but it is what it is. The beautiful architecture and reliability of FC doesn't override the fact that you can get much more reliability by having massive redundancy and geographic separation in your stores using iSCSI, for far less - and it doesn't take a fibre guru to set it up. The FC era is coming to a close for the same reason that Oracle's is - they're excessively fond of it, and people are becoming less willing to sit still for the value-building story. They have lives they want to get back to.

    And then there's backup. Tape is truly dead. Synology is today offering a 15TB NAS that scales to 45TB raw (about 38TB usable) and starts at $900: http://www.pcworld.com/businesscenter/article/214523/synology_announces_new_15tb_nas_device.html Various technologies can turn this into iSCSI with unlimited thin snapshots, waypoints, or whatever you want to call them. Any systems analyst can figure out a way to build a reliable backup system with that, and it's only one example of hundreds. Another interesting option is the Backblaze box with 135TB of raw storage in 4U (1.3PB/rack), and if you build it right it can host VMs too.

    Yes, people are still confusing High Availability, Disaster Recovery, and backup. They always will.

    In the Enterprise and SMB space I'm seeing more people willing to try new things than ever before. The pace of change is stepping up. I think that's a good thing.
