EVO:FAIL as VMware quietly shelves its EVO:RAIL recipe

VMware launched VSAN 6.2 yesterday, proclaiming it as the company's new hyperconverged software – and repeating the claim that VSAN adoption makes it number one in the hyperconverged market. Which is all well and good, save for the fact there's one thing missing: namely, EVO:RAIL, the software-defined recipe for roll-your-own …

  1. Anonymous Coward

    New might not be better

    If a rack server and a SAN do the job well, buying a replacement rack server and SAN might be the best option. So many people in IT seem to think that something new and whizzy will be better just because the vendor marketing says so. This and the vCloud Air fiasco show that new and whizzy need to come with an actual use case. AWS and Azure show those new use cases, yet many are trying to crowbar their old paradigms into those platforms.

    Write down your requirements.

    Find solutions which meet those requirements.

    Score the solutions.

    Change solutions only if the new whizzy one scores higher for reasons which meet your requirements.

    Whizzy is not a requirement for anyone over 5 years old.

    1. Anonymous Coward

      Re: New might not be better

      I have seen the "upcoming product", and it's pretty good. There is truth to what you're saying, and I mostly agree, but there is a definite use case for this new one. Growth is much easier and things snap together, at both the hardware and software layers.

      The new appliances are most definitely changing the way IT is purchased. Lots of companies are going to go hard at it in 2016 - hyperconverged will be a bright spot in an otherwise gloomy IT landscape.

  2. DavidCarter

    I saw EMC's presentation the other day on VxRail. It's still too expensive for SMEs or remote/branch offices, and they won't make small-spec nodes for it.

    1. Anonymous Coward
      Anonymous Coward

      What was the cost?

      Did they present a cost? What were the ranges?

  3. JohnMartin

    New is better sometimes and usually wins in the end (by which time it's not new anymore)

    - Disclosure: NetApp employee, opinions are my own, etc. -

    The big advantages of hyperconverged, at least in theory, are that the scaling units for your IT infrastructure are:

    standardised - which reduces the need for planning

    small - which reduces costs and risk

    easily obtained - which allows just-in-time purchasing

    easy to install and maintain - which reduces planning, risk and time

    If you do the economics of an HCI solution vs a storage-less-server-plus-shared-storage-array solution, most of the HCI solutions come out more expensive for the same set of workloads. Much of the modelling depends on where a customer already has sunk costs: if you already have the VSAN licenses as part of an ELA and you've just lost your storage admin, HCI starts looking really good for a whole stack of workloads. Conversely, if your DR and backup strategy has already been optimised around advanced array functionality and you've implemented a lot of storage automation features, then HCI requires a lot of wheel re-invention to cater for the bigger overall picture. The real answer for the moment isn't one or the other but more a case of "why not have both" (assuming you're happy to stump up for a scalable 10Gb fabric and convince the network guys that spine-leaf isn't so evil after all).
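
    A back-of-the-envelope model makes the sunk-cost point concrete (a minimal Python sketch; every price and node count below is a made-up placeholder, not a real quote):

    # Toy cost comparison - all figures are illustrative placeholders, not vendor quotes.
    def hci_cost(nodes, node_price=25000, license_per_node=5000, licenses_sunk=False):
        """HCI: identical nodes, plus a per-node software license (zero if already covered by an ELA)."""
        licenses = 0 if licenses_sunk else license_per_node * nodes
        return nodes * node_price + licenses

    def array_cost(nodes, server_price=15000, array_price=120000):
        """Traditional: storage-less servers plus one shared array at a fixed cost."""
        return nodes * server_price + array_price

    for n in (8, 16, 32):
        print(n, hci_cost(n), hci_cost(n, licenses_sunk=True), array_cost(n))

    Plug in your own numbers; the crossover point moves a long way depending on which of those costs are already sunk.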

    Putting aside the arguments around purchasing economics and the merits of technical incumbency, what you get from HCI is generally easier to scale and maintain than a traditional infrastructure where the network, server and storage teams spend more time on turf wars than they do solving business problems.

    The trouble is that the HCI standardisation process almost always results in one part of the infrastructure being over-provisioned (aka wasted), because most people haven't, or won't, implement the HCI infrastructure at sufficient scale to benefit from the law of large numbers. People talk a lot about HCI being hyperscale, but IMHO it's still got a long way to go before most people can safely deploy hundreds, let alone tens of thousands, of HCI nodes in a datacenter. For the moment, ten to fifty is about what I see at most places.

    From some rather limited reviews of the literature and talking to folks (not exactly an exhaustive analysis), usually it's the network fabric (which is the most expensive and highest-margin part) that gets over-engineered, followed by storage capacity, followed by CPU. The only thing people can't seem to get enough of is RAM, but that might be because a lot of the HCI stuff I've seen has been deployed for VDI. I think that's why the pre-configured EVO appliances have traditionally been so expensive.

    The net result is that, for a lot of people, the scaling units (particularly in EVO:RAIL) simply haven't been small enough, and the only way to drive the necessary efficiency seems to involve a trade-off on the standardisation aspects.

    In theory, composable infrastructure is supposed to address a lot of those shortcomings, though that tends to mean you lose white-box economics and fungibility. It will be interesting to see if the federation appliance addresses that.
