New is better sometimes, and usually wins in the end (by which time it's not new anymore).
- Disclosure: NetApp employee; opinions are my own, etc. -
The big advantages of hyper-converged infrastructure, at least in theory, are that the scaling units for your IT infrastructure are:
standardised - which reduces the need for planning
small - which reduces costs and risk
easily obtained - which allows just-in-time purchasing
easy to install and maintain - which reduces planning, risk and time
If you do the economics of an HCI solution vs a storage-less-server-plus-shared-storage-array solution, most HCI solutions come out more expensive for the same set of workloads. Much of the modelling depends on where a customer already has sunk costs. For example, if you already have the VSAN licenses as part of an ELA and you've just lost your storage admin, HCI starts looking really good for a whole stack of workloads. Conversely, if your DR and backup strategy has already been optimised around advanced array functionality and you've already implemented a lot of storage automation, then HCI requires a lot of wheel re-invention to cater for the bigger overall picture. The real answer for the moment isn't one or the other, but more a case of "why not have both?" (assuming you're happy to stump up for a scalable 10Gb fabric and convince the network guys that spine-leaf isn't so evil after all).
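To make the sunk-cost point a bit more concrete, here's a back-of-the-envelope sketch of the two purchasing models. Every price, capacity and ratio here is invented purely for illustration; the shape of the curves, not the numbers, is the point:

```python
# Toy cost comparison: HCI nodes vs servers + shared storage array.
# All prices and capacities below are hypothetical, for illustration only.

def hci_cost(workload_vms, vms_per_node=50, node_price=40_000):
    """HCI scales in whole nodes; you pay for a full node even if it's partly idle."""
    nodes = -(-workload_vms // vms_per_node)  # ceiling division
    return nodes * node_price

def traditional_cost(workload_vms, vms_per_server=60, server_price=25_000,
                     array_price=250_000, array_vm_capacity=2_000):
    """Servers scale in small units; the array is a large up-front (sunk) cost."""
    servers = -(-workload_vms // vms_per_server)
    arrays = -(-workload_vms // array_vm_capacity)
    return servers * server_price + arrays * array_price

for vms in (100, 500, 2000):
    print(f"{vms:>5} VMs: HCI ${hci_cost(vms):>9,}  "
          f"traditional ${traditional_cost(vms):>9,}")
```

With these made-up numbers, HCI wins easily at small scale (no big array to amortise), while the shared array wins once the workload grows enough to absorb its up-front cost - which is roughly the dynamic described above: the answer depends on where your sunk costs already sit.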
Putting aside the arguments around purchasing economics and the merits of technical incumbency, what you get from HCI is generally easier to scale and maintain than a traditional infrastructure run by an organisation where the network, server and storage teams spend more time on turf wars than they do solving business problems.
The trouble is that the HCI standardisation process almost always results in one part of the infrastructure being over-provisioned (i.e. wasted), because most people haven't, and won't, implement HCI at sufficient scale to benefit from the law of large numbers. People talk a lot about HCI being hyper-scale, but IMHO it's still got a long way to go before most people can safely deploy hundreds, let alone tens of thousands, of HCI nodes in a datacenter. For the moment, ten to fifty is about what I see at most places.
From some rather limited reviews of the literature and talking to folks (not exactly an exhaustive analysis), it's usually the network fabric (which is the most expensive and highest-margin part) that gets over-engineered, followed by storage capacity, followed by CPU. The only thing people can't seem to get enough of is RAM, but that might be because a lot of the HCI deployments I've seen have been for VDI. I think that's why the pre-configured EVO appliances have traditionally been so expensive.
The net result is that for a lot of people the scaling units (particularly in EVO:RAIL) simply haven't been small enough, and the only way to drive the necessary efficiency seems to involve a trade-off on the standardisation aspects.
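A toy calculation shows why fixed-ratio scaling units strand resources. The node specs and workload demand below are invented (a deliberately RAM-heavy mix, VDI-style); the mechanism is what matters: the most constrained resource dictates the node count, and everything else gets over-provisioned:

```python
# Toy model: fixed-ratio HCI nodes vs a RAM-heavy workload (e.g. VDI).
# Node spec and demand figures are invented for illustration.

NODE = {"cpu_cores": 24, "ram_gb": 256, "storage_tb": 10}     # per node
DEMAND = {"cpu_cores": 120, "ram_gb": 4096, "storage_tb": 30}  # total needed

# Node count is driven by whichever resource runs out first.
nodes = max(-(-DEMAND[r] // NODE[r]) for r in NODE)  # ceiling division

for r in NODE:
    provisioned = nodes * NODE[r]
    stranded_pct = 100 * (provisioned - DEMAND[r]) / provisioned
    print(f"{r:>10}: {nodes} nodes provide {provisioned}, "
          f"demand {DEMAND[r]}, {stranded_pct:.0f}% stranded")
```

Here RAM forces 16 nodes, so RAM is fully used while the bundled CPU and storage sit roughly two-thirds and four-fifths idle - exactly the kind of over-provisioning that only large-scale, mixed workloads can average away.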
In theory, composable infrastructure is supposed to address a lot of those shortcomings, though that tends to mean you lose white-box economics and fungibility. It will be interesting to see if the federation appliance addresses that.