Violin: Don't weep for Tier 1 storage... it'll soon be flashtastic

Flash will replace all enterprise tier 1 storage, and shared arrays will prevail over server-side SANs, while network latency remains a solvable computer science problem, argues Violin Memory CEO Kevin DeNuccio. The market and technology shock waves of cloud, virtualisation and flash are coming Violin's way, with the all- …

  1. Anonymous Coward

    that's a big bet

    The idea that software can fill the gaps is spot on - everything missing from Violin is software, aside from a lower price tag - the hardware is top notch. The problem is, as an enterprise customer, why wouldn't I buy that software from the guys with decades of experience in flash and disk (NetApp, for instance)? Software is very hard to get right, and the new entrants into this market are struggling with issues that were solved years ago by the big boys. Adding dedupe and replication has been a priority, but the real money is in the rest of the massive software ecosystem, such as backup, and a workable backup suite integrated into your array is no easy thing to add, especially once people start asking compliance questions.

    I like Violin, I really do, but in my opinion I'd like them a whole lot more as an R&D company that licenses cool tech to storage vendors, rather than an R&D company that locks down technology and refuses to play with others. All they are doing is delaying that cool tech while the other vendors work around their patents.

  2. ToddR

    Hmm, VMAX out, Violin in?

    So I should put my enterprise b******s on the line for a nine-year-old startup that is still burning cash and has no OEMs? No thanks.

  3. ToddR

    Sustained writes flash sucks

    Replacing ALL tier 1 spinning stuff with flash is nonsense, as you can't get around the sustained write problem that all flash has versus DRAM.

    1. Nate Amsden

      Re: Sustained writes flash sucks

      How does 8PB of writes for an SSD sound? That's what's behind HP's five-year unconditional warranty on 3PAR SSDs.

      Deploying my 7450 in about a week or two.

      Violin is 2 years too little too late

      1. El Storage Guy

        Re: Sustained writes flash sucks

        Nate, you missed the point earlier... it is NOT about how much you can write to an SSD. It's about "sustained write BW". Maybe it fits your environment (low write BW), but for many others, SSD/flash can't provide that.
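
        A rough back-of-envelope sketch (illustrative only, using the 8PB figure quoted above plus an assumed 1 GB/s workload) of why an endurance budget and sustained write bandwidth are different questions:

        ```python
        # Back-of-envelope: if an SSD warranty covers ~8 PB of writes over
        # five years, what average write rate would consume that budget?
        PETABYTE = 10**15                       # decimal bytes, as drive vendors quote
        endurance_bytes = 8 * PETABYTE          # assumed endurance budget (8 PB)
        warranty_seconds = 5 * 365 * 24 * 3600  # five-year warranty period

        avg_mb_per_s = endurance_bytes / warranty_seconds / 10**6
        print(f"Average rate to exhaust 8 PB in 5 years: ~{avg_mb_per_s:.0f} MB/s")  # ~51 MB/s

        # A workload sustaining writes at an assumed 1 GB/s would burn the
        # same budget in roughly three months.
        days_at_1gb_s = endurance_bytes / 10**9 / 86400
        print(f"Time to write 8 PB at a sustained 1 GB/s: ~{days_at_1gb_s:.0f} days")  # ~93 days
        ```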

      2. Anonymous Coward
        Anonymous Coward

        Re: Sustained writes flash sucks

        So is it 8PB or is it unconditional?

  4. Cthugha

    A bit of flash nonsense

    always brightens up a Monday...

  5. Trevor_Pott Gold badge

    All-flash network arrays are going to beat server SANs? Wha?

    1) Server SANs can use things like "memory channel storage" that provide latency traditional SANs can only dream of.

    2) The "interconnect problem" with server SANs is the exact same problem that traditional SANs have...with the differencing being that server SANs can get around scaling largely by switching to multicast. Traditional SANs and scaling are...more unique.

    Arrays may never go away entirely. There will probably always be room for them as a means of bulk storage. But in the long run, server SANs are going to be hard to beat. Centralized storage was a bandaid. The best solution is always to have the data as close to the processing as is feasible.

    1. Nate Amsden

      The best solution will vary, of course. For me, higher priority goes to highly reliable, proven systems. Server-side SAN sounds nice on paper, but to me it's still too complex to build a solid system on; in a few years maybe it'll get there.

      Forgive typos am on my phone

      1. Trevor_Pott Gold badge

        Each person has a different risk envelope. I have lived and breathed server SANs for the past two years and thus they don't seem at all complex to me. Certainly no more so than Fibre Channel and LUNs!

        Do I think that the Fortune 500 needs to wait for some of these folks to prove themselves out before putting tier 1 apps on them? Absofuckinglutely. But not because of the tech; the problem is ensuring that the companies in question have the support networks and experience required to provide true tier 1 class support.

        But the tech? The tech is solid...so long as you buy from the right company. At least two of them are pretty buggy still.

        But it's ready for tier 2 apps in the Fortune 500. It's probably ready for tier 1 in the commercial midmarket. Server SANs are just...really not that hard anymore. They're not special. They're not new.

        What is new are the companies providing the tech. They all have growing up to do.

    2. Lusty

      Trevor, the interconnect problem is certainly not the same for server SAN compared to traditional SAN (counting P4000 as server SAN in this instance). In a traditional SAN you send the data to the storage and it's confirmed as written - in the case of Violin this is done ideally via the PCIe interconnect or IB, with Fibre for the less well off. With server SAN, that same write has to go out and back to the second copy server, usually over an Ethernet network. In almost every real-world case this results in higher latency for the server SAN. Anything the server SAN guys say to the contrary is from their "testing", which ignores data consistency issues completely in favour of better stats. EMC, NetApp, HP and HDS never ignore data consistency for their tier 1 systems, even in testing, hence the apparent difference to the layman.
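
      A toy latency model of the round-trip argument being made here (all figures are order-of-magnitude placeholders rather than vendor numbers, and the premise is disputed further down the thread):

      ```python
      # Compare a write acknowledged by a directly attached array with one
      # that must be synchronously mirrored to a second node before the ack.
      FLASH_COMMIT_US = 50     # assumed time for the flash media to commit
      FC_ROUND_TRIP_US = 20    # assumed host <-> array hop (PCIe/FC-class fabric)
      ETH_ROUND_TRIP_US = 100  # assumed node <-> node hop over Ethernet

      def array_write_latency_us():
          """Host sends the write to the array; the array commits, then acks."""
          return FC_ROUND_TRIP_US + FLASH_COMMIT_US

      def mirrored_server_san_latency_us():
          """The local copy commits, but the remote copy must also commit and
          acknowledge over the network before the application sees the ack."""
          local = FLASH_COMMIT_US
          remote = ETH_ROUND_TRIP_US + FLASH_COMMIT_US
          return max(local, remote)  # both copies must land before the ack

      print(f"array write:               ~{array_write_latency_us()} us")
      print(f"mirrored server SAN write: ~{mirrored_server_san_latency_us()} us")
      ```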

      As for using volatile memory for storage, the same is true - yes it's quicker, but only in the same way as strapping solid fuel rockets to your car. Survival rates are considerably lower in exchange for a faster ride.

      In addition to the above, CPU and memory usage for modern SAN operations are huge. I don't really have that sort of spec to spare on my virtualisation platform, and if I did, it would have cost as much as a real SAN to purchase the extra hardware and more importantly software licences. For each virtualisation host with Windows and VMware you're looking at more in licence costs than hardware costs - these are usually ignored by the software SAN guys.

      1. Trevor_Pott Gold badge

        Hi Lusty, I'm sorry, but you're wrong. While Ethernet is a possibility for server SAN interconnect, it is by no means the required interconnect. InfiniBand is quite popular for latency-sensitive deployments, and direct PCI-E interconnect (see: A3Cube) is also available, and works quite well, thank you.

        You might also consider things like "write double local, confirm back to application all while sending data to second node, mark second local write as erasable once second node confirms." Throw in the fact that this allows for write coalescing in high-transaction environments, or vendors like SimpliVity that do inline deduplication and compression - thus only sending change blocks between nodes, because everything is deduped and compressed before being committed - and you realize that there are a half dozen schemes to drop data volume between servers while preserving write integrity.
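
        A minimal sketch of that acknowledgement scheme as described (the DoubleLocalWriter class and the commit/mark_erasable interfaces are hypothetical, invented purely for illustration):

        ```python
        # Write two local copies, ack the application immediately, replicate to a
        # second node in the background, and release the second local copy once
        # the remote commit is confirmed.
        import threading

        class DoubleLocalWriter:
            def __init__(self, local_a, local_b, remote):
                self.local_a = local_a  # e.g. first local PCIe flash device
                self.local_b = local_b  # e.g. second local PCIe flash device
                self.remote = remote    # replication target node

            def write(self, block_id, data, on_ack):
                # 1. Two local commits, so a single device failure can't lose
                #    an acknowledged write.
                self.local_a.commit(block_id, data)
                self.local_b.commit(block_id, data)

                # 2. Ack the application now - latency is bounded by local
                #    flash, not by the network round trip.
                on_ack(block_id)

                # 3. Replicate in the background; once the remote node confirms,
                #    the second local copy is redundant and can be reclaimed.
                def replicate():
                    self.remote.commit(block_id, data)
                    self.local_b.mark_erasable(block_id)

                threading.Thread(target=replicate, daemon=True).start()
        ```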

        Also: the costs on server SANs are dropping dramatically. Look at Scale Computing or Maxta. The downwards pressure has begun in earnest. What's more, as they manage to drive down their CPU/memory usage requirements, the toll on your virtual infrastructure is far less - to the point that I seriously doubt you'll get the same amount of storage and the same IOPS with the same latencies from centralized storage vendors. And I can pretty much guarantee you won't five years from now, as server SANs commoditize storage for good.

        Also also: server SANs are starting to address the issue of CPU usage for storage. A great example of this is SimpliVity's FPGA for inline deduplication and compression. It works, it works well.

        Additionally, this statement: "Anything the server SAN guys say to the contrary is from their "testing" which ignores data consistency issues completely in favour of better stats. EMC, NetApp, HP, HDS never ignore data consistency for their tier 1 systems even in testing, hence the apparent difference to the layman." is pure FUD. Not only is it FUD, it's insulting FUD. I absolutely agree that one of the server SAN vendors - and a prominent one - has this problem. The rest emphatically do not.

        More to the point, having devoted two years of my life to learning every facet of these systems, I do not appreciate being called "a layman". I promise you, I know more about server SANs than you do...and based on your level of interest and usage of FUD, probably more than you will in the next five years.

        The thing about server SANs is that they are not "one size fits all". They can be configured differently for different requirements. Different balances can be struck with them, and tradeoffs consciously made.

        Also: "As for using volatile memory for storage, the same is true - yes it's quicker, but only in the same way as strapping solid fuel rockets to your car. Survival rates are considerably lower in exchange for a faster ride."

        This is a rare configuration, at least for writes. (Though there is one vendor in particular I know advocates this and insists on calling themselves a "server SAN" when they're nothing of the sort...)

        I do see it in server SAN configurations tweaked for VDI. Ones where the node in question will not be storing the golden master or differencing disks, and they are obsessed with cramming every last VM in there. I don't agree with it, but I do know the vendors that do it and they are very up front about the risks.

        Long story short: you're working from a whole lot of FUD. If there is one valid concern in the whole lot, it is that no single server SAN vendor has yet addressed all of these issues in a single product offering "off the shelf". (The major stumbling block being that most of them choose to stick to Ethernet for simplicity reasons...but that's changing, and I've seen deployments using InfiniBand from most vendors...and several are looking into PCI-E interconnects for 2015.)

        That said, I happen to know of at least four different models that are in development from different vendors that will address everything you raised (and a few other issues) in 2015.

        Centralized storage - especially centralized storage costing $virgins from the majors - is simply no longer a requirement. There are far cheaper alternatives available today, and they are selling like hotcakes. I highly recommend you put down the vendor "war cards" and take some of the high-end server SAN offerings for a spin. You'll be pleasantly surprised.

        1. Lusty

          The fact that you don't agree doesn't make what I'm saying FUD, and what I'm saying is based on considerably more than a couple of years' experience.

          To make storage highly available you need either two controllers able to see the same blocks, or two separate copies of the blocks, one for each HA controller. Currently it's pretty much only SAN hardware that can do the former, since SCSI died off; SAS switches are beginning to become available which will eventually allow multiple hosts to see the same disks for a reasonable cost. Until those become cheaper than SAN hardware, the alternative is the two-copies approach, which absolutely requires a round trip over the network, and that raises latency. This is a physics thing. If you genuinely believe your data is in a good place with only a single controller having access to up-to-date info then fair enough; our opinions differ, perhaps based on the types of solution we work on. Again, that doesn't make what I say FUD, it just means my customers value their data and its consistency more than yours do.

          1. Trevor_Pott Gold badge
            Pint

            Two controllers are required for uptime, not data integrity. Remember: server SANs use object storage, not RAID. So when they do a double local and N remote, they aren't going through a RAID controller presenting two LUNs; they're writing to two separate entities.

            Oh, and, just by the by, two PCI-E flash cards - which is typically where the initial double local goes - do count as two controllers.

            Of course, if what you want is to have your double local and N remote all confirmed committed before reporting that write back to the guest OS, then you'll have to send the data across the network first...but all you need back is a confirmation that it's written on the remote node, not a full copy of the data. Even then, the advanced stuff is doing RDMA writes to things like memory channel flash, which is going to provide you lower latency than a tier 1 storage array.

            The thing is, with server SANs, you just have more options than you do with traditional SANs. I can have a highly latency-sensitive application running on a node and choose to run it in a "double local + N remote" setup where the "N remote" writes are write-coalesced and lag behind the double local by a few milliseconds. But I would probably not run that in HA, because I know there's the chance the remote copy isn't crash consistent.

            Being a server SAN, however, I have lots of choices. I can pull the disks/cards from the crashed server and bung them into another one, let it pick them up and light the VM up from the crash-consistent state. Or, if the original server is a total loss, I can pick up from the copy that's a few milliseconds behind.

            Or I could accept the latency of RDMA-to-PCI-E-or-MCS-flash and just run my N remote crash consistent with my 2 local. I've got lots of options. Including ones that allow me to get way better latency than your typical tier 1 array, and ones that let me get way better redundancy. Or, if I build it right (PCI-E interconnects with RDMA-to-MCS), both.

            It all depends on what that particular workload's data is worth. And holy shit, would you believe it, I can even set about defining this as a policy for different workload classes, treating different workloads differently without having to set up different storage arrays, or fuck with LUNs ever again.

            It's goddamned magical.
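
            For a sense of what that policy-per-workload-class idea could look like (purely illustrative; the field names and workload classes below are invented, not any vendor's actual schema):

            ```python
            # Replication mode and placement chosen per workload class rather
            # than per LUN.
            STORAGE_POLICIES = {
                "latency-critical-db": {
                    "local_copies": 2,       # double local on PCIe/MCS flash
                    "remote_copies": 1,
                    "remote_mode": "sync",   # remote copy stays crash consistent
                    "interconnect": "rdma",  # assumes an IB/PCIe-class fabric
                },
                "general-vm": {
                    "local_copies": 2,
                    "remote_copies": 1,
                    "remote_mode": "async",  # write-coalesced, lags by milliseconds
                    "interconnect": "ethernet",
                },
                "vdi-clone-pool": {
                    "local_copies": 1,       # rebuildable from the golden master
                    "remote_copies": 0,
                    "remote_mode": "none",
                    "interconnect": "n/a",
                },
            }

            def policy_for(workload_class):
                # Unknown classes fall back to the general-purpose policy.
                return STORAGE_POLICIES.get(workload_class, STORAGE_POLICIES["general-vm"])
            ```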

            As for your "what I'm saying is based on considerably more than a couple of years experience" - that's cute. I have considerably more than a couple of years' experience in storage as well, but server SANs haven't been worth consideration for more than a couple of years, and thus experience with them specifically can only date back that far. From the sounds of it, however, you don't actually have any experience with server SANs. Maybe that's what's got your Irish up.

            But hey, cheers. If you want to feel like you're the top dog, your penis is longer and you've won the argument, then sail off into the future; I'll let you have 'er. Here's a beer icon conceding my defeat, and I'll not reply to whatever you post after. I've said my piece. You can sit tall astride the internet mountain.

            Beer -->

            1. Lusty

              Two PCIe cards do not count as a highly available solution. My entire point was that the performance claims by these companies come entirely at the cost of other things which anyone who actually understands storage refuses to do without. Now clearly you've spent two years boning up on marketing nonsense from these vendors, but those of us in the real storage world with real workloads such as databases can only accept low latency if it comes with security. That still means dual controllers in a proper SAN, whether you like to accept it or not.

              You've mentioned RDMA several times. I'm amazed you're seeing so much uptake of it outside of marketing, since literally none of our thousands of clients are currently using the technology. Loads of people have been using the term since MS used it in SMB3 material, but I have yet to see a single RoCE adapter in the wild, and IB is beyond the means of even quite large companies unless they have specific needs, such as with Violin arrays.

              It's not a willy-waving contest, and I'd appreciate it if you could avoid trying to make this personal - you seem to have some kind of problem with me, as evidenced by your past posts. If you genuinely believe in your arguments then please just make coherent points and let them speak for you.
