Diskicide – the death of disk

How might disk death in the data center come about? These are the notes used at a Tech Unplugged event in Amsterdam on September 24, and they present a scenario in which data center disks could stop spinning. Is it a sensible scenario? Read on. I try and connect the dots in the industry, the dots being the myriad individual …

  1. Anonymous Coward

    Gawd Awful Expensive, at first, perhaps for a long time.

    Chris, you keep bringing up the point that flash manufacturers are at a hard wall in the amount of flash they can crank out. Now, if you are stacking a bunch of dies together, exactly how are you going to meet demand? Sure, using a more relaxed geometry should improve yield, but the number of dies that are produced is relatively fixed.

    I get it. "Spinning rust" is dead, except for the fact that we've found a way to produce fewer flash product packages. The next two years, perhaps three, will give us more information about options similar to flash (XPoint, PCM, ...). But until you can put lots of packages in the sweaty hands of system builders/integrators, I'm reserving judgement. Oh, and I have two 150 GB and one 600 GB enterprise 2.5" drives which are extremely wonderful so long as you're streaming a serial data set (or media file, if that's your thing) in and then back out at the end of the run. Otherwise, it's wall-to-wall flash in six packs.

    [I suppose being one of the last to learn how to operate a card sorter, a remote job entry station and, gawd help me, paper tape to boot the computer might have to do with the fact I still find ISAM useful in certain cases.]

  2. frank ly

    Apples and Oranges??

    "... post-dedupe flash cost/GB is the same as disk."

    Is that post-dedupe disk and post-compression disk? Or is it raw disk? At what size of storage array is that comparison made?

    1. Paul Crawford Silver badge

      Re: "post-dedupe"

      As soon as folk start talking about compression or de-dupe, they are up to something, and that something is usually a lie.

      Compare RAID-protected capacity & cost. Note the IOPS difference, then decide.

      Not all workloads benefit enough from compression or de-dupe to make the extra CPU load and/or RAM usage worthwhile, so leave it to the customer to see if there is some advantage.
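
      For anyone wanting to make that comparison concrete, here is a minimal sketch of the arithmetic; every drive count, capacity and price below is a made-up placeholder, not a quote from any vendor:

```python
# Toy cost-per-usable-GB comparison for RAID-protected capacity.
# All drive counts, capacities and prices are hypothetical placeholders.

def usable_tb(drives, tb_per_drive, parity_drives):
    """Usable capacity of a RAID group after setting aside parity drives."""
    return (drives - parity_drives) * tb_per_drive

def cost_per_usable_gb(drives, price_per_drive, usable_tb_total):
    return (drives * price_per_drive) / (usable_tb_total * 1000)

# Hypothetical 12-drive RAID-6 groups: nearline HDD vs. SSD.
hdd_usable = usable_tb(drives=12, tb_per_drive=4, parity_drives=2)   # 40 TB usable
ssd_usable = usable_tb(drives=12, tb_per_drive=1, parity_drives=2)   # 10 TB usable

print(f"HDD: {cost_per_usable_gb(12, 150, hdd_usable):.3f}/GB usable")
print(f"SSD: {cost_per_usable_gb(12, 400, ssd_usable):.3f}/GB usable")
# Cost is only half the picture; the IOPS difference is the other half.
```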

      1. theOtherJT Silver badge

        Re: "post-dedupe"

        "As soon as folk start talking about compression or de-dupe, they are up to something, and that something is usually a lie."

        Absolutely.

        "Our product is just as fast as our competitors, and provides the same storage* but only half the cost"

        and the * always says "Compressed capacity" which is nonsense if your data is already compressed.

        You can get FANTASTIC read/write speed out of big spinning-rust ZFS arrays with compression on - I know, we do - but only if what you have is cleanly compressible and doesn't arrive over the network faster than the server can compress it and get it onto the platters in an orderly fashion before the next TXG commit.
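
        As a rough way to sanity-check that on your own kit, here's a minimal back-of-envelope sketch; the throughput figures are hypothetical stand-ins for your own measurements, and the ~5 second TXG interval is just ZFS's usual default:

```python
# Back-of-envelope check: can the box compress incoming data fast enough to get
# it onto the platters before the next TXG commit, or does a backlog build up?
# All rates below are hypothetical placeholders for your own measurements.

ingest_rate_mb_s = 1200     # data arriving over the network (hypothetical)
compress_rate_mb_s = 900    # measured lz4 throughput on this box (hypothetical)
disk_write_mb_s = 1500      # aggregate array write bandwidth (hypothetical)
txg_interval_s = 5          # ZFS's usual default transaction-group interval

bottleneck = min(compress_rate_mb_s, disk_write_mb_s)
backlog_per_txg_mb = max(0, ingest_rate_mb_s - bottleneck) * txg_interval_s

if backlog_per_txg_mb:
    print(f"Falling behind by ~{backlog_per_txg_mb:.0f} MB per TXG; writes will stall.")
else:
    print("Compression keeps up with ingest; the commits stay orderly.")
```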

    2. Bronek Kozicki

      Re: Apples and Oranges??

      There will be a time when, even without dedup (which is expensive), SSD will actually be cheaper than HDD per GB. That time is not now. Although of course, all-flash vendors would say it's already no more expensive, wouldn't they?

  3. David 132 Silver badge
    Headmaster

    1930s?

    From the fine article:

    "I'm going to begin back in the 1930s, before most of us were alive...

    Remember punched cards; ..."

    I'm sure it wasn't the intention, but that sentence seems to imply that punched cards per se date back to the 1930s. If so, try looking up "Jacquard Loom", and be impressed at the ingenuity of the early 1800s - I know I was, wot wiv me being of poor edumacation an' all.

    But yes, you're right, the 1930s is about when the technology was repurposed for control of digital data-processing devices - via Hollerith in the 1890s and so on!

  4. Ben Liddicott

    Helium: It's not about friction, it's about reducing turbulence and head vibration.

    Helium is more viscous than air, not less. It's also lighter. These two properties mean laminar flow is possible at higher velocities, turbulence is reduced, and the mass of the gas is reduced, all factors reducing head vibration. Turbulence != friction.

    (While I am a physicist by training, this is not my area of expertise, but the first thing I did was look it up.)
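
    To put rough numbers behind that, here is a minimal Reynolds-number sketch; the gas properties are approximate room-temperature textbook values, and the speed and length scale are purely illustrative guesses for a drive enclosure:

```python
# Rough Reynolds-number comparison for air vs. helium inside a drive enclosure.
# Re = rho * v * L / mu: lower Re means the flow stays laminar at higher speeds.
# Gas properties are approximate room-temperature values; v and L are guesses.

RHO_AIR, MU_AIR = 1.2, 1.8e-5    # kg/m^3, Pa*s
RHO_HE,  MU_HE  = 0.17, 2.0e-5   # helium: ~7x less dense, slightly more viscous

def reynolds(rho, v, length, mu):
    """Dimensionless Reynolds number for flow of speed v over length scale L."""
    return rho * v * length / mu

v = 35.0    # m/s, roughly the rim speed of a 3.5" platter at 7,200 RPM
L = 0.001   # m, illustrative length scale around the head/arm

re_air = reynolds(RHO_AIR, v, L, MU_AIR)
re_he = reynolds(RHO_HE, v, L, MU_HE)
print(f"Re(air)    ~ {re_air:.0f}")
print(f"Re(helium) ~ {re_he:.0f}  (~{re_air / re_he:.0f}x lower, hence calmer flow)")
```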

  5. Ben Liddicott

    The boy with his finger in the dyke succeeded.

    That's the point of the story: the hole got larger, so he put in his arm, then his whole body. Eventually they sent out a search party, found him, and patched up the dyke.

    Moral: the little we are able to do may be enough.

    1. Anonymous Coward

      Re: The boy with his finger in the dyke succeeded.

      Similarly the story of King Cnut (Canute). He was deliberately teaching his sycophantic court that his powers did not extend to breaking the Laws of Physics.

      1. P. Lee

        Re: The boy with his finger in the dyke succeeded.

        >Similarly the story of King Cnut (Canute). He was deliberately teaching his sycophantic court that his powers did not extend to breaking the Laws of Physics.

        A pertinent cautionary tale to those attempting an excessive embiggening of flash. As is The Emperor's New Clothes.

        It's a great product, but sometimes we are quite happy trading lower speeds for higher capacities - it will depend on your application. You are spec'ing for the application, not just following a fad, right? If the app doesn't demand it, why would you do it? Is AWS' Glacier OK in the cloud but not in the DC?

  6. elan

    flash-disk-flash

    The substitution process will be evolutionary and in most cases driven by life-cycle considerations.

    VSI/VDI/OLTP/DSS are a clear-cut case. I am not so sure about backup/archive - a media break as insurance for your software-defined storage (which is what flash solutions are, at the end of the day) is an argument...

  7. foxyshadis

    I love that last graph

    It nicely sums up how content-free the article is (after first explaining the basics of HDD and flash). It's mostly either an extended exercise in wish-fulfillment or pulled right from the nether regions. I hope flash succeeds and comes downmarket too - I'd like to be able to afford more - but even at QLC 16nm I don't see how it's likely to fully compete with a combination of iterative SMR (and eventually HAMR) tech, and potential future flash competitors (memristors, if they ever get off the ground?).

    Flash has probably reached the point where it can meet nearly all of most consumers' needs, but businesses' need for storage will seemingly expand to fill as much as they can afford.

    1. Anonymous Coward

      Re: I love that last graph

      "cost is heading towards parity with disk" <sup>[CitationRequired]</sup>

      At the moment the cheapest 960GB *consumer* SSD sets you back about £250 (inc VAT), whereas a 3TB/3.5" consumer hard drive is about £75. That's a ten-to-one price advantage in favour of the hard drive. I suspect the enterprise differential is similar.

      The SSD is admittedly faster and smaller (although you can get a 1TB/2.5" HD for under £50, which is still five-to-one cheaper).
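
      Working those street prices out per gigabyte - just the commenter's own figures, nothing new - looks like this:

```python
# Cost-per-GB from the consumer street prices quoted above (rough figures).
ssd_price, ssd_gb = 250.0, 960    # £250 for a 960 GB consumer SSD
hdd_price, hdd_gb = 75.0, 3000    # £75 for a 3 TB 3.5" consumer HDD

ssd_cost = ssd_price / ssd_gb     # ~£0.26 per GB
hdd_cost = hdd_price / hdd_gb     # ~£0.025 per GB

print(f"SSD: £{ssd_cost:.3f}/GB, HDD: £{hdd_cost:.3f}/GB")
print(f"HDD is ~{ssd_cost / hdd_cost:.0f}x cheaper per GB")
```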

      For bulk cloud and archiving applications, the cost-per-byte is king; business models are made or broken on this. SSD has a *long* way to go, and I don't see any evidence that the costs of silicon fabrication are plummeting.

      It's inevitable that mechanical storage will be finished sooner or later, but that is likely to be due to some completely new technology which doesn't depend on etching a billion perfectly-formed semiconductor gates on a single piece of silicon.

      1. Charles 9

        Re: I love that last graph

        "It's inevitable that mechanical storage will be finished sooner or later, but that is likely to be due to some completely new technology which doesn't depend on etching a billion perfectly-formed semiconductor gates on a single piece of silicon."

        What makes you think they won't find a way to do exactly what you describe much more easily? Is there some physical limitation that precludes improving the process?

  8. simpfeld

    Trend Looks Correct

    It certainly looks like innovation is happening mainly on the SSD side (XPoint, 16TB single drives, etc). Also worth mentioning that we are rapidly running out of HDD vendors: three are left (WD, Seagate and Toshiba), with speculation that Toshiba might want out.

    HDDs seem to be heading quickly towards being specialist high-volume devices and no longer general purpose - i.e. high-capacity, low spin rate (5,400 RPM) units sold as NAS drives, and SMR, which is just awful for random writes. HDDs may very soon be used purely for high-capacity long-term storage and tiered storage systems, and just not be useful for anything else. Becoming what tapes used to be.

    The main issue stopping us from fully heading there is that SSD seems to be struggling to hit a decent price point for storage in the 1-3 TB range (not even parity, just reasonable). And the vendors seem to have less interest in this area, as presumably higher capacity and speed targeting the 15K-drive sector is where the money is. Obviously this middle ground moves with time.

    SSDs do well as few-hundred-GB boot drives and as really fast higher-capacity storage disks (money-is-no-object solutions).

    I think SSD will win, but it's going to be a very long transition period.

  9. Riku

    It might always have a niche, but...

    "But Nimble Storage says its hybrid arrays have replaced an IBM all-flash RamSan – because they had better sequential data access performance." - this is an important point I think. While a large part of the enterprise storage busioness is focussed on IOPs, there is a segment of storage folks (myself among them) for whom IOps matter not much but for whom massive sequential performance is what is required). Now while I appreciate all the DB, VM, et al folks out there who love their IOPs and represent the bulk of the cash to splash, it would be nice to see The Register (or any major publication) take a long, hard serious look at sequential storage workloads. For sequential workloads, disk still has its place, for now, certainly in terms of bang for buck.

    As for "...flash could kick spinning rust out of the data center and disk become the new tape..." - surely that begs the question, "Why does that make any sense?" Tape is already good at that role. Is that just disk fanboi-ism? If I'm buying bulk, a tape cart that is essentially plastic tape, a plastic box, a spring and a plastic cog is *always* going to have a lower cost per unit than a far more complicated-to-manufacture disk drive. Even factoring in the cost of the tape drives, the overall solution cost should still be lower. Then there's the opex, which for archive will always be lower. Oh sure, there's the whole MAID thing in archive, but Copan never went anywhere with that, and nobody I've yet encountered trusts disks to reliably spin up and down like that.

    It will certainly be interesting to watch though.

    1. Anonymous Coward

      Re: It might always have a niche, but...

      " If I'm buying bulk, a tape cart that is essentially, plastic tape, a plastic box, a spring and a plastic cog, is *always* going to have a lower cost per unit than a far more complicated to manufacture disk drive."

      But at the same time, it is slow as molasses, even with LTO-7 tech. It'll depend on just how much data you need to back up and how quickly you need it done. At some point, tape can't keep up, which is when you'll switch to HDD.

  10. Paul Shirley

    feeling old...

    ...because I remember using punch cards, paper tape, mag tape and toggle switches to key in the paper tape bootloader! Nothing as strange as the oscilloscope output and rotary phone dial for input on an ancient mainframe though ;)

    Yes, completely OT :)

    1. Citizen99

      Re: feeling old...

      That "rang a bell" with me: "...because I remember using punch cards, paper tape, mag tape and toggle switches to key in the paper tape bootloader!"

      I do remember seeing "... the oscilloscope output ..." on a school trip to see the English Electric DEUCE computer in the 1950s.

      "... rotary phone dial for input on an ancient mainframe ..." Love the idea :-)

  11. bed

    Yes, but...

    While this paean to flash may justify the salary, there are other technologies over the horizon - memristor, perhaps - which will (hopefully) provide access to data at bus speeds and probably render obsolete current computer architectures, which haven't really changed in concept since the stored-program digital computer was first developed.

  12. Steve Chalmers

    Good Thinking! But pay attention to...

    Good reasoning on flash displacing spinning media for certain use cases in the data center!

    I'd encourage some thought on:

    1. The server-side software stack for accessing storage. The stack we've used for 50 years - or even in its current form as the SCSI stack for 30 years, or the Fibre Channel stack for 20 - consumes far too many CPU cycles and far too much time to match up with what flash and emerging solid-state storage technologies can do. NVMe and NVMe over Fabrics are a major step here, but by no means the end of the story.

    2. In the data center, the interfaces used for a storage infrastructure change glacially. An IT shop which committed to Fibre Channel 15 years ago will probably be using at least some Fibre Channel 15 years from now, even if they decide today to start phasing it out. Which leads to...

    3. A lot of these new storage technologies, whether it's Object or some creative use of server integrated flash, are great for (certain) new applications, but legacy apps would have to be ported at a minimum and rearchitected/rewritten at the extreme, to use these new storage models. That means the legacy apps will be on something which emulates traditional storage for a long time.

    4. "Requiescat in Pace" (sorry, the 4 years of Latin I took 40 years ago stuck, have to call the typo)

    @FStevenChalmers

  13. LeoP

    Failure rate

    Simple stat from last year: % of storage devices becoming catastrophically defective within a year of deployment:

    Disk: less than 1% (1 Disk)

    SSD: 15%+

    Vendors: Come back, when you have finished your homework.

  14. Rob Commins

    Stick to the fundamentals - we've been here before

    Isn't it better to acknowledge that there will always be storage media that are optimized for performance/reliability and will always carry a cost premium, just as there will always be capacity-optimized media that will be relatively inexpensive? It is the job of the systems vendor and IT leader to strike the right balance for that particular environment, and to have the ability to change over time as requirements change. Twenty-some years ago we advanced from 3,600 to 5,400 to 7,200 to 10,000 to 15,000 RPM drives. Now we're going through a similar phase with different grades of flash over time.

  15. Mk4

    I actually stopped caring about media about a year ago

    Great - flash this, "spinning rust" that, compression, dedupe, blah, blah, blah. I really don't care any more. What I want is for the major attributes of the storage presented to the user to be independently modifiable, without affecting any of the other attributes. On the fly. Using an automation interface. Performance (IOPS and MB/s), availability, protection level (data redundancy), version (snapshot) frequency and retention, locking data with guaranteed integrity, encrypting data, off-site copy of data, off-line copy of data, geo-distribution of data, access permissions, metadata creation/modification, etc., etc. All of them, no exceptions, and no mealy-mouthed marketing bollocks.
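
    A minimal sketch of what that kind of automation interface could look like; the StorageVolume class and every attribute name here are purely illustrative, not any vendor's actual API:

```python
# Hypothetical storage-automation interface: each attribute of a presented
# volume can be changed on the fly, independently of the others.
from dataclasses import dataclass

@dataclass
class StorageVolume:
    iops_limit: int             # performance
    redundancy: str             # protection level, e.g. "2-copy", "erasure-8+2"
    snapshot_interval_min: int  # versioning frequency
    encrypted: bool
    offsite_copy: bool

    def set(self, **attrs):
        """Change one attribute without touching any of the others."""
        for name, value in attrs.items():
            if not hasattr(self, name):
                raise AttributeError(name)
            setattr(self, name, value)
        return self

# e.g. bump the protection level without affecting performance or snapshots:
vol = StorageVolume(20000, "2-copy", 60, True, False)
vol.set(redundancy="erasure-8+2")
```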

    Compare and contrast with compute. If we were having this discussion about compute, it would be about the number of CPU cores, memory technology and bus speeds. The discussions on compute are actually about containers vs. H/W virtualisation, devops, continuous integration, etc. It is time storage got out of the stone age and joined the rest of the world. Discuss.
