SanDisk flash holds secret flash sauce till after Christmas

SanDisk is soon to announce new technology needed for its solid state drives to take on hard disk drive storage. SSDs are appearing in netbooks and notebooks and in enterprise storage arrays where they provide accelerated I/O, either as a separate tier of flash storage or as a flash cache accelerating the array controller's …

COMMENTS

This topic is closed for new posts.
  1. Alex
    Coat

    ExtremeFFS

    Best name ever?

    Mine's the one with the Mountain Dew can in the pocket.

  2. Anonymous Coward
    Flame

    I don't know...

    I did at one point posit Chaos Undead Ninja Trolls. Went down like a cup of cold sick...

  3. BlueGreen

    There's something unconvincing here

    " SSDs are appearing in [...] enterprise storage arrays where they provide accelerated I/O, either as a separate tier of flash storage or as a flash cache accelerating the array controller's operations "

    Doesn't make sense.

    For reads you can just bung in buckets of RAM which the DB will use to cache data. Gigs of RAM are cheap enough these days, and price is almost irrelevant to enterprise kit buyers, who'll pay tens of thousands for kit without blinking. Plus RAM is very mature and *very* fast.

    As for writes, well, you traditionally have a disk controller (typically a RAID card) with buckets of RAM on it anyway to absorb large bursty writes. If you replace that with SSD then what does it gain you...? SSD is slower than RAM and has limited write cycles (unlike RAM), which is a very big deal given the way a DB writes.

    Write cycles are a worry. A DB will write the data twice, once into the transaction log and then into the actual DB, halving the SSD's apparent life. More importantly - in fact crucially for data integrity - the data must be written in a specific order with certain other guarantees, and flushed to a persistent store (either the disk, or a disk controller card with a ton of RAM backed up by on-card batteries); a rough sketch of that ordering is below. Flocke Kroes wrote (<http://www.theregister.co.uk/2008/11/07/sandisk_extremeffs_dram_buffer/comments/>, and thanks, that answered some questions!) about how SSD architecture differs from disk behaviour in a way that may break those write orderings and other guarantees. Until DBs 'understand' SSD write behaviour I'd be very nervous about running one on t'other.
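
    A rough sketch of that write-ahead ordering (purely illustrative - the file handling below is made up for the example, not any particular DB's on-disk layout):

        import os

        # Illustrative write-ahead pattern: the log record must be durable
        # *before* the data page is updated in place, so a crash can always
        # be recovered by replaying the log.
        def commit(log_path, data_path, offset, new_bytes):
            # 1. Append the change to the transaction log...
            with open(log_path, "ab") as log:
                log.write(new_bytes)
                log.flush()
                os.fsync(log.fileno())   # ...and force it to persistent storage

            # 2. Only then overwrite the page in the main database file
            #    (assumes the data file already exists).
            with open(data_path, "r+b") as db:
                db.seek(offset)
                db.write(new_bytes)
                db.flush()
                os.fsync(db.fileno())

        # Every committed change hits the medium twice, and if the device
        # reorders or drops the first fsync the recovery guarantee is gone.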

    Incidentally, the data written to a transaction log tends to be very sequential, which matches hard disk writing well, so if you set aside a disk for the log that disk tends not to be the bottleneck.

    Altogether, SSDs don't look like a good fit. IMO anyway.

  4. James O'Brien
    Joke

    "SanDisk flash holds secret flash sauce till after Christmas"

    So is this the "secret sauce" found in this book?

    http://www.lulu.com/content/4956212

    /mine's the one with the mayo in the pocket

  5. Anonymous Coward
    Anonymous Coward

    @Bluegreen

    You'd think so, but actually locality of data has a very big impact on cache performance. Adding more cache is empirically not always the right thing to do, as the performance improvement follows the law of diminishing returns - adding 2x more RAM in cache may only give you a 10% performance improvement at best.

    I agree entirely with your concern about SSDs and databases, but databases probably won't change - SSDs have to evolve so that their life expectancy is slightly more than a fruit fly's.

  6. Dave Page

    @BlueGreen

    > Until DBs 'understand' SSD write behaviour I'd be very nervous about running one on t'other.

    The major issue (at least for PostgreSQL, which I hack on) is not one of safety, as long as SSDs don't lie about when written data has been flushed to NV storage, as some cheap commodity-class drives do. Essentially, the server doesn't assume anything is safe until it's been fsync'd to the transaction log.

    The bigger problem for us is one of tuning. DBMSs have been refined and tuned over many years as hard disk technology has matured, and we've learnt and adapted to changes in random vs. sequential IO costs etc. In PostgreSQL, these characteristics are used (along with others) by the query planner when considering the cost of performing an index lookup vs. a sequential scan, and as such can have a significant effect on the query plans that are generated (see the sketch below). SSDs throw all that knowledge and experience out of the window by giving us entirely different IO characteristics to work with - ones that are likely to change over the coming couple of years as the products mature and become more and more mainstream.
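
    A toy illustration of that trade-off (grossly simplified - the constants echo the style of PostgreSQL's seq_page_cost/random_page_cost settings, but the formulas and the numbers plugged in are made up for illustration, not the real cost model):

        # Toy cost comparison: sequential scan vs. index scan. Only the ratio
        # of random-to-sequential page cost matters, and it was tuned for HDDs.
        def seq_scan_cost(table_pages, table_rows,
                          seq_page_cost=1.0, cpu_tuple_cost=0.01):
            return table_pages * seq_page_cost + table_rows * cpu_tuple_cost

        def index_scan_cost(pages_touched, matching_rows,
                            random_page_cost, cpu_tuple_cost=0.01):
            return pages_touched * random_page_cost + matching_rows * cpu_tuple_cost

        table_pages, table_rows = 10_000, 1_000_000
        pages_touched, matching_rows = 15_000, 20_000   # poorly correlated index

        print(seq_scan_cost(table_pages, table_rows))              # 20000.0
        print(index_scan_cost(pages_touched, matching_rows, 4.0))  # 60200.0 - seq scan "wins" on HDD-era costing
        print(index_scan_cost(pages_touched, matching_rows, 1.0))  # 15200.0 - index scan "wins" with SSD-like costs

    With HDD-era costing the sequential scan looks cheaper; drop the random-read penalty to something SSD-like and the index scan wins instead.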

  7. Steven Jones

    @BlueGreen The trouble with cache...

    Speaking as somebody who deals regularly with high-volume transactional databases in the many-TB range, on servers and enterprise arrays with many tens of GB of cache, I can tell you what the real problems are and are not. Firstly, log writes - the requirement here is minimal latency. Enterprise arrays with non-volatile write cache deal very nicely with that, with sub-ms write times. As the stage-out is asynchronous and sequential in nature, a RAID-5 arrangement works just fine (write two copies to two places for that extra level of confidence). SSDs don't offer much here. In general even random writes aren't too much of an issue, as they are all cached and the stage-out is asynchronous (except in the extreme case where the random write rate exceeds the IOPS capability of the database, which you've naturally striped far and wide).

    Generally speaking hard disks are acceptable for sequential reads too - arrays and DBs with read-ahead, parallel access and so on mean that hard disks will usually do the job.

    However, where hard disks do fail is on random reads. Latency is around 6-7ms for a random, physical read. You can keep throwing cache at the problem (and the right place to do that is the database cache, not the storage array), but at a certain point you hit the law of diminishing returns. We have OLTP databases where, even with a 99.8% cache hit rate, random read I/O is responsible for about 70% of DB transaction time (see the back-of-envelope sums below). Throwing more cache at it has very little effect after a certain point because the locality of reference is too broad. It's still not cost effective to hold a 5TB database in physical memory - quite apart from the cost of the memory, all that RAM uses a great deal of power. There is also a more fundamental problem - the time taken to populate that cache at application/DB startup. What you will find in the early stages of starting your DB and application is that performance will be very poor, as you have to do physical I/Os to populate that cache. You can therefore get very lengthy startup times indeed - if the performance hit is bad enough, you have to stage users in over a long period.
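
    Back-of-envelope sums showing how a 99.8% hit rate can still leave physical reads dominating (all the figures below are assumed for illustration, not measurements from any real system):

        # Assumed, purely illustrative figures.
        logical_reads_per_txn = 5_000
        hit_rate = 0.998
        cached_read_us = 6          # ~6 microseconds from DB/array cache
        physical_read_us = 7_000    # ~7 ms random read from a hard disk

        misses = logical_reads_per_txn * (1 - hit_rate)        # ~10 physical reads
        hit_time = logical_reads_per_txn * hit_rate * cached_read_us
        miss_time = misses * physical_read_us

        print(f"cache hits:  {hit_time / 1000:.1f} ms")
        print(f"disk misses: {miss_time / 1000:.1f} ms "
              f"({miss_time / (hit_time + miss_time):.0%} of read time from 0.2% of reads)")

    With these made-up numbers the 0.2% of reads that miss the cache account for roughly 70% of read time, so adding yet more cache only trims the part that was already fast.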

    This is where SSDs are a potential saviour. If random read time can be brought down to sub-ms levels over an FC SAN (around 0.5ms is the limit at the moment, due to latency in switch and fibre transitions) then it will lead to a huge increase in the performance of some types of applications.

    As for lifetime issues - well, it's not as if hard drives last for ever. We have RAID systems to provide continual availability for storage based on physical disks, and there are plenty of ways that SSDs could be configured to cope with "bit rot". As for those who point to hard disks having a lifetime measured at half a million (or more) hours, or 60+ years: they do not. That's a common mistake - people confuse MTTF figures (mean time to failure) with device lifetime. MTTF is a statistical measure of the average number of operating hours between single failures across a given set of devices, and crucially it only applies within the rated lifetime of a device. After that, failure rates increase rapidly. It is perfectly possible (indeed it is very often the case) for MTTF figures to be much greater than the rated operational lifetime of a device - the quick sums below show the distinction.
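
    Quick sums on the MTTF point (the figures are typical-sounding examples, not any vendor's spec):

        # A 500,000-hour MTTF does not mean one drive lasts ~57 years.
        # Within the rated service life (commonly ~5 years) it translates
        # to an annualised failure rate across a population of drives.
        mttf_hours = 500_000
        hours_per_year = 24 * 365

        afr = hours_per_year / mttf_hours   # fraction of drives failing per year
        print(f"annualised failure rate: {afr:.1%}")
        print(f"expected failures per year in a 1,000-drive estate: {1000 * afr:.0f}")
        # Beyond the rated service life the failure rate climbs steeply,
        # so the MTTF figure says nothing about a 57-year lifetime.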

    Nothing lasts for ever - SSDs won't, but neither do hard drives.

  8. Max

    Performance

    "but customers aren't going to buy the SSDs unless performance is overall way better than hard drives"

    Not me! If I get a significant battery-life boost from using flash at the same cost as an HDD, even with worse performance, then I will gladly pony up for it.

    For now I wouldn't use this in a desktop or server, but in a notebook it would be perfect. Hopefully one day the performance will be up to par and it can replace HDDs and make us more "green".

  9. Mikel
    Thumb Up

    IOPS

    SLC flash has upside potential for databases and transaction-based operations. At over 80 times the I/O transactions per second of the fastest spinning drives, the price increment is not a consideration if you need the I/Os (rough numbers below). Flash has the potential to meet the I/O rates of RAM in a technology that retains its data when power is lost. For some applications this is crucial, and for the same amount of storage it also costs less. It should help drive down the cost of larger sticks of RAM as well.
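
    Rough numbers behind that claim (order-of-magnitude assumptions, for illustration only):

        # Assumed order-of-magnitude figures.
        hdd_random_iops = 200                  # ~15k RPM drive, small random reads
        slc_ssd_random_iops = 200 * 80         # "over 80 times" -> ~16,000 IOPS

        target_iops = 50_000                   # an assumed OLTP workload
        hdds_needed = -(-target_iops // hdd_random_iops)       # ceiling division
        ssds_needed = -(-target_iops // slc_ssd_random_iops)

        print(f"{hdds_needed} HDDs vs {ssds_needed} SLC SSDs "
              "for the same random-read IOPS")

    A handful of SSDs against a few hundred spindles is why the per-device price stops mattering once IOPS is the constraint.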

    For laptops, not so much - MLCs will find common use there. Even now the storage capacity is not as much an issue as the cost of the chips. A standard 2.5" drive form factor can easily hold far more flash chips than is marketable right now, and current offerings contain a good deal of wasted space. Flash drives will always have a power consumption advantage over dynamic RAM, and as data sizes grow this difference becomes ever more material. As we seek the maximum number of bits that can be stored in a cell, we'll fight to find the balance between density and reliability.

    For the real high end we're getting away from drive emulation as an interface now. PCIe direct attachment is here, and for the extreme high end, InfiniBand. An I/O chip that simulates cylinders, heads and sectors is redundant once you realize that the essential part of the abstraction is this: when you put a block and then request it back, you get the data that you put. Where exactly in the device the data is actually stored - for access speed and wear levelling - is not relevant; only that the block put is the same block retrieved (see the sketch below). Now that the engineers building these devices grasp the essence of the abstraction, the interface itself is irrelevant. Sometime soon we may begin to see flash units that support variable block sizes.
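
    A minimal sketch of that abstraction - a made-up flash translation layer, not any real device's firmware - where the host only sees put/get on logical blocks and the device is free to move data around for wear levelling:

        class ToyFTL:
            """Toy flash translation layer: logical block -> physical location."""
            def __init__(self, physical_blocks):
                self.mapping = {}                      # logical block -> physical block
                self.free = list(range(physical_blocks))
                self.store = {}                        # physical block -> data
                self.erase_counts = [0] * physical_blocks

            def put(self, logical, data):
                # Always write to a fresh physical block (flash can't overwrite
                # in place), choosing the least-worn one - the host never sees
                # where the data actually went.
                new = min(self.free, key=lambda b: self.erase_counts[b])
                self.free.remove(new)
                self.store[new] = data
                old = self.mapping.get(logical)
                if old is not None:
                    self.erase_counts[old] += 1        # old copy gets erased/reclaimed
                    del self.store[old]
                    self.free.append(old)
                self.mapping[logical] = new

            def get(self, logical):
                return self.store[self.mapping[logical]]

        ftl = ToyFTL(physical_blocks=8)
        ftl.put(0, b"hello")
        ftl.put(0, b"world")           # rewrite lands on a different physical block
        assert ftl.get(0) == b"world"  # but the host still gets back what it put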

    The new products are interesting in that they promise much more potential than is yet exploited.

  10. waldo kitty
    Pirate

    so, what's wrong with this picture?

    so why not use "standard" memory chips with a battery on the card, instead of relying on capacitance to hold the charge? standard memory chips are quite fast, so why not add a (re)chargeable battery (via USB, since that seems to be where all the id10t corps are going) and go from there? sure, the batteries have a 5-7 year life-cycle, but we're already having to deal with that on many other fronts :? :? :( :(

    looks to me like yet another bean counter "conflaguration" :( :( :(

  11. Youvegottobe Joking

    @ Steven Jones

    Correct on all points. Almost all proper storage arrays these days have large amounts of mirrored cache and redundant disks, so data is always secure. Proper enterprise arrays will fetch the data you want plus quite a large chunk of the surrounding data, but this method of prefetching is defeated by largely random reads. Any time there is a cache miss and data has to be fetched from disk(s), you will experience a significant slowdown.

    SSDs sound like they would be ideal for that, if they could only get more reliable (anyone want to rebuild a RAID-5 SSD set? yikes) and speed up the writes... maybe writes are not as important, as data written to the array is going to sit in cache for a while and could be written in the background. Leaving written data in cache for a while is probably a good idea; fresh data would likely be read many times shortly after it's been written...

This topic is closed for new posts.
