"Enterprise-class single-level cell SSDs exhibit sequential read and write I/O bandwidth of 300MBps and 360MBps"
Is it really the case that write bandwidth is 20% higher than read?
If you are building systems using solid state drives (SSDs), you need rock-solid reliability and performance – and you won't get it from consumer-grade flash. But how do you know if the drives you choose are enterprise-grade? A supplier may say its SSD is enterprise quality but can you be sure this marketing claim is true? You …
This is an example:
Enterprise Performance SSD:
HP 400GB 6G SAS SLC SFF (2.5-inch) SC (part #653082-B21), which retails around $12k (US).
Enterprise Mainstream SSD:
HP 800GB 6G SAS MLC SFF (2.5-inch) SC (part #653109-B21), which also retails around $12k (US).
Some monkey downvoted you. You deserved it.
You needed downvoting for providing a helpful answer to a question.
Okay, would the ape responsible please haul themself into an upright position and elucidate the cause of this virtual turd-throw, so I can confirm my suspicion that they merely lusted after one but got angry at the price.
You'd need to have a zip up the back of your head to spend $12K on a 400GB SSD. HP pretty much stick their label on someone else's drives and add a zero to the end of the price.
The author comes across as elitist and not particularly well versed in the technology behind SSDs. AnandTech did an interesting test, running their web site off Intel consumer-grade SSDs (120GB SSD 320s) and concluded they'd be good for 1.5-2.5 years in a database server. The bigger 300GB and 600GB drives double and quadruple that.
SandForce-based controllers have extra tricks to throw into the mix. They de-dupe data and compress it before writing to flash. The result is write amplification of less than 1.0 (as low as 0.14 in some cases).
What you need definitely depends on the workload you expect. Consumer grade drives are fine in some circumstances, and not in others.
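To put some rough numbers behind the endurance and write-amplification points above, here's a back-of-envelope lifetime estimate. Every figure (P/E cycle count, daily host writes, WA values) is an illustrative assumption, not a vendor spec:

```python
# Rough SSD lifetime estimate, in the spirit of the AnandTech numbers above.
# All inputs are illustrative assumptions, not measured figures.

def lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                   write_amplification):
    """Years until the rated program/erase cycles are exhausted."""
    total_writable_gb = capacity_gb * pe_cycles          # raw NAND endurance
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_writable_gb / nand_writes_per_day / 365

# 120GB consumer MLC, ~5000 P/E cycles, 300GB of host writes a day:
base = lifetime_years(120, 5000, 300, write_amplification=3.0)

# A SandForce-style controller compressing/deduping down to WA 0.5
# stretches the same flash several times further:
sf = lifetime_years(120, 5000, 300, write_amplification=0.5)

print(f"WA 3.0: {base:.1f} years, WA 0.5: {sf:.1f} years")
```

With those assumptions the plain drive lands right in the quoted 1.5-2.5 year band, doubling capacity doubles the lifetime, and sub-1.0 write amplification multiplies it again.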
> HP pretty much stick their label on someone else's drives and add a zero to the end of the price.
I think you exaggerate; I suspect they triple it, as with "server class" HDDs.
Anyway, that's irrelevant. The respondent gave the price. Whether it's inflated by the manufacturer is neither here nor there.
> The author comes across as elitist and not particularly well versed in the technology behind SSDs
I'd agree with the latter half (not too well versed) but not elitist. He's a journo who did his homework as best he could, but he isn't familiar with the tradeoffs that you and I (especially you, I suspect) are familiar with, so he missed some important ones. I can pick holes in his story myself (my needs tend to be read-heavy and write-light, which is a gap in his taxonomy that several have pointed out). BUT he is a journo, not a techie, and quite frankly Mr Mellor has impressed me with the effort he's put into his stories (YMMV, but compare with others on this site before you disagree).
> AnandTech did an interesting test, running their web site off of Intel consumer grade SSDs
Jeez, as others have pointed out, a website is predominantly reads. If SSDs' major weakness is writes... ahem? Also, a link would be nice, to check what exactly they did.
> SandForce based controllers have extra tricks to throw into mix...
can't see the relevance.
> What you need definitely depends on the workload you expect. Consumer grade drives are fine in some circumstances, and not in others.
Spot on. This is where the author slipped up. It happens; even supposedly experienced techs make similar mistakes. Shall I tell you the story about a company I worked at that was very nearly wiped out because we had no backups, because "we have RAID"? I'm not joking. My tech mates to the rescue at 2am...
As a separate question to you and others: could flash be avoided by having a set of suitably RAIDed drives (mirrored stripes?) with a decent-quality controller (plenty of battery-backed RAM) to absorb write bursts? I mean, that's pretty much the config of many decent storage devices; it absorbs burst writes and should have the throughput of N disks (which, aggregated, should be arbitrarily high), so what does flash offer here? Is it just that the cost of such an array would exceed that of a comparable SSD?
The article seems to assume that "system builder" is synonymous with "enterprise server builder" which in turn is synonymous with "ultra high end pricey kit vendor". I'm not convinced of any of those.
I have one system, built five or more years ago, which has an SSD system drive. Ultra-high performance? No, it needs an SSD because it's mounted on a 100g centrifuge; a spinning-rust drive would very quickly be reduced to non-spinning pieces.
For plenty of "server" applications the skewed write performance is more than acceptable given the price differential. An application or deduplicated VDI disk image server might not see a write operation from one day to the next, just getting pounded with tens of thousands of reads a second when a lab powers up: anyone wanting to blow a fortune "optimising" write operations there should be unemployed pronto. A mail spool, particularly one with single instance store, might well have a similar usage pattern (non-critical batched writes the user doesn't even normally see, then a hundred or thousand scattered reads as their client fetches mail while they wait and curse the slow system).
Perhaps instead of blindly reciting the kind of marketdroid propaganda ("blah! enterprise! shiny! fast!") that would have FedEx trying to deliver parcels by Ferrari, try understanding the actual workload and requirements? Money splurged on ultra-fast write facilities might well be money down the drain - which, frankly, should equate to job down the drain for whoever wasted it.
Well, your use of SSDs isn't what I would call "mainstream". And if you were using an SSD five years ago, you were well ahead of the current SSD wave. So no, the article isn't a fail; it's really good, in fact. Don't confuse your particular use with the rest of the server industry TODAY.
Back to the article: when it comes to determining which SSDs can be called "enterprise class", you'll quickly see the difference on the price tag too. Enterprise SSDs for backend servers that match what's described in the article are priced in "grands" per 100GB of storage. A single enterprise SSD can be more expensive than the whole server itself.
You might also want to ask how often vendors provide firmware updates. I had an SSD that failed at home but was brought back to life by a firmware update.
"Correct a condition where an incorrect response to a SMART counter will cause the m4 drive to become unresponsive after 5184 hours of Power-on time. The drive will recover after a power cycle, however, this failure will repeat once per hour after reaching this point. The condition will allow the end user to successfully update firmware, and poses no risk to user or system data stored on the drive."
This was a Crucial M4 SSD.
"If you are building systems using solid state drives (SSDs), you need rock-solid reliability and performance – and you won't get it from consumer-grade flash."
Not sure that's true. I've been running a web server business on Intel X25 SSDs 24/7 for three years now with no problems at all. Over 100 machines, not one failure yet.
I have several banks of mainstream SSDs hosting VMs. Blazingly fast, no problems, 3:1 read/write ratio. Our workstations (also on SSDs) have even better read/write ratios. You'll find it is rare (the $100k+/server data centre situation) that you require 10+ full drive writes per day. In those environments you're likely running a very large SAN with 40Gbps links and have loads of money to throw at these SSDs; your data likely requires it.

However, even for mid-level enterprise, setting up a RAID10 with mainstream drives, even if you have to pop drives once a year, the savings are enormous. It would be far easier to simply buy 512GB drives where you had only planned on 200GB ones, and buy twice as many at that, and solve any potential speed or data-resiliency issues by the usual methods: spindle count and RAID level. Think your SandForce 2xxx series' 500MB/s read/write is slow? Buy two and RAID1 them. Still too slow? Make a RAID10 out of a bank of 24, and even leave a few disks out as hot spares. Bump them to 512GB rather than 240GB. A bank of 24 240GB Vertex 3s runs $9,600 (give or take sales/promotions). Assuming 20 live drives in a RAID10, running even just SATA2, and assuming the RAID controller can handle it, you have potentially 3.2GB/s (gigaBYTES, not bits) of read throughput, and half that for writes (each write hits both mirrors). Also, 2.3TB of usable space.

Either that setup or a single 400GB enterprise SSD sitting on a SAS channel in a server... hrm, I don't think it will be a hard choice for me.
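A quick sanity check of those RAID10 numbers. The per-drive figure of 160MB/s is an assumption (SATA2-limited Vertex 3-class drives sustaining well under the interface ceiling):

```python
# Back-of-envelope RAID10 arithmetic for the 24-drive bank described above.
# The 160MB/s per-drive sustained figure is an assumption, not a spec.

def raid10(drives, drive_gb, drive_mbps):
    pairs = drives // 2
    return {
        "usable_gb": pairs * drive_gb,     # half the raw space: mirrored
        "read_mbps": drives * drive_mbps,  # reads served by every member
        "write_mbps": pairs * drive_mbps,  # each write hits both mirrors
    }

r = raid10(drives=20, drive_gb=240, drive_mbps=160)
print(r)
```

That gives 3,200MB/s of aggregate reads and 2,400GB raw usable (close to the 2.3TB formatted figure); note writes come out at half the read figure, since every write lands on both halves of a mirror.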
"Unlike disk media, flash media wears out".
Ah, yes! Explains why I never came across an HDD with a bad sector, much less a failed HDD. Indeed, I don't know where I came up with the term "bad sector" as magnetic media never fails, as the most-learned author pointed out. I guess that's just my poor mind giving out under the stress of all the awesomeness of this most magnificent article...
You know, RAID stands for Redundant Array of *Inexpensive* Disks. As I see it, if the disks you're using cost an arm and/or a leg, then you're doing it wrong. Any and all disks you use will, after all, fail at some point. So you might as well spend the money on proper redundancy and backups.
Sure, enterprise class stuff has its use. But only after you've covered all the other angles and spent all the money needed on all the other bits and bobs and are still left with either a pile of cash still to spend or a performance/reliability hole still to fill.
Even for my own "light consumer" use, I would be shocked and dismayed if I shelled out a ton of money for an allegedly faster SSD device and only got 75MB/s of sequential writes out of it.
Clearly, you can't just blindly trust the logo on the box and assume that it will deliver on its promises.
"""Even for my own "light consumer" use, I would be shocked and dismayed if I shelled out a ton of money for an allegedly faster SSD device and only got 75MB/s of sequential writes out of it."""
Well, if you were to buy a drive that came out a few years ago (the C300 quoted in the article), this is what you should expect. If you were to get the C400, which came out about a year ago (judging by the timestamps on my benchmark logs), you could easily manage 250+MB/s sequential write (for the 256GB model; performance varies with size in SSDs) until you run out of available erased blocks.
I've got to say, there are a few kinds of 'enterprise' application out there. For instance, the one that I work with, which is (somewhat) latency sensitive, and writes reasonably large amounts of data, which works just great with (certain) consumer SSDs.
The key is to have a benchmark that models your application, and then see how a given drive performs over a suitably long period of time. Disregard all of the data from early in the test, when the drive looks good, and then assume you'll be operating in the messy zone, where the drive is scrambling to come up with enough erased blocks to cope with your writes. And then start killing power randomly, and make sure all the data that you think you've written (and sync'd) is still there.
If your application performs alright when the drive is saturated, and your data doesn't disappear, then you're probably better off with a handful of consumer SSDs than a single enterprise drive. An elegant redundancy system (something nicer than boring old RAID, that is) makes drive failure somewhat more tolerable.
That's like saying:
"until electricity matures to a point where electrocution is not an issue I wont be buying,"
complete with the horrid capitalization and punctuation errors, mixed with the whole "I AM DIFFERENT HEAR ME ROAR" fallacy, as well as the awful attempt to string a rational thought into a sequence of words and letters.
I'm a Luddite: aren't SSDs' advantages overstated for general-purpose workloads? Their price/performance and lifetime-per-dollar ratios are far worse than magnetic storage's. No one is going to buy a server with a single SSD that costs $10K and dies in five years** when they could buy one with four HDDs configured as RAID for half the price. So SSDs only make sense as accelerators, caching frequently used but non-critical (rebuildable) data like database indexes, or in vast arrays of servers where the loss of one individual machine doesn't impact service (e.g. in the Googleplex). The former makes sense, but the latter doesn't offer compelling cost/benefit advantages over spinning rust. Why pay the overhead?
**Or maybe two: a typical spec at http://h18000.www1.hp.com/products/quickspecs/13415_div/13415_div.pdf isn't very encouraging.
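The accelerator-versus-bulk-storage trade described above comes down to two different cost metrics. All the prices and performance figures here are illustrative assumptions, not quotes:

```python
# Illustrative $/GB vs $/IOPS comparison; every figure is an assumption.

def cost_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

def cost_per_iops(price_usd, random_read_iops):
    return price_usd / random_read_iops

enterprise_ssd = {"price": 10_000, "gb": 400, "iops": 30_000}
hdd_raid10 = {"price": 2_000, "gb": 1_200, "iops": 4 * 150}  # 4x 15k SAS

# Spinning rust wins decisively on $/GB...
assert cost_per_gb(hdd_raid10["price"], hdd_raid10["gb"]) < \
       cost_per_gb(enterprise_ssd["price"], enterprise_ssd["gb"])
# ...but the SSD wins by an order of magnitude on $/IOPS, which is why
# it earns its keep as a cache/index tier rather than bulk storage.
assert cost_per_iops(enterprise_ssd["price"], enterprise_ssd["iops"]) < \
       cost_per_iops(hdd_raid10["price"], hdd_raid10["iops"])
```

Which metric dominates depends entirely on whether your workload is capacity-bound or seek-bound, which is the whole argument of this thread.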
Really. I bought a personal 128GB Kingston V+100 and got permission to clone my dual-core Centrino work box (4GB RAM). Switched off the page file, Windows Search, and defrag.

Then I did a side-by-side comparison with one of my Acer 8943G home boxes: 8-way Core i7, 16GB of RAM, and twin 600GB HDDs. After watching my work box start from cold in 20 seconds, I wrote a simple read/write tester in C#.
Read/write throughput:
1 job: Beast 26.7 MB/s, Work Weenie 85.2 MB/s (about 3 times as fast)
11 jobs: Beast 23.0 MB/s, Work Weenie 157.4 MB/s (about 6 times as fast)

Time to complete:
1 job: Beast 79.2 secs, Work Weenie 23.6 secs
11 jobs: Beast switched off after a quarter of an hour, Work Weenie 139 secs
The cost of this improvement? 160 quid from Amazon. I'm migrating my home boxes. Worldspan's in Tolworth are doing 512GB for 60-odd pence a gig plus VAT.
I suppose I can't speak for the OP, but I'd be willing to bet he'll just buy another SSD and restore his lost data from back-up.
He gave a price of $160 for the sort of speed-up that money just can't buy outside the SSD market, and the hidden cost is that he might have to use his backups once every few years. That sounds like a pretty good deal.
+1. I think the speedup of SSDs is, if anything, understated.
Most people aren't doing giant writes all the time, but they are doing random reads all the time, and SSDs are massively, hugely, indescribably faster at that. But hey, SSDs are faster at *everything else* too. Just not a lot faster.
Upgrading to SSDs has been the single most noticeable performance upgrade I've done on any of my systems for five years or so. CPU boosts, RAM upgrades, not in the same league. SSDs have made just about every operation I do all the time hugely faster. It really is not a subtle difference.
I got bitten by the bugs with early third-generation SandForce devices, which was annoying, but eh. I switched to a Crucial m4 for my desktop and it's been totally fine (though I should look into that firmware bug mentioned above). I would absolutely hate to have to go back to hard disks.
As far as reliability goes - any data you have only one copy of is data you don't care about. My _important_ data lives on a spinning disk in a server machine and is regularly backed up to a RAID array of other spinning disks. But the machines I actually work on use SSDs and get a huge speed boost from it. They can explode at any time and I don't lose anything I care about. Having said that, neither of them has. I've had the m4 in my desktop for a year or so now, and my laptop is a Vaio Z with dual SSDs configured as a RAID-0 array - yes, this is how Sony does things out of the box, I love Sony... - and that's been running a couple of years without problems.