* Posts by Barry Burke

16 publicly visible posts • joined 6 Jun 2008

Micron streaks away with PCIe flash

Barry Burke

Ask Fusion-IO

The drivers for Fusion-IO cards are notorious for placing a heavy burden on the host CPU - that is probably why it is mentioned...

Tales from the storage frontier: What's next for flash, disk and tape

Barry Burke
Alert

SSDs don't generate I/Os

A common misconception - fill an array with SSDs and you'll overpower it with I/Os.

Fact: the maximum I/Os an array can handle is unrelated to the disk or solid state drive technology behind it - you simply reach that limit with fewer drives when they're Solid State. A well-designed array won't "fall over" if the drives support more I/Os than the array does...and indeed, well-designed SSDs will respond to every I/O significantly faster than disk drives will.
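
To put rough numbers on that (every figure below is assumed for illustration, not any particular array's spec), the arithmetic looks like this:

    # Assumed, illustrative figures - not any particular array or drive.
    ARRAY_CEILING_IOPS = 200_000   # the array's own front-end limit
    HDD_IOPS = 180                 # one 15K rpm drive, small random I/O
    SSD_IOPS = 20_000              # one enterprise SSD

    def drives_to_saturate(per_drive_iops):
        # Ceiling division: drives needed to hit the array's own limit.
        return -(-ARRAY_CEILING_IOPS // per_drive_iops)

    print("15K HDDs needed:", drives_to_saturate(HDD_IOPS))   # 1112
    print("SSDs needed:    ", drives_to_saturate(SSD_IOPS))   # 10

Either way the ceiling is the same 200K - the SSDs just get you there with two orders of magnitude fewer spindles.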

Also, I believe the term "flash" is being used as a synonym for Solid State drives here...as you note, there are many other technologies coming down the pike...PCM looks to be next, with cost and performance advantages over NAND that will undoubtedly accelerate adoption...

Barry Burke
Alert

Power down those disk archives

Another common misconception is that disk drives *must* always be spinning, and that is used here as an argument in favour of tape.

So...why not just power down those drives that you've copied your archive to? Spin them up from time to time to verify that they still work and the data hasn't corrupted, and voilà! Cheaper than tape to buy AND cheaper than tape to maintain.
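
For the "verify from time to time" part, a rough sketch of what a scrub pass could look like (the device name, the stored digest and the hdparm dependency are all assumptions for illustration):

    import hashlib
    import subprocess

    def verify_archive_disk(dev, expected_digest, chunk=1 << 20):
        # Read the whole device and compare its SHA-256 against the
        # digest recorded when the archive was written. Reading the
        # raw device forces a spun-down drive awake.
        h = hashlib.sha256()
        with open(dev, 'rb') as f:
            while True:
                buf = f.read(chunk)
                if not buf:
                    break
                h.update(buf)
        ok = (h.hexdigest() == expected_digest)
        # Put the drive back into standby (spun down) once we're done.
        subprocess.run(['hdparm', '-y', dev], check=True)
        return ok

    # e.g. run monthly from cron: verify_archive_disk('/dev/sdb', saved_digest)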

FWIW, the EMC Data Domain Archiver announced earlier this year does just that...

Want your archive off-site? Easy - bury the Archiver in a mountain, and replicate your archives to it...again, much safer than shipping tapes, and much faster, too!

Are SPEC file benchmarks broken?

Barry Burke
Megaphone

Vive le Debate

The REAL value of these benchmarks?

The debate.

Many will remember that I have repeatedly challenged the value of benchmarks over my entire career, often making many of the same arguments being made here. Each time, I would be chastised that I had no justification for my position, since EMC did not participate in the practice of benchmarketing.

Well, now they do, with a vengeance.

And still my arguments against benchmarks remain the same - I have not changed my position. It's just that now I have a bona fide soapbox from which to proclaim them.

Glad to see there are far fewer Deaf Ears in the anti-benchmarking community these days.

Welcome aboard!

HDS first past VAAI post

Barry Burke
Boffin

reality check - vsp vaai support

http://thestorageanarchist.typepad.com/weblog/2011/02/3020-reality-check-vsp-vaai-support.html

Barry Burke
WTF?

EMC has been shipping VAAI since 2010

VAAI support has been available for all VMAX and CLARiiON arrays since last year, and is also standard for the new VNX arrays.

EMC boosts revenues 20% in Q3

Barry Burke

The "VMAX Effect"

Cute phrase, Chris - almost sounds like a compliment :-)

Matt -

From the 2009 Q3 Earnings press release:

"Third-quarter (2009) consolidated revenue of $3.52 billion declined 5% compared with the year-ago period. " (2009Q3 press release is here: http://www.emc.com/about/news/press/2009/20091022-earnings.htm )

In retrospect, EMC wasn't doing so badly during the dip. And with management predicting the highest revenues in the company's history (~$16.9B), this year's numbers are clearly more than a simple recovery from that dip.

(Disclosure: I work for EMC, in the Symmetrix organization...one of the minions behind The VMAX Effect).

NetApp slims for Storage Foundation

Barry Burke

Actually, standards are near

Symantec uses a SCSI command extension known as WRITE_SAME with an UNMAP flag, which T10 has ratified (or soon will). T10 has also defined the UNMAP command, which affords a bit more cooperation between initiator and target; this too is ratified (or soon will be).

Most of the industry waited until the two standards stabilized before implementing them, and we should soon start seeing more file systems, volume managers and hypervisors using one or both of these APIs.

FWIW, these APIs have value far beyond just freeing up space in thin devices - Symmetrix will support these APIs for non-thin devices as well, interpreting them to mean that the UNMAPped blocks need no longer be replicated (for example).

Also, as with the SATA TRIM command, some Flash drives will support these APIs as a means to improve performance by freeing up blocks that are no longer needed...
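
For the curious, here's roughly how a host hands blocks back on Linux (a minimal sketch - the BLKDISCARD ioctl is real, but the device name and offsets are illustrative). The block layer translates the discard into SCSI UNMAP, WRITE_SAME with the UNMAP bit, or ATA TRIM, depending on what the device advertises:

    import fcntl
    import os
    import struct

    BLKDISCARD = 0x1277  # _IO(0x12, 119) from <linux/fs.h>

    def discard_range(dev, offset, length):
        # Tell the device that the byte range [offset, offset+length)
        # is no longer needed. Offset and length must be sector-aligned.
        fd = os.open(dev, os.O_RDWR)
        try:
            fcntl.ioctl(fd, BLKDISCARD, struct.pack('QQ', offset, length))
        finally:
            os.close(fd)

    # e.g. discard_range('/dev/sdc', 0, 1 << 30)  # release the first GB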

STEC on a roll for enterprise SSDs

Barry Burke
Stop

SSD cheaper than HDD?

Not!

You postulate an STEC ZeusIOPS at $220 - that's less than most 15K rpm HDDs these days. I can't provide actual pricing, but I can confirm that a ZeusIOPS costs a multiple of the cost of a 15K HDD, not a fraction. You might want to adjust and revisit your maths.
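
The maths is easy to redo with placeholder figures (invented for illustration, not actual prices or specs) - the point being that the two device types win on entirely different axes:

    # Invented figures for illustration only - not actual prices.
    drives = {
        '15K HDD': {'price': 220.0,  'gb': 300, 'iops': 180},
        'SSD':     {'price': 2200.0, 'gb': 146, 'iops': 20000},
    }

    for name, d in drives.items():
        print(f"{name}: ${d['price'] / d['gb']:.2f}/GB, "
              f"${d['price'] / d['iops']:.3f}/IOPS")

    # 15K HDD: $0.73/GB,  $1.222/IOPS
    # SSD:     $15.07/GB, $0.110/IOPS

Per GB the SSD costs a multiple of the HDD; per IOPS it's the other way around. Which comparison matters depends on what you're buying the drive for.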

NetApp says no one wants to buy NetApp

Barry Burke
Dead Vulture

Not a new crusade

NetApp has been gunning for EMC for years. Confusing the market with 2 (or is it THREE) independent and incompatible de-dupe solutions isn't necessarily a winning strategy, especially for a company that touts "Unified Storage" as their differentiator.

Nor will a half-hearted OnTap cluster-based array be one - V-Max had already obsoleted that notion before it ever saw the light of day. Scale-out requires a time-proven SW infrastructure: Enginuity has been scale-out for over 18 years.

Big Blue bows to Zeus

Barry Burke
Boffin

IBM not first with STEC SSDs in servers

Check with the folks over at Sun - they've been using STEC SSDs in their servers for a while (well, if Thumper qualifies as a Server, that is).

Where will EMC's restructuring axe fall?

Barry Burke
Stop

Solutions vs. Products

Storagezilla is correct - the responsibility (and staffing) for these products indeed falls under the domain of Dave Donatelli's Storage Division, and not CMA.

EMC.com intentionally presents EMC products under multiple categories, even if they are "made" by different EMC divisions. For example, if you look around, you'll find Centera under several different categories in addition to "Archiving", including "Backup and Recovery," "Enterprise Content Management," and "Storage" ( http://www.emc.com/products/category/storage.htm ).

Is flash a cache or pretend disk drive?

Barry Burke
Boffin

Oh, and I almost forgot

David, you're going to have to help me understand why I need a device that can deliver more IOPS than the CPU can handle?

When I see that, immediately I start thinking - what I REALLY need is the ability to share all these IOPS across multiple different CPUs and applications, instead of wasting them buried inside a single server.

Answer: put all the IOPS in an external storage array and amortize the cost and performance across multiple applications!

BRILLIANT!!!

Barry Burke
Heart

But it's not quite that simple

Without getting into the interface battles (PCI vs. FC/SAS/IB/etc), consider a few things:

PCI/x (et al) is limited in the number of devices it can support and the distances they can be from one another, while FC (et al) is far easier to extend over distance and can support far more addressable devices.

Performance-sensitive datasets frequently exceed the size of even a few disk drives. Given the cost of NAND in the short term, the most economical use could well be as an L3 cache instead of a tier 0, at least for some applications.

There are precious few applications whose access density is hundreds of thousands of IOPS against a few hundred GB of data - more typically, high-IOPS applications require far more storage capacity (if they didn't, they'd just load it all into SDRAM).

As dynamic cache, NAND isn't significantly more dense than DDRx SDRAM, while it is significantly slower (particularly for writes). And while it is cheaper per GB, its cost per I/O at a given response time isn't that different from SDRAM, either.

The default I/O size for most database engines is 8KBytes - locality of reference studies show that 8K is better than 4K for caching (you basically get the extra 4K for "free"). Other studies have shown that modern databases perform better with 128KB read granularity. Smaller isn't necessarily better, unless you're truly operating at memory bus speeds - NAND is too slow for that.
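
A toy simulation of that locality effect (the access pattern, cache size and everything else here are invented purely for illustration) - same cache byte budget, only the line size changes:

    import random
    from collections import OrderedDict

    def hit_rate(line_size, cache_bytes, accesses):
        # LRU cache of whole lines, keyed by line number.
        max_lines = cache_bytes // line_size
        cache, hits = OrderedDict(), 0
        for addr in accesses:
            line = addr // line_size
            if line in cache:
                hits += 1
                cache.move_to_end(line)
            else:
                cache[line] = True
                if len(cache) > max_lines:
                    cache.popitem(last=False)
        return hits / len(accesses)

    # A random walk: most references land near the previous one,
    # the way locality-of-reference studies say real workloads do.
    random.seed(1)
    addr, accesses = 0, []
    for _ in range(200_000):
        addr = max(0, addr + random.randint(-8192, 8192))
        accesses.append(addr)

    for line in (4096, 8192):
        print(f"{line // 1024}K lines: "
              f"{hit_rate(line, 256 * 1024, accesses):.3f} hit rate")

With clustered references, the 8K lines pull in the neighbouring 4K before it's asked for - that's the "free" extra 4K.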

As permanent storage, NAND needs added layers of data integrity protection, and not just within the device itself - mirroring or RAID is required to protect against the inevitable device failure. This adds a not insignificant amount of performance-sapping overhead - it is doubtful that the Quicksilver configuration would have come anywhere close to 1M IOPS had error detection, correction and prevention been included.
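
The textbook small-write penalties make that overhead easy to quantify (the 1M-IOPS figure and the 30% write mix below are illustrative):

    # Back-end I/Os generated per front-end I/O (small-block case):
    #   RAID-1: every write lands on both mirror copies.
    #   RAID-5: read old data, read old parity, write data, write parity.
    WRITE_PENALTY = {'RAID-1': 2, 'RAID-5': 4}

    def backend_iops(front_iops, write_fraction, layout):
        reads = front_iops * (1 - write_fraction)   # one back-end read each
        writes = front_iops * write_fraction * WRITE_PENALTY[layout]
        return reads + writes

    for layout in ('RAID-1', 'RAID-5'):
        print(layout, backend_iops(1_000_000, 0.3, layout))
    # RAID-1: 1.3M back-end IOPS; RAID-5: 1.9M back-end IOPS

A headline 1M front-end IOPS needs 1.3-1.9M raw IOPS underneath once the data is actually protected.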

Although counter-intuitive, customers frequently find that applications perform better with external SDRAM SSDs and even large-cached storage arrays than they do with internal caches OF THE SAME SIZE - there's a significant advantage in cache algorithms that are disk-aware (seek & rotational positioning, etc.)

And the most important consideration is this - the folks arguing for "in the server" either build servers or build PCI-bus products; the ones arguing for "in the storage" build arrays or drive-form-factor devices. But both servers and storage today use SDRAM and spinning rust - there's no reason that NAND (or any other technology) has to be an Either-Or discussion.

The reality will be both will thrive, and most likely within the same I/O path. The debate is truly pointless!

- the storage anarchist

Battle of the SSD strategies: Sun vs. EMC

Barry Burke

Not so sure about the need for end-to-end integration

The implication that the "best" flash solution requires end-to-end integration of servers and storage has yet to be proven - especially since storage arrays today are capable of delivering better performance than even server-embedded flash can.

And while performance is one benefit that drives demand for external storage, it's not the only one, nor necessarily the most important one. Consolidation, resource sharing, multi-platform/multi-application replication consistency, back-up offloading, parallel development using snapshots of "live" data, thin provisioning and data mobility are among the other reasons people use external storage. Many of these cannot be adequately addressed by embedded storage approaches.

I suspect the end-game will look more like it does today than anything radically integrated - with flash playing a role on both sides of the SAN links. And if there does come some neat new way of leveraging Flash that requires cooperation between the host and the storage, inevitably customers will demand it be implemented using well-defined standards so as to avoid any threat of end-to-end vendor lock-in.

Barry Burke
Go

Complimentary solutions

That should be Sun AND EMC - there's no reason for these to be thought of as competitive alternatives...except that EMC is shipping Flash drives already, while Sun has made more of a statement of direction.

A cached disk array already delivers I/O to servers much, much faster than disk drives alone can - and thus servers have already been "optimized" for faster-than-disk (and even faster-than-flash) I/O performance when dealing with SAN-based storage. And since RAM is still so much faster than NAND Flash, cached arrays still add significant benefit to I/O performance even for Flash-based storage.
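
Back-of-envelope (all service times assumed for illustration): even a modest hit rate in array DRAM pulls the average response time well below what the backing media alone could do:

    def avg_latency_ms(hit_rate, cache_ms, media_ms):
        return hit_rate * cache_ms + (1 - hit_rate) * media_ms

    # Assumed service times: 0.2 ms from array DRAM cache, 1 ms from a
    # Flash drive, 6 ms from a 15K disk; 70% of reads hit the cache.
    for media, ms in (('Flash drive', 1.0), ('15K disk', 6.0)):
        print(media, avg_latency_ms(0.7, 0.2, ms), 'ms average')
    # Flash drive: 0.44 ms average; 15K disk: 1.94 ms average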

As to how the inherent characteristics of NAND Flash are handled in the devices EMC is offering today, I've covered much of this in my blog:

http://thestorageanarchist.typepad.com/weblog/2008/01/0059-bold-fast.html

Net-net: customers and applications simply treat the devices as they would any SAN-based disk storage - all the management is handled within the drive and with the cooperation/assistance of the storage operating system. NBD.

Unfortunately, as I pointed out in my blog yesterday, none of the other vendors have come forward to explain their approaches yet:

http://thestorageanarchist.typepad.com/weblog/2008/06/1010-sun-adds-c.html

They have no choice but to address these characteristics, but they've been mum on the details so far...