NetApp's 50 per cent guarantee

NetApp will guarantee customers they will use half as much NetApp storage in virtual server and desktop environments compared to 'traditional storage', but won't make comparisons to specific products from competing suppliers. NetApp's chief marketing officer, Jay Kidd, said: "The pressure for cost reduction has led to the …

COMMENTS

This topic is closed for new posts.
  1. Nate Amsden

    3PAR VDS

    3PAR does VDS using snapshots mainly: hundreds of read-write snapshots running simultaneously with thin copy-on-write technology (it tracks changes only once, regardless of the number of snapshots).
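
    Roughly, the bookkeeping works like this (a minimal Python sketch of copy-on-write snapshot tracking, purely illustrative and not 3PAR's actual implementation):

    # When a block is overwritten, its old contents are preserved once and shared
    # by every snapshot that references it, so N snapshots do not cost N copies.
    class CowVolume:
        def __init__(self, blocks):
            self.blocks = dict(blocks)   # live data: block_id -> contents
            self.preserved = {}          # block_id -> contents saved at first overwrite
            self.snapshot_count = 0

        def snapshot(self):
            # Taking a snapshot is metadata-only here; nothing is copied up front.
            self.snapshot_count += 1

        def write(self, block_id, data):
            # Copy-on-write: save the old contents once, no matter how many
            # snapshots reference the block, then overwrite in place.
            if self.snapshot_count and block_id not in self.preserved:
                self.preserved[block_id] = self.blocks.get(block_id)
            self.blocks[block_id] = data

    vol = CowVolume({0: "base", 1: "base"})
    for _ in range(100):          # a hundred snapshots...
        vol.snapshot()
    vol.write(0, "changed")
    vol.write(0, "changed again")
    print(len(vol.preserved))     # -> 1: the change is tracked only once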

    http://www.3par.com/solutions/utility_computing/vmware_vdi.html

    I don't plan to deploy VDS myself but I will be getting a 150TB 3PAR T400 in a few weeks that I'm very excited about! woohoo!

  2. Anonymous Coward
    Thumb Down

    lame

    So NetApp's RAID-6 config will save a ton of disk compared to anyone else's RAID-10 config, huh?

    Boy, the NetApp marketing team must have really worked round the clock on that one...

    They must be hurting to come up with such pathetic comparisons.

  3. Anonymous Coward
    Thumb Down

    De-Dup...But Not In A Production Environment....

    I love the fact that NetApp states that de-dup can be used on their arrays, but their documentation (whitepapers and tech reports) clearly states that their de-dup (which is a 3rd-party bolt-on) should not be used in a production environment and is completely unsupported because of this....

  4. Anonymous Coward

    ummmm... no.

    Not sure where you get your information. NetApp de-dup is not a 3rd party bolt on and is supported in production environments.

    http://www.netapp.com/us/products/platform-os/dedupe.html

    Got an axe to grind?

  5. Anonymous Coward

    Oh dear

    This is no more than a cynical attempt to divert attention away from the recent debacle around NetApp's rather pathetic raw-to-usable capacity figures. There was a time when EMC was the bully boy of the industry; having mellowed somewhat, it now seems NetApp wants the mantle. NetApp's increasingly aggressive attitude has come to the fore recently, with a very coordinated blogging community combined with equally aggressive and swift put-downs to any form of dissent regarding their product or positioning.

  6. Anonymous Coward
    Black Helicopters

    More Spin

    Just like their triple disk protection argument: "I know we can't do proper RAID 10, so let's just make it sound like software mirroring of RAID 4 is a really good idea" - genius! They are the masters of using spin to turn competitive weaknesses into advantages.

    If you look at most vendors' TCO studies you can soon spot flaws when you see the configurations used for their "independent" comparisons done by the supposedly independent analyst (read: stooge). The NetApp one is a classic example, in that they always use RAID 6/DP versus RAID 1, use space-efficient snapshots, and assume the other vendors use full copies (when most also have space-efficient snapshots).

    They do have some great software and features, but I lose respect for them as a company when they try this kind of BS.

  7. Rob Foster
    Thumb Down

    Creative math...

    Researching NetApp: they seem to recommend 8 (data) + 2 (parity) for their RAID-DP model, and no more than 60% space utilization before you risk degrading performance. (Once you dig into how RAID-DP actually works, this makes sense.)

    Immediately one sees available storage = 0.8 x 0.6 x raw storage = 0.48 x raw storage. This is worse than RAID 1+0.

    And then for dedup: NetApp does post-write deduplication, which means you need to write data to disk first *before* cleaning up replicated data. So you need to allow around a .75 multiplier for deduplication space, and if you use snapshots there is some overhead, so multiply by .8 again and you are at .48 * .75 * .8 = .29 utilization of raw storage. Call it .25 after allowing for metadata, etc. (Yes, I know you don't really need to do snapshots, but then why pay for it?)

    Going back to their magic 10.75TB value, you get 2.7TB usable. Getting the next 50% free puts you at 4.0TB usable before you start paying again (assuming you are willing to face your cheesed-off financial controller, cap in hand, for more storage budget).
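
    For what it's worth, here is the arithmetic spelled out in Python (the multipliers are the ones assumed in this comment, not NetApp's official figures):

    # Back-of-envelope usable-capacity maths using the multipliers assumed above.
    raid_dp   = 8 / 10    # 8 data disks out of 10 in an assumed 8+2 RAID-DP group
    util_cap  = 0.60      # assumed utilization ceiling before performance degrades
    dedup_hdr = 0.75      # headroom for post-write dedup (data lands first, cleanup later)
    snap_hdr  = 0.80      # assumed snapshot overhead

    usable_fraction = raid_dp * util_cap * dedup_hdr * snap_hdr
    print(round(usable_fraction, 3))        # 0.288, call it 0.25 after metadata

    raw_tb = 10.75
    print(round(raw_tb * 0.25, 1))          # 2.7 TB usable from the "magic" 10.75 TB
    print(round(raw_tb * 0.25 * 1.5, 1))    # 4.0 TB after the free 50%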

    NetApp uses the same math as the American banking industry.

  8. Nick Triantos

    Guarantee

    Rob,

    NetApp's RAID-DP practices recommend 14+2 RAID groups, not 8+2. Furthermore, there's no such recommendation for 60% utilization. In fact, for dedup we recommend 3-5% available space in the volume, because we need the space to store the fingerprint file and some temp files we use for sorting the contents of the fingerprint file prior to doing the comparison of the generated MD5 hashes which reside in the fingerprint file (a byte-to-byte comparison occurs if two MD5 hashes match). Now, if we recommended 60% maximum volume utilization, it would be pretty silly to recommend 3-5% available space in the volume, wouldn't it?
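
    To make the fingerprint/byte-comparison point concrete, here is a toy Python sketch of that style of dedup (illustrative only, not ONTAP's engine):

    import hashlib

    # Hash each block; only do the expensive byte-for-byte comparison when two
    # fingerprints match, as described above.
    def dedup(blocks):
        seen = {}                 # fingerprint -> unique blocks carrying that hash
        unique, freed = [], 0
        for block in blocks:
            fp = hashlib.md5(block).digest()
            candidates = seen.setdefault(fp, [])
            if any(block == c for c in candidates):   # byte-to-byte check on a hash match
                freed += 1                            # duplicate: block can be shared/freed
            else:
                candidates.append(block)
                unique.append(block)
        return unique, freed

    blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
    unique, freed = dedup(blocks)
    print(len(unique), freed)     # -> 2 unique blocks kept, 2 duplicates freed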

    As far as your 60%-before-degrading-performance comment goes, my response would be that it is upon those who are making the claim to prove it. At least that's how the legal system works. I can easily claim that anyone who buys a system other than NetApp will get sick and suffer from severe diarrhea. I can argue that until I'm blue, but I can't prove it. Neither can you.

    Just because NetApp's dedup is post-processing does not mean you have to have ALL the space up front. There are two ways to skin this cat. One way is shotgunning it and moving everything at one time. However, that's not how people deploy server virtualization. It's an evolution, not a revolution. So what you do is allocate some space up front, deploy, dedup, and then use the freed-up blocks to deploy more. If you follow a staged approach, you don't have to have the entire space up front.
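
    A rough illustration of that staged approach in Python (the dedup ratio and wave sizes below are invented purely for the example):

    # Provision a slice, deploy some VMs, dedup, then reuse the freed blocks for
    # the next wave. Numbers are made up to show the shape of it.
    dedup_ratio = 0.5      # assume half of each wave dedups away
    wave_gb     = 200      # logical data deployed per wave
    waves       = 4

    settled, peak = 0.0, 0.0
    for wave in range(1, waves + 1):
        peak = max(peak, settled + wave_gb)       # post-process: the full wave lands first
        settled += wave_gb * (1 - dedup_ratio)    # then dedup frees half of it
        print(f"wave {wave}: peak {peak:.0f} GB, settled {settled:.0f} GB")

    print(f"peak need {peak:.0f} GB vs {waves * wave_gb:.0f} GB if everything landed at once")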

    Now the questions here are these:

    What does everybody else guarantee on their dime?

    What do customers who participate in the program have to lose when dedup is FREE and the provisions are in place to address those who may not benefit from it?

    Why don't you pay a visit to StorageMojo's blog and take a look in the comment section at the space-savings output posted by a NetApp customer already doing it? He's not the only one.

    http://storagemojo.com/2008/09/30/de-duplicating-primary-storage/#comments

  9. Nick Triantos

    Guarantee

    Anonymous, you make it sound as if RAID 10 provides triple-disk protection. It doesn't. It does provide double-failure protection, although only selectively, depending on which pair fails.

    RAID 4 = Tolerant of up to 1 disk loss

    RAID-DP = Tolerant of up to 2 disk losses

    SyncMirror + RAID 4 = Tolerant of up to 3 disk losses (any 3)

    SyncMirror + RAID-DP = Tolerant of up to 5 disk losses (any 5)

    Show me another RAID configuration that provides this level of protection, and the associated cost. BTW... people deploy this stuff.
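
    If anyone wants to sanity-check the "depending on which pair fails" point, a quick Python enumeration of double-disk failures makes it obvious (an eight-disk group is assumed for illustration):

    import itertools

    # With two simultaneous disk losses, RAID 10 survives only if the failures land
    # on different mirrored pairs, while a dual-parity group survives any two.
    n_disks = 8
    pairs = [(i, i + 1) for i in range(0, n_disks, 2)]   # RAID 10 mirrored pairs

    def raid10_survives(failed):
        return not any(a in failed and b in failed for a, b in pairs)

    double_failures = list(itertools.combinations(range(n_disks), 2))
    survived = sum(raid10_survives(set(f)) for f in double_failures)
    print(f"RAID 10: survives {survived} of {len(double_failures)} double-disk failures")
    print(f"Dual parity: survives {len(double_failures)} of {len(double_failures)}")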

    Of course we'd compare RAID-DP against RAID 1. RAID-DP is more resilient than RAID 1 or RAID 10. And from a performance standpoint we proved we can beat RAID 10 using an industry-accepted, industry-written, cache-hostile, audited benchmark. Anyone who had questions or qualms about it had 45 days to respond and didn't... In fact they still haven't responded.

  10. Anonymous Coward
    Thumb Down

    Putdowns !

    >Anyone who has had questions or qualms about it had 45 days to respond and didn't... In fact they still haven't responded.

    The only people who would have had an interest in responding to this were EMC, and: "Just to set the record straight, only SPC members can challenge SPC results. EMC is not a member of the SPC."

    The point here is that there are so many veiled preconditions attached to this offer that the 50% guarantee headline is extremely misleading.

    * New FAS systems must be purchased for primary storage only. V-Series, S line, and VTL are excluded.

    * The program is not applicable to N series from IBM.

    * Can be using any one or more of the following protocols: FC, iSCSI, and NFS.

    * Must be running Data ONTAP® 7.3 or later. Data ONTAP 10 is excluded.

    * Capacity on the system supporting the virtual environment must be at least 14 drives.

    * Must agree to have the following features enabled:

    o AutoSupport

    o RAID-DP®

    o Thin provisioning without LUN reservation

    o Deduplication

    o NetApp Snapshot™

    * Must follow the NetApp best practices described in the following technical reports:

    o TR 3428: NetApp and VMware VI3 Storage Best Practices

    o TR 3505: Deduplication Implementation and Best Practices

    o Whitepaper: 50% Virtualization Guarantee Program Technical Guide

    * The following services are required to help with the implementation. Must purchase a minimum level of Professional Services deployment and implementation services as follows:

    o NetApp Installation and Deployment

    o NetApp VMware Implementation Service

    * No more than 10% of the following data types under the Program: images and graphics, XML, database data, Exchange data, and encrypted data. This also means that large database and Exchange deployments are excluded from this Program. These data types are deduplicated at a lower rate.

    * Must have at least 10 similar virtual machines per flexible volume, so that deduplication can work properly to realize the capacity savings.

    * Excludes workloads with high performance requirements that require spindles; to be determined by SE/PS during sizing.

  11. Nick Triantos

    response

    This particular SPC-1 test didn't affect just EMC. It also affected Dell, which resells the particular CLARiiON array used for the benchmark. So EMC could still have challenged the result via its Dell relationship, because Dell IS a member of the SPC. In fact, the timeframe was not 45 days as I had previously stated, but 60.

    Furthermore, while the rules were written assuming that challenges would come from members, the SPC was aware of the unique nature of that particular event and would have permitted EMC to challenge even if they chose to maintain their non-member status. Like I said before, the SPC auditor notified EMC of the SPC's intention to allow them to respond. In fact, the auditor stated the following in a TechTarget interview:

    "Baker said EMC has not challenged yet. “Absolutely not–and they have been notified, because I spoke with them myself,” he said. He added, “as the auditor I feel the result produced by NetApp is representative.”

    Of all the things listed, following best practices according to published TRs, snapshots, dedup, AutoSupport, thin provisioning, and grouping VMs of the same OS type to get good dedup ratios are all things NetApp has been doing for years. They were not invented for the purposes of this Program.

    Furthermore, since we're on the hook for this, I see no problem having our PS organization participate in the design and deployment process. Sounds like a no-brainer to me. It should also be evident to those who understand dedup that certain types of *primary* data just don't dedup well at all (images, encrypted data, etc.), so you can't guarantee dedup of these types of data no matter what the dedup algorithm is. This is about deduping *primary* data, not just backups and archives.

This topic is closed for new posts.
