NetApp accused of short-stroking its new hardware

NetApp's bombshell NFS benchmark record has generated accusations that it is artificially boosting performance by short-stroking disks behind the scenes and scaling up rather than out. Short-stroking is the technique of accelerating data transfer rates by only accessing a small part of each disk's surface, massively wasting …

COMMENTS

This topic is closed for new posts.
  1. Changeyourgamenow
    Go

    Love it!

    Two short-stroking masters accuse each other! Remember the VG gateway with 4 VNXs and a Symm in the backend. EMC gets its own medicine back. $2.5B was paid for Isilon and EMC will be desperate to make it work. NetApp, on the other hand, was quiet for six months and now wants its mojo back in the game. Keep going, both!

    Unified scale-out is key for EMC to stop the NetApp train. Cheap games like painting pavements and slinging accusations do not help.

  2. Anonymous Coward
    Anonymous Coward

    Either way...

    No one would buy NetApp for performance. It's just not there. They're getting better, but they still suck for performance.

    And all the benchmarks are run on systems set up to get the best numbers for that test. All the vendors do it.

  3. Anonymous Coward
    Anonymous Coward

    Sounds like...

    ...an EMC sales pitch.

  4. dikrek
    Stop

    Isilon short-stroked more

    D from NetApp here...

    Please, everyone - you can go to the spec.org site and read both submissions. Tons of detail, without the he-said-she-said theatricals.

    NetApp: http://bit.ly/utDOQR

    Isilon: http://bit.ly/s1IFH6

    Isilon used about 1/7th of its available space (which totals over 800TB).

    NetApp used about 1/3rd of its available space (which totals over 500TB).

    All clearly stated in the official submissions.

    Do the math.
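
    If you want to sanity-check those fractions, here is a rough back-of-the-envelope sketch in Python. It simply takes the round figures above at face value, so treat the output as approximate rather than as numbers from the submissions themselves:

        # Back-of-the-envelope check of the fractions quoted above.
        # Capacities and fractions are the round numbers from this comment,
        # not exact values from the SPEC submissions.
        systems = {
            "Isilon": {"available_tb": 800, "fraction_used": 1 / 7},
            "NetApp": {"available_tb": 500, "fraction_used": 1 / 3},
        }

        for name, s in systems.items():
            touched = s["available_tb"] * s["fraction_used"]
            idle = s["available_tb"] - touched
            print(f"{name}: ~{touched:.0f}TB touched, ~{idle:.0f}TB left untouched "
                  f"({s['fraction_used']:.0%} of available space exercised)")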

    Anyway, the point of the NetApp architecture is primarily that it's a general-purpose unified storage system, with the ability to have up to 24 nodes clustered together.

    It's not a niche architecture like the scale-out NAS vendors'.

    As a result, there are a couple of things the niche scale-out NAS architectures can do that NetApp can't, and about 100 things NetApp can do that the niche scale-out NAS vendors can't.

    Deciding what you need for your business comes down to which features you actually require.

    Read here for detailed analysis: http://bit.ly/uuK8tG

    Thx

    D

  5. StorageJunkie

    Storagezilla has its own interpretation of scale-out

    Add Storagezilla to the mix of those storage vendors who have their own interpretation of scale-out.

  6. Alex McDonald 1

    Alex McDonald of NetApp here.

    Chris, my apologies; I promised you some reasoned arguments and background information as to why EMC/Isilon appear to be misunderstanding the specSFS benchmarks. Since you've published, I'm replying here.

    Twomey of EMC makes one valid point: "Scale-out means different things to Isilon, [IBM] SONAS, [HP] IBRIX and NetApp." But this isn't about definitions, or about what we each mean by scale-out or scale-up or scale-anything; it's about scale -- full stop -- and a benchmark which is tightly defined (and where we spanked EMC). The rest of his arguments are, as usual, diversionary nonsense. What's eating Twomey is the fact that NetApp's submission was smaller, cheaper and faster.

    But I am surprised at Peglar, Isilon's Americas CTO (chief technology officer), because he betrays a serious misunderstanding of the benchmark; I would have expected him to be better informed. Here's what he should know.

    The specSFS benchmark creates 120MB of dataset for every requested NFS operation. You can't control how much space the benchmark is going to use -- in fact, the usual complaint is about how big the SFS dataset gets. We (NetApp) chose a size of 12TB for each volume, giving 288TB. The main number to look at for the benchmark is the fileset size created, which was 176,176GB (176TB) for the 24-node test. We could have created much bigger volumes and exported the capacity of the entire system at 777TB. That would have made no difference to the results, since the fileset size created would *still* have been 176TB.

    Isilon exported all of its usable capacity: 864TB. The benchmark dataset size for them was 128,889GB (129TB).
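
    Since the fileset size is driven purely by the requested load, you can even work backwards from the published fileset sizes to roughly how much load each submission asked for. A quick sketch (the inferred op counts below are back-of-the-envelope numbers derived from the 120MB-per-op rule, not the official results):

        # The SFS fileset is roughly 120MB per requested NFS op/sec, so its size
        # follows the requested load, not the exported capacity. Dividing the
        # published fileset sizes by 120MB gives a rough idea of the requested
        # load in each submission (approximate, not the official results).
        MB_PER_REQUESTED_OP = 120

        fileset_gb = {"NetApp": 176_176, "Isilon": 128_889}

        for name, gb in fileset_gb.items():
            inferred_ops = gb * 1000 / MB_PER_REQUESTED_OP  # GB -> MB, then per op
            print(f"{name}: {gb:,}GB fileset => ~{inferred_ops:,.0f} requested ops/sec (rough)")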

    So, on inspection, it took Isilon 3,360 10K RPM disk drives (plus 42TB of flash SSDs) to service 129TB of data. NetApp took 1,728 15K RPM disk drives (plus 12TB of Flash Cache) to service 176TB of data.
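
    Or, putting the spindle counts against the data each system actually serviced (a deliberately crude ratio that ignores the flash tiers and the RPM difference):

        # Crude spindles-per-TB-serviced ratio from the figures above.
        # Ignores the SSD/Flash Cache tiers and the 10K vs 15K RPM difference.
        configs = {
            "Isilon": {"drives": 3_360, "data_serviced_tb": 129},
            "NetApp": {"drives": 1_728, "data_serviced_tb": 176},
        }

        for name, c in configs.items():
            ratio = c["drives"] / c["data_serviced_tb"]
            print(f"{name}: {ratio:.1f} disk drives per TB of benchmark data serviced")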

    Now who's short stroking?

    There are two uninformed arguments we hear about benchmarks all the time, and I would have thought Peglar understood them and why they aren't relevant.

    Argument 1: If one doesn't touch every byte of the exported capacity, then the system is being gamed so as to short-stroke the disks and gain an unfair advantage.

    Response 1: There will never be a real-world workload that touches *every single byte* of all available capacity. That is not the way systems are, or ever will be, used. Benchmarks model a realistic workload and measure systems under that load, not bizarre edge cases.

    Argument 2: Creating LUNs that are smaller than the maximum capacity amounts to short-stroking and gives an unfair advantage.

    Response 2: Modern filesystems no longer couple the data layout to the exported capacity, so there is no performance advantage related to LUN size or exported capacity. As long as the same amount of data is accessed on each system, the comparison is a fair one; or, as in the NetApp submission, where a *lot* more data is being accessed, the benchmark demonstrates that it's a much better performer. If you are seeing a performance difference that is tied to exported capacity, you might want to consider a NetApp system, which does not have such an antiquated data layout mechanism.

    Summary: The total exported capacity is the combined capacity of the volumes created. It does not have any bearing on the performance obtained.
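
    If it helps to see why, here is a toy thin-provisioning sketch. The Pool and ThinVolume classes are purely hypothetical and have nothing to do with any real filesystem's internals; the point is only that when physical blocks are allocated as data is written, two volumes with very different exported sizes holding the same amount of data touch exactly the same number of physical blocks:

        # Toy thin-provisioning sketch (hypothetical classes, not any real
        # filesystem's internals): physical blocks are handed out from a shared
        # pool as data is written, so the exported size of a volume has no
        # bearing on how many physical blocks actually get touched.
        class Pool:
            def __init__(self, physical_blocks):
                self.physical_blocks = physical_blocks
                self.next_free = 0

            def allocate(self, n):
                assert self.next_free + n <= self.physical_blocks, "pool exhausted"
                blocks = list(range(self.next_free, self.next_free + n))
                self.next_free += n
                return blocks

        class ThinVolume:
            def __init__(self, pool, exported_blocks):
                self.pool = pool
                self.exported_blocks = exported_blocks  # what the client sees
                self.touched = []

            def write(self, n_blocks):
                self.touched += self.pool.allocate(n_blocks)

        pool = Pool(physical_blocks=1_000_000)
        small = ThinVolume(pool, exported_blocks=100_000)  # small exported size
        large = ThinVolume(pool, exported_blocks=900_000)  # large exported size

        small.write(50_000)
        large.write(50_000)
        print(len(small.touched), len(large.touched))  # 50000 50000 -- identical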

    The argument Peglar makes would seem to indicate that Isilon may have one of those old, steam-driven data layouts. But of course an Isilon system doesn't, so why he's making the points he does is beyond me. There are only a couple of reasons that EMC/Isilon could present an invalid premise for an argument: (1) they don't understand the subject material and lack experience in debating these issues, or (2) they fully understand the subject material and believe that the person they are trying to convince does not.

    I'll let you guess as to which I think is the case.

  7. Anonymous Coward
    Trollface

    I'm gonna need popcorn, a LOT of popcorn

  8. Anonymous Coward
    Anonymous Coward

    I called Jeremy Clarkson

    We could do with some help from the Top Gear team on this. At least we'd get an injection of real humour!

    Of course, what we have is hype thinly disguised as science, just as in the Bugatti-Ferrari war. I can't help but wonder if we could have really aced this test with white box nodes filled with SSD, and done it cheaper. Software might be a bit challenging, but Symantec might oblige, with or without Huawei!

    I can see the contest now: Jeremy driving the NetApp and boasting a lot, James running the slow-but-sure EMC and apologising all the time for the delays, and Richard running the white box, cackling every time he beats them on a run.

    Delicious!

