Sepaton plans to crush mid-range boxes

Sepaton, the large enterprise deduping storage vendor, reckons its big boxes can replace lots of those pesky, seemingly continuously accumulating Data Domain mid-range boxes with a single system that does the job properly. There are a few stand-alone survivors from the deduplicating array start-up boom of 10 to 15 years ago. …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward

    Hmm...

    I think he may well be comparing apples and oranges - the StoreOnce units, for instance, are designed to be installed in branch offices and replicate to central datacentres, as well as providing enterprise datacentre functionality. There are some very large StoreOnce boxes which have (IIRC) 500-ish TB of back-end disk, which equates to knocking on 6-8PB of logical capacity after dedupe. There are also some 3TB little units for branch offices.
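
    A rough back-of-envelope on that capacity claim - the 12:1 and 16:1 dedupe ratios below are purely illustrative assumptions on my part, not HP-quoted figures:

      # Logical capacity a dedupe box can present is roughly back-end disk x dedupe ratio.
      def logical_capacity_pb(backend_tb: float, dedupe_ratio: float) -> float:
          """Logical (pre-dedupe) data, in PB, that fits on backend_tb of disk."""
          return backend_tb * dedupe_ratio / 1000

      backend_tb = 500  # "500ish TB of back end disk"
      for ratio in (12, 16):
          print(f"{backend_tb} TB at {ratio}:1 -> {logical_capacity_pb(backend_tb, ratio):.1f} PB")
      # 500 TB at 12:1 -> 6.0 PB
      # 500 TB at 16:1 -> 8.0 PB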

    What I would say is that the big issue with VTLs these days is whether they support OST rather than just plain old tape emulation, and why it's only Symantec who are trying to make a break from virtualised tape.

    1. Eric from SEPATON

      Re: Hmm...

      I’d like to reply to “anonymous coward” with some additional technical info in order to highlight the differences between these two data protection solutions:

      • You should look at the best-practices guides for inline hash-based systems, which require that customers turn off any software compression on the source system; furthermore, systems such as the D2D have trouble dealing with multiplexed data (data from multiple client systems interleaved in a single backup stream). Now if you look at what’s under the hood … The "very large StoreOnce boxes which have 500ish TB of back end disk" are actually collections of much smaller boxes managed through a single interface. Check the configuration guide and you will see that the Enterprise StoreOnce Backup system is eight separate nodes joined in “couplets” with failover between nodes, and backups must be targeted at the correct couplet/node to ensure even capacity utilization. The dedupe domain is a library within a couplet; the maximum storage capacity of a couplet is 192TB raw, or 128TB usable, with a performance hit above 64TB; and the maximum throughput of a couplet is 6TB/hr.

      • SEPATON can ingest at 43TB/hr in ONE 8-node system, irrespective of data change rate, and handle all that data in a single storage pool (a dedupe domain) if required (see the rough comparison sketched after this list).

      • Please let’s not confuse SMB requirements with Enterprise requirements. SEPATON's architecture (basis for HP's VLS family) is specifically designed for the unique and challenging requirements of large enterprises and is offered by HP as a solution for large enterprises along with their D2D solutions.
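
      A minimal sketch of the arithmetic behind those figures - the per-couplet numbers come from the post above, but the "backups spread evenly across couplets" best case is my assumption:

        # Per-couplet figures quoted in the post above; even spread of jobs is assumed.
        COUPLET_THROUGHPUT_TB_HR = 6   # max ingest per couplet
        COUPLET_USABLE_TB = 128        # usable capacity per couplet
        NODES = 8
        NODES_PER_COUPLET = 2

        couplets = NODES // NODES_PER_COUPLET                    # 8 nodes -> 4 couplets
        aggregate_ingest = couplets * COUPLET_THROUGHPUT_TB_HR   # best case, jobs spread evenly
        aggregate_usable = couplets * COUPLET_USABLE_TB

        print(f"{couplets} couplets: up to {aggregate_ingest} TB/hr aggregate, {aggregate_usable} TB usable, dedupe scoped per couplet")
        print("vs one 8-node SEPATON system: 43 TB/hr into a single dedupe domain (per the post)")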

  2. Is it me?

    I Think you'll find ....

    that it's "Kiw the wabbit"

    1. Red Sceptic
      Thumb Up

      You're quite right ...

      http://www.youtube.com/watch?v=C2VMqQ6XnmI

      "How will you do it, might I enquire to ask?"

      "I will do it with my spear and magic helmet!"

      They don't write 'em like that any more ...

  3. Anonymous Coward

    I thought

    That this is pretty much what Sun (RIP) and IBM had been saying for years: a big box can do more, more cheaply, than a small box.

    That misses the point that the incremental cost of changing a big box is far more difficult to swallow than buying another small unit. Ask your FD whether he'll spend 100,000 a year for 10 years or 1,000,000 up front to last 10 years, and he'll go for the first, even if you do have a really good business case.
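
    A minimal sketch of why the FD leans that way - the 8% discount rate is purely an assumption for illustration, and the nominal spend is the same 1,000,000 either way:

      RATE = 0.08  # assumed cost of capital, illustration only

      def present_value(cashflows):
          """Discounted value today of a list of (year, amount) payments."""
          return sum(amount / (1 + RATE) ** year for year, amount in cashflows)

      big_box   = present_value([(0, 1_000_000)])                   # pay it all up front
      small_box = present_value([(y, 100_000) for y in range(10)])  # 100k at the start of each year

      print(f"up-front big box : {big_box:,.0f}")   # 1,000,000
      print(f"100k/yr small box: {small_box:,.0f}")  # ~725,000
      # The drip-feed is cheaper in present-value terms, and the FD can
      # always decline to fund years 9 and 10.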

  4. Anonymous Coward

    That they can scale up doesn't change the fact that their dedupe is old-school post-process (dedupe after the data has landed): you need space for all the old deduped data, plus the latest version, plus the new incoming data, which means the overall dedupe ratio is significantly lower than on the inline appliances, since from the start the Sepaton array needs significantly more disk on the back end.
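
    A rough illustration of that space argument - the nightly 100TB full, 10:1 dedupe and 30-backup retention below are made-up assumptions, only there to show the shape of the difference:

      NIGHTLY_FULL_TB = 100   # size of one raw backup (assumed)
      DEDUPE_RATIO = 10       # dedupe achieved on the stored data (assumed)
      RETAINED_BACKUPS = 30   # number of backups kept (assumed)

      logical = NIGHTLY_FULL_TB * RETAINED_BACKUPS   # 3,000 TB of protected data
      deduped_store = logical / DEDUPE_RATIO         # 300 TB on disk either way

      inline_disk = deduped_store                          # data is deduped as it lands
      post_process_disk = deduped_store + NIGHTLY_FULL_TB  # plus landing space for tonight's raw backup

      print(f"inline       : {inline_disk:.0f} TB disk -> effective {logical / inline_disk:.1f}:1")
      print(f"post-process : {post_process_disk:.0f} TB disk -> effective {logical / post_process_disk:.1f}:1")
      # inline       : 300 TB disk -> effective 10.0:1
      # post-process : 400 TB disk -> effective 7.5:1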

    The only plus side is that restores, regardless of the size of the data set, are very fast - but for the bulk of the market the most interesting figure in a backup solution is cost/GB for their backup application.

    I think the HP D2D lineup looks very strong - especially since the launch of the B6200 - which also scales up (up to 10.2 PB at a 20:1 ratio also goes a long way for most customers' needs).
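
    Quick sanity check on that B6200 figure, assuming the four-couplet, 128TB-per-couplet maximum from the SEPATON post further up the thread:

      usable_tb = 4 * 128   # four couplets at 128 TB usable each (per the earlier post)
      print(f"{usable_tb} TB x 20:1 = {usable_tb * 20 / 1000:.2f} PB logical")  # 10.24 PB, i.e. the ~10.2 PB quoted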

This topic is closed for new posts.