Will Hitachi's high-end refresh be more of the same?

Substantial refreshes of high-end storage boxes by the big storage beasts are rare, but they do happen. With EMC having revamped Symmetrix, attention is turning to the others. What is Hitachi Data Systems going to do? The five big storage hardware players are EMC, HDS, HP, IBM and NetApp. HP resells HDS' USP-V as its XP, and …

COMMENTS

This topic is closed for new posts.
  1. Nate Amsden

    unified architecture?

    The big question is whether they will be able to unify their architecture: the same operating system across all of their storage product lines, and the same basic designs as far as controller types/interconnects etc. This is what can really drive costs down: coming up with a scalable architecture that you can start small and cost-effective with (say, under $100k) but grow really big ($2M+).

    You see this increasingly in the network space, with companies using more and more shared components across all of their product lines. And the better ones are able to use the same operating system from the smallest edge switch to the largest core switch and/or router.

    3PAR does this to a great extent as well. When we were talking to HDS last year about their AMS2k, which had just launched, I was quizzing them on various things. While their responses fell far off the mark, they did indicate that, at least at present, their software lines between the various tiers of storage are significantly different; one example was that they were working on "porting" the thin provisioning technology from the USP down to the AMS. Hopefully they'll address that with their next refresh, but I'm not holding my breath.
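
    For anyone unfamiliar with what's being "ported" there: thin provisioning basically means the array presents a big logical volume but only backs a page with physical space the first time that page is written. A minimal sketch of the idea in Python (hypothetical 4KB page size and toy data structures, not HDS's actual implementation):

        # Sketch of thin provisioning: a volume advertises a large logical size
        # but only consumes physical pages on first write. Illustrative only.
        PAGE_SIZE = 4096  # bytes per allocation unit (assumed)

        class ThinVolume:
            def __init__(self, logical_size):
                self.logical_size = logical_size   # size presented to the host
                self.page_map = {}                 # logical page -> backing store

            def write(self, offset, data):
                # sketch only: assumes the write stays inside one page
                page = offset // PAGE_SIZE
                if page not in self.page_map:
                    self.page_map[page] = bytearray(PAGE_SIZE)  # allocate on first write
                start = offset % PAGE_SIZE
                self.page_map[page][start:start + len(data)] = data

            def physical_usage(self):
                return len(self.page_map) * PAGE_SIZE  # space actually consumed

        vol = ThinVolume(logical_size=2 * 1024**4)  # present a 2TB volume
        vol.write(0, b"hello")                      # only one 4KB page gets backed
        print(vol.physical_usage())                 # -> 4096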

    EMC had an opportunity to unify their architecture but failed to do so with the V-MAX (the entry price is just too high). And they show no signs of ridding themselves of the older CX architectures in favor of the newer, more scalable one, because that one only scales up; it can't scale down. The V-MAX architecture, on paper at least, is very impressive; there's just no way my company could afford to buy it, and I'm sure that applies to the bulk of companies out there, otherwise EMC would commit to adapting the architecture down to the lowest-end CX array.

    NetApp, for one, seems to have a fairly unified architecture: I believe the same OS runs on all of their products, and they have the same basic design at different scales. Not quite as effective as 3PAR or V-MAX, but it is certainly more cost-effective at the low end than V-MAX.

    And with NetApp I keep hearing/seeing people talk about how they don't really play in the high-end/scalable arena (at least they are not competitive there). I think OnTap 8 addresses some of the shortcomings of their previous generation of products (the biggest ones I can think of are the aggregate size, the number of spindles you can stripe a volume across, and the file system size). But it is interesting to note that OnTap 8 still has two "modes": one is the legacy mode, the other is the cluster mode. That tells me they are still having a hard time integrating the clustering technology they acquired from whatever that company's name was years ago, which turned into their GX product line. From a high-level marketing standpoint they can perhaps claim that the stuff has been merged/integrated, but once you get down to the details it falls short. They still have some good technology though, and I don't think anyone has more software features than they do. I think at least for small to mid-range installations they are probably the best fit for VMware.

    I *think* NetApp's legacy mode still restricts a particular file system to one controller at a time, so if you want active-active you need at least two controllers, two file systems and you need to load balance manually.
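
    Roughly what that manual balancing amounts to: each file system is pinned to one controller, so you spread the volumes across the two heads yourself up front and hope the load stays even. A rough sketch of that kind of static placement, with made-up volume names and IOPS figures (this is not NetApp tooling):

        # Static, "manual" balancing: heaviest volume goes to whichever
        # controller currently has the least expected load. Hypothetical numbers.
        volumes = {                 # volume -> expected IOPS (made up)
            "vm_datastore1": 4000,
            "vm_datastore2": 2500,
            "sql_logs":      3000,
            "home_dirs":      800,
        }

        placement = {"ctrl_a": [], "ctrl_b": []}
        load = {"ctrl_a": 0, "ctrl_b": 0}

        for vol, iops in sorted(volumes.items(), key=lambda kv: -kv[1]):
            target = min(load, key=load.get)        # lighter controller wins
            placement[target].append(vol)
            load[target] += iops

        print(placement)   # fixed assignment; rebalancing later means moving volumes by hand
        print(load)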

    Compellent has some nice software features with regard to block-level migrations, though their architecture in their current products is not scalable, contrary to what their site claims; it appears to still max out at two controllers. To me that seems like there is a lot of untapped potential in their design. It could be far more scalable, but so far they haven't made anything public to indicate they are headed in that direction. And I don't know if I'm reading it wrong, but it says their controller cache maxes out at 3.5GB on the high end (per controller), so what, 7GB of cache for their high end? Versus 96GB for 3PAR, and something like 8 bazillion GB for V-MAX? The 24GB of cache in my current 3PAR goes a really long way (and I can double it to 48GB easily), far further than the 8GB that my smaller 3PAR had at my previous company.

    The large cache is the reason the system is able to maintain an average 3 millisecond response time on the front end (controller -> servers) for disk writes, versus 60 milliseconds on the back end (controller -> disks). When I tell people that, their jaws often just drop. And I have the performance stats to prove it, sampled every 60 seconds.
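
    For what it's worth, here's a toy model of why a big write-back cache separates those two numbers: the host gets its ack as soon as the write lands in controller cache, and the destage to disk happens later, off the host's latency path. The 3ms/60ms values below just echo the figures above; they're illustrative constants, not measurements from any array:

        # Toy write-back cache model: acks come from cache, destage runs in the background.
        import queue, threading, time

        CACHE_ACK_LATENCY = 0.003    # front end (controller -> servers), assumed
        DISK_WRITE_LATENCY = 0.060   # back end (controller -> disks), assumed

        destage_queue = queue.Queue()

        def host_write(block):
            time.sleep(CACHE_ACK_LATENCY)   # write lands in cache, host gets its ack
            destage_queue.put(block)        # disk I/O deferred, off the host's path

        def destager():
            while True:
                block = destage_queue.get()
                time.sleep(DISK_WRITE_LATENCY)   # slow back-end write, invisible to the host

        threading.Thread(target=destager, daemon=True).start()

        start = time.time()
        for b in range(10):
            host_write(b)
        print(f"host saw 10 writes complete in {time.time() - start:.2f}s")
        # ~0.03s at the front end, even though the disks need ~0.6s to absorb the same writes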

  2. Anonymous Coward

    Intel??

    I don't think HDS will use Intel CPUs. They already have efficient low-power, low-clock-speed chips in their arrays. They love to pitch this as a reason for high availability.

    And aren't most of the IBM arrays rebrands of LSI arrays?

  3. Anonymous Coward

    RE: Will Hitachi's high-end refresh be more of the same?

    You would have been better off asking if anybody really cares?!

  4. Chris Mellor 1

    IBM high-end refresh

    From a financial analyst (Aaron Rakers of Stifel Nicolaus):

    "IBM had stated that it would have a complete new line-up of high-end storage systems based on P6 processors in 2010."

    Chris.

  5. cmaurand

    They gotta get their drives to last

    Before they do the high end, they need to make their drives more reliable. I have several dead Hitachi drives.
