NetApp's missing bits: Don't we need a switch infrastructure?

NetApp has announced dynamic scale-out, a feature of ONTAP 8 that gives its filers multiple heads. But what wasn't announced was any description of a head-and-filer interconnect, nor of how multiple filers connect to the multiple heads. So how can that work, then? Existing highly scalable file storage systems, …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Boffin

    Or just bundle another vendor's switch

    When deploying cluster-mode configurations, you'll be using switches supplied by NetApp that are made by well-known networking vendor(s).

    One private network used by a cluster to share state is pretty much the same as any other. Probably not worth an entire article, though...

  2. Anonymous Coward
    FAIL

    It's called Ethernet...

    The "missing" interconnect is Gigabit Ethernet, with 10 Gbit Ethernet either available now or "real soon now" for GX. There is an internal Infiniband interconnect between HA node pairs, however this is not used for the cluster traffic. The main GX interconnect (and now for "GX Mode" in DoT 8) among node pairs has always been Ethernet. The disks / shelves themselves are FCAL (looped 4 GB Fibre Channel), the shelves are FC cabled to 2 HA heads each for redundancy. For the Ethernet interconnects, slot in Cisco switches, or your own preferred / supported switch vendor. NetApp resells them as well I believe.

    I would love to see an InfiniBand interconnect option, since IB is already used in 2-node DoT 7G and now "7-mode" HA clusters, as well as in the HA pairs in GX (though solely for replaying NVRAM contents and taking over shelf ownership during node-down failovers), but so far this is not available... it may only be a matter of time, though. Latency would definitely improve with IB, even compared with 10GbE, although the PCIe bus on the controller heads may still be a bottleneck. Right now you have to use an IB-to-Ethernet gateway if your compute cluster is IB-interconnected, which IMHO is a total kludge. As soon as native IB node interconnect and native IB client connectivity are available, I think GX mode will really begin to shine.

    Maybe that's why no new SPEC SFS2008 numbers have been published yet for DoT 8 in GX mode... though the old GX system still broke 1 million IOPS (on the older SFSv3) with 24 nodes on GbE. I'd expect much better cluster results to come out "real soon now", unless there are some serious unforeseen issues with the performance of the new DoT 8 GX mode.
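
    For a rough sense of scale (back-of-envelope only, using the 1 million figure quoted above; the per-node split is my own arithmetic):

      # ~1 million ops/sec spread over 24 nodes, just to see the per-node share.
      total_ops_per_sec = 1_000_000
      nodes = 24
      print(f"~{total_ops_per_sec / nodes:,.0f} ops/sec per node")  # prints ~41,667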

  3. AndrewG
    FAIL

    So haven't heard of ONTAP 8 in GX mode then?

    NetApp's next big version (ONTAP 8) will finally integrate the technology they bought from Spinnaker, which is a proper clustered storage fabric. They are already installing NFS-only solutions like this, called ONTAP GX, but ONTAP 8.x will also have CIFS and LUNs.

    They are also providing switched multi-site clusters (MetroCluster).

    Epic fail for this whole article

  4. peter 62

    @AndrewG - CIFS already available

    CIFS is already available on GX 10 - no need to wait for DoT 8 cluster mode

  5. Chris Mellor 1

    Private Ethernet network

    Sent to me by John Rollason, Solutions Marketing Manager, NetApp:

    “ONTAP 8 cluster-mode as with its predecessor ONTAP GX uses a private Ethernet network to connect controllers in the cluster (head units). With ONTAP 8 we are supplying as part of the cluster configuration a self-managed private network using multiple 10 Gigabit DCB (Data Centre Bridging) switches. Customer management of this network infrastructure is minimal as it is purely designed for cluster controller interconnections and not hosts or user connectivity. Each controller has multiple connections to redundant switches to ensure high throughput and resiliency.”
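
    To illustrate that last point (with made-up controller, port and switch names; this is a sketch, not NetApp's actual configuration), the redundancy requirement boils down to each controller's cluster-interconnect links landing on at least two distinct switches:

      # Made-up names for illustration only. Each controller lists its
      # cluster-interconnect links as (port, switch); a controller whose links
      # all land on one switch has no switch redundancy.
      CLUSTER_LINKS = {
          "controller-1": [("e1a", "dcb-switch-A"), ("e2a", "dcb-switch-B")],
          "controller-2": [("e1a", "dcb-switch-A"), ("e2a", "dcb-switch-B")],
          "controller-3": [("e1a", "dcb-switch-A"), ("e2a", "dcb-switch-A")],  # not redundant
      }

      def non_redundant_controllers():
          """Return controllers whose cluster links all land on a single switch."""
          return [c for c, links in CLUSTER_LINKS.items()
                  if len({switch for _port, switch in links}) < 2]

      if __name__ == "__main__":
          print(non_redundant_controllers())  # ['controller-3'] for the sample data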

    Chris.
