Xsigo rejiggers virtual I/O director for Ethernet

Xsigo Systems, one of the pioneers of virtual I/O for server networks and their links to storage, is beside itself with excitement at the VMworld virtualization and cloud extravaganza in San Francisco now that it has finally brought a product to market that is designed for enthusiastic uptake by IT shops that like their Ethernet …

COMMENTS

This topic is closed for new posts.
  1. Magellan

    What are the protocols?

    The InfiniBand version of Xsigo used proprietary protocols for server IP and storage access (Xsigo vNIC and Xsigo vHBA). Xsigo submitted these protocols to the Open Fabrics organization, but I do not recall any mention of them at the latest Open Fabrics Sonoma Conference.

    This does not appear to be an FCoE solution. It appears to be an Ethernet iSCSI front-end kludge to an InfiniBand bridge. So I assume storage goes from GigE iSCSI, to an encapsulated proprietary Xsigo InfiniBand storage protocol, and back to native Fibre Channel. This could be done via the standardized iSER (iSCSI Extensions for RDMA) protocol. IP traffic could go either from native Ethernet to an encapsulated proprietary Xsigo InfiniBand IP protocol, or via the standardized IPoIB (IP over InfiniBand) protocol.
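
    As a rough sketch of that path (illustrative only; nothing here is Xsigo code and every name is made up), the hops could be modelled as a chain of encapsulations:

    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        payload: bytes
        layers: list = field(default_factory=list)   # outermost protocol last

        def encapsulate(self, protocol: str) -> "Frame":
            return Frame(self.payload, self.layers + [protocol])

    # SCSI command leaves the host as iSCSI over Gigabit Ethernet ...
    scsi_cmd = Frame(payload=b"SCSI READ(10)")
    on_the_wire = scsi_cmd.encapsulate("iSCSI").encapsulate("TCP/IP").encapsulate("GigE")

    # ... the director strips the Ethernet/TCP layers and re-carries the SCSI
    # payload over InfiniBand (iSER, or a vendor-specific equivalent) ...
    over_ib = Frame(scsi_cmd.payload, ["iSER or vendor wrap", "InfiniBand"])

    # ... and finally hands it to the SAN as native Fibre Channel.
    to_array = Frame(scsi_cmd.payload, ["FCP", "Fibre Channel"])

    for hop, frame in [("host to director", on_the_wire),
                       ("director fabric", over_ib),
                       ("director to array", to_array)]:
        print(hop, frame.layers)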

    I assume Xsigo would use its proprietary protocols, as its central management of vNICs and vHBAs is what differentiates Xsigo's solution.

    Either way, if indeed it is iSCSI to the host, this is little more than an iSCSI to FC router, with some added management intelligence. But the iSCSI router industry died a decade ago, once native iSCSI storage came along.

    1. dyernj

      NOT Proprietary

      Hi Magellan, just wanted to clarify a couple of things for you here.

      Xsigo is an open-standards company; there are no proprietary lock-in protocols at play here. All connections and ports use standard, off-the-shelf I/O cards from the likes of Broadcom and QLogic, with standard SFP+, CX4 and QSFP connectors (for 10GbE, 20Gb and 40Gb InfiniBand, respectively).

      The protocols at play here are indeed RDMA, so FC and Ethernet packets are wrapped in the standard RDMA protocol and pushed up to a dumb card inside the server (the card has no intelligence; it is a standard 1Gb or 10Gb Ethernet card, or a Host Channel Adapter from Mellanox on the InfiniBand side). IP over IB, as I'm sure you're aware, is an extremely advantageous standard protocol with very low latency over a much fatter pipe. For Ethernet we use CEE.
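
      To make that split concrete, here is a rough sketch (purely illustrative, with hypothetical names, not actual Xsigo code): the host side just wraps frames and pushes them over the fabric, while the director works out what each message is and where it should go:

      from dataclasses import dataclass

      @dataclass
      class RdmaMessage:
          virtual_resource: str   # e.g. "vnic-7" or "vhba-2", assigned centrally
          inner_protocol: str     # "ethernet" or "fc"
          payload: bytes

      def wrap_on_host(resource: str, protocol: str, frame: bytes) -> RdmaMessage:
          # Host side: no parsing, no policy, just wrap the frame and send it
          # across the InfiniBand (or 10GbE) fabric.
          return RdmaMessage(resource, protocol, frame)

      def switch_at_director(msg: RdmaMessage) -> None:
          # Director side: this is where the intelligence lives, mapping the
          # virtual resource onto a physical Ethernet or FC uplink.
          uplink = "Ethernet uplink" if msg.inner_protocol == "ethernet" else "FC uplink"
          print(f"{msg.virtual_resource}: {len(msg.payload)} bytes to the {uplink}")

      switch_at_director(wrap_on_host("vnic-7", "ethernet", b"\x00" * 1514))
      switch_at_director(wrap_on_host("vhba-2", "fc", b"\x00" * 2148))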

      All the intelligence sits in the Xsigo directors, which is completely different from FCoE, where the vHBAs and vNICs are created on and live on the CNA card itself, making live migration of I/O profiles, NICs and HBAs a very challenging feat. As we don't use FCoE (which is itself not yet a fully ratified and standardised protocol), we can use off-the-shelf technology, meaning our cost is typically one third that of an FCoE solution, while we provide higher bandwidth and more capabilities, such as vNIC and vHBA QoS and network disaster recovery, for example.
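
      And a similarly rough sketch (again hypothetical names, not our actual software) of why keeping the definitions central makes migration easy: moving a server's entire I/O identity is just a control-plane change in the director, with the SAN and LAN still seeing the same addresses:

      from dataclasses import dataclass, field

      @dataclass
      class VNic:
          name: str
          vlan: int
          qos_mbps: int      # per-vNIC bandwidth guarantee

      @dataclass
      class VHba:
          name: str
          wwpn: str          # world-wide port name presented to the SAN
          qos_mbps: int      # per-vHBA bandwidth guarantee

      @dataclass
      class IoProfile:
          vnics: list = field(default_factory=list)
          vhbas: list = field(default_factory=list)
          assigned_server: str = ""

      profile = IoProfile(
          vnics=[VNic("prod-net", vlan=100, qos_mbps=4000)],
          vhbas=[VHba("boot-san", wwpn="50:01:43:80:12:34:56:78", qos_mbps=2000)],
          assigned_server="server-A",
      )

      # "Migrating" the profile is a single change in the director; nothing on
      # the (dumb) server adapter has to be reconfigured.
      profile.assigned_server = "server-B"
      print(profile)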

      I'm happy to discuss this further over email, or via a meeting to show you the technology at play.

      rgds

      Nick Dyer

      EMEA Systems Engineer - Xsigo Systems
