Microsoft, Google et al form club to push 25/50 Gbps Ethernet

A new consortium of vendors is firing up to give 25 Gbps and 50 Gbps Ethernet a push, in an effort to get the kind of performance demanded in cloud data centres. While lab demonstrations have shown that Ethernet will one day be able to reach 400 Gbps, you need standards to turn speed into systems, and that's what Microsoft, …

  1. A Non e-mouse Silver badge

    Why?

    10 & 40Gb/s are already on the market. The 25 Gigabit alliance website doesn't really say much (other than a press release).

    Not sure how far off 100Gb/s is.

    1. -tim

      Re: Why?

      100 Gb/s is so far off that Google has already replaced most of its internal switches with it. But there are some steep royalties on the 10G, 40G and 100G stuff, and I expect they don't like the idea of paying more for the optics section of a motherboard than for the CPU plus memory. I expect this reads like short-run copper as well, but that will become clear when they stop talking about a "top of rack switch" and start talking about a "middle of the rack switch".

    2. Voland's right hand Silver badge

      Re: Why?

      10G adapter cost per port is still quite significant. I develop on an 8-core Athlon, which costs £450. The extra 10G Ethernet costs north of £500 on its own. This sounds like a rehash of "direct attach" cabling at a higher rate, using a custom MAC layer (probably to avoid some royalties).

      As for 40G already being in the market, the incremental cost between 10G and 40G is such that there is market demand for something in between.

      1. localzuk Silver badge

        Re: Why?

        You should shop around a bit more if you pay £500 for a 10Gbit NIC. An Intel dual port 10GbE card costs £320 inc VAT. Shop around a bit and you could probably get it for less.

        1. Voland's right hand Silver badge

          Re: Why?

          "You should shop around a bit more if you pay £500"

          Actually, no, I do not. For a 10G NIC to push 10G in real life you need multiqueue, adapter-based hashing or flow matching, and hash control in the driver, so you can tell the OS to consistently map flows onto their matching consuming cores.

          The £320-or-so NICs do only a fraction of this. So while they connect at 10G, pushing them to 10G in a useful manner is not feasible (or will cost you double the CPU compared to a decent NIC). You spend £180 less on the NIC and then have to shell out that much again, or even double it, on extra cores for your box.

          The cheapest 10G NICs worth buying on the open market at present are the Broadcom bnx2x series, which, depending on the actual type of 10G interface fitted, retail in the £500-560 area. You also have to tune them to your job. It is nowhere near automagic on any OS.

          Even that does not do everything I would like, as there is no deterministic flow matching. For that you need to find a few more hundreds in your wallet.
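          To make the multiqueue / flow-hashing point concrete: below is a minimal Python sketch (my own illustration, not tied to any particular NIC or driver; the RSS key and queue count are made-up example values) of the Toeplitz-style receive-side scaling hash that such adapters use to pin each flow to one receive queue, and hence one consuming core.

          ```python
          # Toeplitz-style RSS hashing sketch. Real NICs index an indirection
          # table with the low bits of the hash; a plain modulo keeps this short.
          import socket
          import struct

          RSS_KEY = bytes(range(40))   # hypothetical 40-byte RSS key (example value only)
          NUM_QUEUES = 8               # e.g. one RX queue per consuming core

          def toeplitz_hash(data: bytes, key: bytes = RSS_KEY) -> int:
              """32-bit Toeplitz hash of 'data' under 'key'."""
              key_int = int.from_bytes(key, "big")
              key_bits = len(key) * 8
              result = 0
              bit_index = 0
              for byte in data:
                  for bit in range(7, -1, -1):
                      if byte & (1 << bit):
                          # XOR in the 32 key bits starting at the current bit offset.
                          result ^= (key_int >> (key_bits - 32 - bit_index)) & 0xFFFFFFFF
                      bit_index += 1
              return result

          def queue_for_flow(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
              """Map a TCP/IPv4 flow tuple onto a receive queue (and hence a core)."""
              data = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
                      + struct.pack("!HH", src_port, dst_port))
              return toeplitz_hash(data) % NUM_QUEUES

          # The same flow always lands on the same queue; different flows spread
          # across queues, so each core keeps servicing "its" flows.
          print(queue_for_flow("192.168.1.10", "10.0.0.5", 40000, 5001))
          print(queue_for_flow("192.168.1.11", "10.0.0.5", 40001, 5001))
          ```

          The hash control the comment above asks for is about being able to steer that result so a flow lands on the core that is actually consuming it; without it you pay the cross-core cost in CPU.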

  2. James 100

    25 and 50?

    Odd speed points to aim at - I'd have liked to see a cheaper 10G option, then you could have a quad-10G link to give a more cost-effective 40G option as well. That would fit in better with the current speed points (10/40/100) - though I suppose something like "same cost as 10G, but 2.5x faster" isn't a bad offering either.

    From what I've read about some of the 100G Ethernet ports, they can be four 25G links bonded together internally: does that make it easier for this group to repurpose existing 25G components into something more cost-effective than 10G ones? (The lane breakdown is sketched below.)

    It's a shame Ethernet doesn't seem to be following its old pattern. I remember 10M hubs, with switches being pricey server-room kit; the move to 100M, then to 1G, with each of them becoming affordable enough to wipe out the previous speed point after a few years, so motherboards could come with a 1G port (or two) as standard for a trivial price. Why is 10G still so expensive, a decade after we could buy a gigabit switch for small change?
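    As a rough summary (my own notes, not from the article or the comments) of how the common speed grades break down into serial lanes, which is what makes the "four bonded 25G links" reuse plausible, here is a short Python sketch:

    ```python
    # speed (Gbps) -> (number of lanes, per-lane rate in Gbps)
    LANE_LAYOUTS = {
        10:  (1, 10),   # 10GBASE-R: one 10G lane
        40:  (4, 10),   # 40GBASE-*R4: four 10G lanes
        25:  (1, 25),   # proposed: a single 25G lane
        50:  (2, 25),   # proposed: two 25G lanes
        100: (4, 25),   # 100GBASE-*R4: four 25G lanes
    }

    for speed in sorted(LANE_LAYOUTS):
        lanes, lane_rate = LANE_LAYOUTS[speed]
        print(f"{speed:>3}GbE = {lanes} lane(s) x {lane_rate}G")
    ```

    On that reading, a 25G port reuses one lane of the 100G electronics, just as a 10G port is one lane of a 40G one.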

    1. Suricou Raven

      Re: 25 and 50?

      You can get 8 Gb/s InfiniBand for less than the cost of 10 Gig Ethernet. It's not quite as fast, but the RDMA makes up for it.

    2. kedmison

      Re: 25 and 50?

      The speed points suggest to me an influence from PCIe lanes and bandwidths: 25Gbps should fit nicely in a PCIe 2.0 x8 or a PCIe 3.0 x4, and 50Gbps should fit in a PCIe 3.0 x8 (a quick check of that arithmetic follows below). Based on that, I surmise that this effort is targeted at server-to-switch links, and that inter-switch links would likely run at higher rates.

      As an aside: if PCIe is an influence, I wonder how strongly it is shaping the design. Thunderbolt, for example, can carry several lanes of PCIe to devices outside the computer. It would be interesting to think about using the Thunderbolt approach to extend a server's PCIe lanes, carrying network traffic, directly into the TOR switch. That would likely give lower overall cost plus lower latency.
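      A quick back-of-the-envelope check of the PCIe figures above (my own arithmetic; it ignores PCIe protocol overheads such as TLP headers and flow-control credits, which shave off a further chunk in practice):

      ```python
      def pcie_payload_gbps(gen: int, lanes: int) -> float:
          """Per-direction bandwidth after line encoding, in Gbps."""
          if gen == 2:
              return lanes * 5.0 * (8 / 10)      # 5 GT/s per lane, 8b/10b encoding
          if gen == 3:
              return lanes * 8.0 * (128 / 130)   # 8 GT/s per lane, 128b/130b encoding
          raise ValueError("only PCIe 2.0 and 3.0 considered here")

      for gen, lanes, port in [(2, 8, 25), (3, 4, 25), (3, 8, 50)]:
          print(f"PCIe {gen}.0 x{lanes}: {pcie_payload_gbps(gen, lanes):5.1f} Gbps for a {port}G port")

      # PCIe 2.0 x8:  32.0 Gbps for a 25G port
      # PCIe 3.0 x4:  31.5 Gbps for a 25G port
      # PCIe 3.0 x8:  63.0 Gbps for a 50G port
      ```

      So each proposed speed leaves a sensible margin in the corresponding slot, which is consistent with the server-to-switch reading.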

  3. Glen Turner 666

    Why 25/50 when 40Gbps exists?

    To answer some questions above, 40Gbps is implemented as four 10Gbps channels.

    The cost of four lasers within a QSFP is obviously four times the cost of one laser. Worse, where each laser runs over its own fibre (as must be the case for multimode fibre), the MPO/MTP connectors are expensive, fragile and almost impossible to clean and test in the field. Using 40Gbps Ethernet therefore has a high operational cost.

    Using 25Gbps channels rather than 10Gbps channels halves the amount of cabling while remaining economic. Note that this is being promoted as a top-of-rack technology, so losing the ability of 40Gbps over single mode to be optically multiplexed by ITU-compliant 10Gbps passive WDM systems isn't a worry.
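    To put rough numbers on "halves the amount of cabling", here is a small sketch (my own figures for typical parallel multimode optics, not from the comment) counting fibre strands when each lane needs its own transmit and receive strand:

    ```python
    def fibre_strands(total_gbps: int, lane_gbps: int) -> int:
        """Strands for a parallel multimode link: one Tx + one Rx per lane."""
        lanes = -(-total_gbps // lane_gbps)   # ceiling division
        return lanes * 2

    comparisons = [
        ("40G from 10G lanes", 40, 10),     # e.g. 40GBASE-SR4 over an MPO connector
        ("50G from 25G lanes", 50, 25),
        ("100G from 10G lanes", 100, 10),   # e.g. 100GBASE-SR10
        ("100G from 25G lanes", 100, 25),   # e.g. 100GBASE-SR4
    ]

    for label, total, lane in comparisons:
        print(f"{label}: {fibre_strands(total, lane)} strands")
    ```

    Roughly half the strands for the same bandwidth, which also means cheaper and less fragile connectors than a fully populated MPO/MTP.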
