Roll over Beethoven: HPE Synergy compositions oughta get Meg singing for joy

HPE's Synergy is, it thinks, the next great advance in servers and is far more capable than hyper-converged infrastructure systems, being able to provision bare metal as well as servers for virtual and containerised workloads. Getting a grip on this beast is tricky. Is it a form of dynamically reconfigurable …

  1. Preston Munchensonton
    Stop

    Definitely wrong

    "This chassis can take a mix of server and storage frames which slot in from the front and has a set of network nodes at the back, with a single master node. These nodes obviate the need for a separate top of rack networking switch."

    This last statement is patently false, though potentially irrelevant. Having a set of network nodes only removes the need for external switching for inter-node communication. Any communication with other, non-node infrastructure would still require an external access or aggregation switch of some kind. Where ToR is used, that would be a ToR switch. Where end-of-row (EoR) / middle-of-row (MoR) switches are used, then those EoR/MoR switches would be used.

    If the server nodes will operate completely self-contained, then the statement could be true. But I doubt it's true in the vast majority of deployment scenarios.

    1. dedmonst

      Re: Definitely wrong

      Is an End of Row / Middle of Row switch a Top of Rack switch? We could argue semantics on that all day, but here's the point...

      Put 4 fully-loaded C7000 blade enclosures in a rack right now and, depending on your throughput requirements, you will need at least 2 uplinks from each enclosure, which means a minimum of 8 high-throughput ports (realistically 10Gb minimum) of Ethernet, FC, or FCoE on your ToR switch... even though most of the traffic is probably east-west between the enclosures. On top of that, any traffic between enclosures is going to be bottlenecked on the uplink performance.

      With an equivalent Synergy configuration of 4 frames, you could do all that with just 2 uplinks across all 4 frames, and also have no over-subscription on inter-frame traffic (the arithmetic is sketched below). My ToR/EoR/MoR port count just dropped from 8 to 2... important if you happen to be using a switch vendor whose pricing model ties cost to your port count.

      And these are just minimums - most installations I see have at least 4 uplinks per enclosure.

      So for a lot of customers this is going to be the difference between deploying ToR switches and just connecting straight into the core.
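
      A minimal sketch of that port-count arithmetic, using only the figures quoted in this comment (2 uplinks per C7000 enclosure, 2 uplinks shared across 4 linked Synergy frames); the constants are illustrative minimums rather than vendor-verified specifications:

      # ToR port count: 4 standalone C7000s vs 4 linked Synergy frames
      enclosures = 4
      uplinks_per_c7000 = 2          # realistic minimum per enclosure; many sites use 4
      frames = 4
      uplinks_per_synergy_ring = 2   # shared across all 4 linked frames (per the comment)

      c7000_tor_ports = enclosures * uplinks_per_c7000   # 8 ToR ports
      synergy_tor_ports = uplinks_per_synergy_ring       # 2 ToR ports

      print(f"C7000:   {c7000_tor_ports} ToR ports for {enclosures} enclosures")
      print(f"Synergy: {synergy_tor_ports} ToR ports for {frames} frames")

      With the 4-uplinks-per-enclosure figure mentioned above, the C7000 side rises to 16 ports while the Synergy count stays at 2.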

      1. Preston Munchensonton

        Re: Definitely wrong

        "And these are just minimums - most installations I see have at least 4 uplinks per enclosure."

        Hence, my original point. The article reads as though you would never need any external connectivity of any kind, which may be true for very specific builds (HPC and the like), but is doubtful for most workloads.

        As it stands, highly converged designs rarely have a switch in each cabinet anymore, instead having a clustered pair stashed in a set of three cabinets (or more) to conserve on cabling. Your count of 2 or 4 going back to the core or aggregation isn't for every design either, especially with so much focus on 40Gbps or 100Gbps now.

        1. Mark Hahn

          Re: Definitely wrong

          "like HPC"!?! HPC is precisely where the ideal is every node connected to a fully non-blocking fabric: a megachassis like this would need a big bundle of uplinks.

  2. Anonymous Coward

    Makes me miss SeaMicro

    Quite a shame that AMD dumped them. SeaMicro had some great ideas and IP.

    Still hopeful they come back in some way.

  3. Anonymous Coward

    so far, I think it is a lot of hype

    As far as I could tell from the Discover conference (and much asking of questions, since we have a major investment in c-class hardware), there wasn't a heck of a lot here with Synergy other than new hardware and switches. Assuming you already use software to provision your systems, it is unclear to me what the templating really gives as an advantage (since, in the end, it is still just a server, not true pooling of CPU/memory or anything like that) other than lock-in to the OpenView way of doing things. As for c-class, we learned the hard way that linking your enclosures (per early HP notions) to limit your uplinks is generally a bad idea unless you think those switch firmware updates will always go in perfectly, every time (which is just never going to be true).

    1. Anonymous Coward

      Re: so far, I think it is a lot of hype

      My take on it is that HPE are admitting the c-class solution was way too hard to use, which it was, and that the c-class networking was a bit crap, which it was. HPE are also saying they'd very much like it if you see your c-class as legacy and replace it all with ever so slightly different blades rather than doing the right thing and moving to AWS or Azure. Please don't go to AWS or Azure, we beg you... Hello? Are you still there? Hello? Bueller? Oh, too little, too late.

  4. luis river

    HPE first class

    Great work - the HPE product development division has made great achievements, the envy of its competitors.

  5. simon_c

    HPE struggle for relevance?

    I first heard about this technology at SaltConf earlier this year.

    My initial thoughts were that it was HPE struggling for relevance in a world where, at cloud scale, cheap, simple boxes rule and the hypervisor just routes around failures.

  6. bladesmadesimple
    Alert

    Concerns with Synergy

    Synergy was announced eight months ago, but it's still not shipping. In addition, with the next Intel server CPUs nine months away, would customers actually buy a Synergy system knowing that the compute nodes they buy will become obsolete in less than a year?

    Another thing that concerns me is the design aspect of connecting multiple chassis together and having few networking links out. Today, this is known as "stacking" and MANY of the global customers I talk to don't like it because it puts a higher risk factor into the design. I hope that Synergy offers traditional networking connectivity, like in today's blade architecture, but at this point, it's hard to tell.

    Since Synergy is so different than the traditional ProLiant blade design, I think it's going to open up the door for consumers to look at other options in the market.

    1. dedmonst

      Re: Concerns with Synergy

      "with the next Intel server CPUs 9 months away, would customers actually buy a Synergy system knowing that the compute nodes they buy will become obsolete in less than a year?"

      And how is this different for ANY x86 server vendor? That's the x86 market... it's never a good time to buy, and it's always a good time to buy.

      "I hope that Synergy offers traditional networking connectivity, like in today's blade architecture"

      Yes it does - if you want to do things the way you have in the past, you can.

    2. Korev Silver badge

      Re: Concerns with Synergy

      One of HPE's rivals' sales engineers has told us that the forthcoming Intel chips will require an entire rebuild of their blades too. I expect HPE have this in mind.

      1. Matt Bryant Silver badge
        Facepalm

        Re: Korev Re: Concerns with Synergy

        ".....the forthcoming Intel chips will require an entire rebuild of their blades too....." Well, duh! If he's referring to "Purley"-based servers, they will need a new chipset, so everyones' blades will have to change. I suggest you ask for less FUD, more facts with your next sales briefing.

    3. Billy7766

      Re: Concerns with Synergy

      Actually, it definitely is shipping, and indeed running - we had a system up in 30 minutes from lights-on, which was pretty cool, and so far so good.

  7. Matt Bryant Silver badge
    WTF?

    Yawn

    So, effectively just a polish of the blades concept with some clever software? I suppose it counts as "differentiation" and is where HP has won with blades before.

    "....Synergy frames can scale out, with four per rack, if your floors can stand the weight...." One of the laughs we had when HP launched the original C-class (and for a long while after) was the brochure insistence that it was a super-dense solution because you could squeeze four chassis into one 42U rack. The problem with that was that if you fully-populated all four chassis you would exceed the weight limit for HP's standard 42U racks even if your floor could take it! Another laugh was most datacenter's simply didn't have enough power to each rack to run four C-class chassis, so we usually ended up running two per rack anyway.

    1. dedmonst

      Re: Yawn

      The 42U 10K series racks (introduced before BladeSystem) and all the newer descendants have always been able to take up to 2000 lb of load (so about 907 kg). A fully loaded blade enclosure has a max weight of about 218 kg (that's with all 8 interconnects installed - not many people have that). So 4 blade enclosures in a rack has always been possible, with a little left over for your power distribution infrastructure. Of course your DC floor and power might not be able to cope with that, but that's your problem, not HP's!
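
      A quick back-of-the-envelope check of those figures (2000 lb rack limit, roughly 218 kg per fully loaded enclosure); this is just arithmetic on the numbers quoted in the comment, not a validated configuration:

      # Rack weight budget from the figures above
      LB_TO_KG = 0.4536

      rack_limit_kg = 2000 * LB_TO_KG        # 10K series static limit, ~907 kg
      enclosure_kg = 218                     # fully loaded C7000, all 8 interconnects
      enclosures = 4

      used_kg = enclosures * enclosure_kg    # 872 kg
      headroom_kg = rack_limit_kg - used_kg  # ~35 kg left over

      print(f"{used_kg} kg of {rack_limit_kg:.0f} kg used; "
            f"{headroom_kg:.0f} kg left for power distribution")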

      1. Matt Bryant Silver badge

        Re: dedmonst Re: Yawn

        "The 42U 10K series racks (introduced before BladeSystem) and all the newer descendants have always been able to take up to 2000lbs of load....." You can go argue the toss with HP's own configurator tool. I think you're forgetting that four fully-configured C7000s means you have twenty-four C20 power sockets to feed, which means lots of hefty 32A PDUs to stick in your 10642 rack. Things got worse if you needed intelligent PDUs (which most companies I worked with did), which were even heavier again. HP's own configurator tool used to baulk at the idea. The first time I saw it do so was the first time an HP presales came out to sell us on C7000s and he tried putting together some four-chassis rack builds - it all went fine until we cloned in the fourth full chassis and the configurator said it was exceeding the rack limits, which was a surprise to the HP presales. The only time I ever used four fully-stocked C7000s in a rack was when we were using non-HP racks.

        1. dedmonst

          Re: dedmonst Yawn

          >> you can go argue the toss with HP's own configurator tool.

          I use it every day - I can build this configuration no problem - of course the 11K series racks in the config tools these days can hold up to 3000 lb static or 2500 lb rolling, so that's not surprising. It would be *close* with an older 10K series rack, but still possible.

          >> I think you're forgetting that four fully-configured C7000s means you have twenty-four C20 power sockets to feed, which means lots of hefty 32A PDUs to stick in your 10642 rack. Things got worse if you needed intelligent PDUs (which most companies I worked with did), which were even heavier again.

          No, I'm not - I mentioned the "power distribution infrastructure" in my previous post. 24 x C19s = 4 x PDUs with 6 x C19 sockets each - these come in at about 8 kg each for standard and 9 kg each for Intelligent.

          Long and short of it - yes it was a tight squeeze, but always possible - I don't doubt that the config tools have said "not possible" on occasion, but they do make a bunch of assumptions - if you sit down and work all this stuff out by hand you can do it.
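
          Putting both sides' numbers together (218 kg enclosures, 8-9 kg PDUs, 2000/2500/3000 lb rack limits) shows just how tight the squeeze is; again, this is only arithmetic on the figures quoted in this thread, not a configurator run:

          # Four fully loaded C7000s plus PDUs against the quoted rack limits
          LB_TO_KG = 0.4536

          enclosures_kg = 4 * 218            # four fully loaded C7000s
          pdu_options = {"standard PDUs": 4 * 8, "intelligent PDUs": 4 * 9}
          rack_limits = {
              "10K rack, 2000 lb static": 2000 * LB_TO_KG,
              "11K rack, 2500 lb rolling": 2500 * LB_TO_KG,
              "11K rack, 3000 lb static": 3000 * LB_TO_KG,
          }

          for pdu_label, pdu_kg in pdu_options.items():
              total_kg = enclosures_kg + pdu_kg
              for rack, limit_kg in rack_limits.items():
                  verdict = "fits" if total_kg <= limit_kg else "over"
                  print(f"{total_kg} kg with {pdu_label} vs {rack} ({limit_kg:.0f} kg): {verdict}")

          On those figures, four enclosures plus four intelligent PDUs land about a kilogram over the 2000 lb line of a 10K rack, and a few kilograms under it with standard PDUs - which would be consistent with both the "tight squeeze but possible" reading and the configurator baulking.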

          1. Matt Bryant Silver badge
            Facepalm

            Re: dedmonst Re: dedmonst Yawn

            "I use it every day....I don't doubt that the config tools have said "not possible" on occasion..... if you sit down and work all this stuff out by hand you can do it." If you use the HP configuration tool to build orders "every day" then you'd also know the fun of trying to get a factory over-ride on a rejected build. Sure, it may not be a problem with the newer racks, but it definitely was with the older models.

            Synergy sounds like HP are pushing their blades development in the right direction, and I'll be interested to see how it manages with other vendors' switches and storage. I might even try four chassis in the newer racks.

  8. Anonymous Coward

    Does dedmonst work for HPE I wonder?

    Do we know how much these are going to cost? That will be the clincher. Pretty much all our workloads are virtualised and I can't see anything better at the moment than a standard 2U server full of SSDs. If Synergy comes in cheaper then it might take off. Can't see it though.

  9. rraghav

    Synergy is not special

    In my opinion, Synergy will never cut it for HPE. It will be just another Moonshot and will end up as a replacement for the aging c7000 enclosure. And when are they shipping it? I've been hearing about it for almost two years and there is still no news on a shipping date.

    There are two possible reasons for the delay. First, they have serious issues with the software around it. Second, they simply want to sustain the hype a bit longer.

  10. Mark Hahn

    Just another blade chassis, no?

    The article doesn't make clear what's actually new about this: it appears to be just another blade chassis with the expected built-in SAN/LAN networking.

    What really puzzles me is why this sort of thing persistently appeals to vendors, when it's not at all clear that customers actually need it (let alone want it).

    Obviously camp followers of the industry (like the Reg) need something to write about, but dis-aggregation of servers is, at this point, laughable. QPI is the fastest coherent fabric achievable right now, and it's not clear that Si photonics will change it in any way: latency is what matters, not bandwidth, and Si-p doesn't help there. PCIe is the fastest you can make a socket-oriented non-coherent fabric, and again its main problem is latency, not bandwidth (though a blade chassis whose backplane was a giant PCIe switch might be interesting, and wouldn't require Si-p). 100Gb IB or Ethernet are the fastest scalable fabrics, but they don't really enter into this picture (they're certainly not fast enough to connect dis-aggregated CPUs/memory/storage).
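
    To put rough numbers on that latency point: pushing a 64-byte cache line down a 100Gb link takes only a few nanoseconds of wire time, but the round trip across a network fabric is measured in microseconds, against roughly 100 ns for local DRAM. The figures in the sketch below are ballpark assumptions rather than measurements; it's the orders of magnitude that matter:

    # Why latency, not bandwidth, is the obstacle to dis-aggregated memory
    CACHE_LINE_BITS = 64 * 8                       # one 64-byte cache line

    link_gbps = 100                                # 100Gb Ethernet or InfiniBand
    wire_time_ns = CACHE_LINE_BITS / link_gbps     # ~5 ns of serialisation

    local_dram_ns = 100                            # ballpark local memory access (assumed)
    fabric_rtt_ns = 2000                           # ballpark RDMA round trip (assumed)

    print(f"Wire time for one cache line at {link_gbps} Gb/s: {wire_time_ns:.1f} ns")
    print(f"Local DRAM access: ~{local_dram_ns} ns")
    print(f"Remote fetch over the fabric: ~{fabric_rtt_ns} ns "
          f"(~{fabric_rtt_ns // local_dram_ns}x slower, dominated by latency, not bandwidth)")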
