IBM tweaks rack and blade servers

Today is IBM's big storage announcement day, as we report elsewhere, but the company also tweaked a bunch of servers and associated switching options. Both the System x3250 M3 rack machine enhanced today and the iDataPlex dx360 M3 rack-blade hybrid machine were announced back in the spring. But today they get …

COMMENTS

This topic is closed for new posts.
  1. The Cube
    Unhappy

    More stuff than with standard racks??

    From the article:

    (The iDataPlex rack is not a normal 42U rack that is a little more than three feet deep, but is twice as wide and half as deep, allowing IBM to be more efficient about cooling and therefore to cram more stuff into a data center than it could with standard racks and servers.)

    I am having some trouble understanding the basis of that claim...

    Whilst it is true that installing high power density equipment such as blade servers is one of the best ways to bugger up an otherwise functional data centre, and that most current blade chassis designs can't cool themselves properly without refrigerated supply air etc., there is still a missing justification for the claim.

    Very few data centres are actually space constrained; most are constrained on either power or cooling capacity. Putting the same power density into a smaller, or wider but less deep, box doesn't change the power or cooling demands at all, just how much you are going to pay the vendor for those commodity servers in a special box. Also, remember all those empty racks that you won't have power or cooling capacity for.

    Perhaps the claim is that, unlike standard blade servers, this generation doesn't waste a huge chunk of its power on high flow fans to maintain the cooling tornado that the unnecessary power density requires to stop all those servers in the small box cooking themselves? If so, that would allow a small fraction more servers to be installed, as you would be spending the power and cooling on the servers rather than on the bit of the cooling system inside the server.

    Alternatively, they could be claiming that the shallower design has fewer components being cooled in series by the same air, so they can survive higher intake temperatures and therefore don't need to be refrigerated as much (many existing blade designs drag the air over hot things before cooking other hot things which also wanted to be cooled).

    Or none of the above?

  2. Tom Samplonius
    FAIL

    Generally blade servers are easier to cool

    "...and that most of the current blade chassis designs can't cool themselves properly without refrigerated supply air etc there is a missing justification for the claim..."

    That is definitely not the case with the BladeCenter; we have several of these in a data centre. Blades in general can sustain a much higher delta T (temperature increase through the server) than any other type of server. This is a huge benefit to cooling. So you can take in 22C air and put out 40C air. That means per unit of air, you are carrying much more heat than with 1U or 2U servers.
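    (Back-of-the-envelope sketch of that delta T point, not anything IBM publishes: the airflow needed to carry away a kilowatt scales inversely with the temperature rise, so an 18C rise (22C in, 40C out) moves nearly twice the heat per unit of air as the roughly 10C rise I'm assuming for a typical 1U box. The constants and helper name below are generic illustration only.)

        # Sensible heat: P = rho * V * cp * dT, rearranged to give airflow per kW.
        # Illustrative constants; real values vary with altitude and humidity.
        RHO_AIR = 1.2    # kg/m^3, approximate air density
        CP_AIR = 1005.0  # J/(kg*K), specific heat of air

        def airflow_m3s_per_kw(delta_t_c):
            """Volumetric airflow (m^3/s) needed to remove 1 kW at a given temperature rise."""
            return 1000.0 / (RHO_AIR * CP_AIR * delta_t_c)

        for dt in (10, 18):  # assumed ~10C rise for a 1U box vs 18C (22C in, 40C out)
            print(f"delta T {dt:>2} C -> {airflow_m3s_per_kw(dt):.3f} m^3/s per kW")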

    BladeCenters from IBM only have two fans per 14 blades. You should read up on the fan laws. Each fan motor is 120W at 100% speed, but when running at a normal 30% or 50% speed it uses much less power, and certainly much less than the 4 to 6 fans in a 1U or 2U server.
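    (Quick illustration of the cube law, taking the 120W figure above as the 100% speed power; just a sketch, the helper name is made up:)

        # Fan affinity law: power scales roughly with the cube of fan speed.
        FULL_SPEED_POWER_W = 120.0  # per-fan power at 100% speed, as quoted above

        def fan_power_w(speed_fraction):
            """Approximate fan power draw at a given fraction of full speed."""
            return FULL_SPEED_POWER_W * speed_fraction ** 3

        for pct in (30, 50, 100):
            print(f"{pct:>3}% speed -> ~{fan_power_w(pct / 100):.1f} W")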

    A data centre that can't handle blades was buggered up already. You should probably invest in some CFD software so you can fix the problems; TileFlow is a good option. Blaming the servers for exposing the issues is pretty transparent.

    1. The Cube

      Blades

      22C air in is refrigerated air in most places. The high exhaust temperature is a good start but generally no help unless you have contained air flow; in a non-contained data centre there is no real advantage to having high exhaust temperatures from only some of the equipment. With containment, yes, higher return temps and lower air volumes are a real bonus.

      As for CFD, no; fixing the air flow problem by managing it properly (contained hot / cold) is a solution worth investing in, while fiddling about with moving floor tiles and CFD is just more wasted time, particularly as modern kit gets far smarter about variable fan speeds and variable heat output. Most CFD software assumes a single fixed heat output and airflow for each device, which is no longer a reasonable assumption in most data centres, with power management, VMotion and other tools coming into play. One would need another data centre just to run the CFD solves for all the power states in the data centre being fixed.

      Fan speed control is obviously an important part of it; larger fans are generally more efficient and, as you say, the cube law applies across a large part of the operating range, so the ridiculous little fans in 1U servers are never going to be efficient.

      However, there are plenty of data centres which were designed to low power densities, such as 2kW per rack, where operators have trouble installing blades without creating areas where intake air temperature becomes a problem for other equipment. Of course, proper air flow management would be a good start for many of these problems.

      In any case, none of this answers the question of why IBM's new wider, shallower rack is claimed to increase cooling capacity. (Not suggesting it doesn't, just wondering what the argument is.)

