Japanese nuke lab erects 200 teraflop super

Server maker Fujitsu has announced that the Japan Atomic Energy Agency will be building a 200 teraflops cluster based on Intel's 'Nehalem EP' Xeon 5500 processors and Fujitsu's blade form factor. JAEA is also buying two Sparc-based clusters, foundations for even larger petaflops-scale supers that Fujitsu plans to build using its …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Alien

    The lengths

    some people go to just to be able to run "Aero"...

  2. Kev K
    Black Helicopters

    But does it run

    [whatever the new version of Crysis is]

  3. Kevin 6
    Joke

    @Gobot

    Yeah, right. With Aero active its performance will probably still drop to that of a 386 running Windows 95.

  4. amanfromMars 1 Silver badge

    Japan hits ITs Purple Patch with Higher Performance Code.*

    Finally ...... Japan realising that in Order to get things done right, one has to be able to do everything on one's own, although that does not necessarily mean that one has to do IT alone. Bravo.

    And in SPARC HPC AIR&dD Circles, there are Complementary Parallel Fields which Increase Operating Systems Penetrations Testing and MetaData Acquisition and Spontaneous Production/Creation Rates, Exponentially, which Leads to Rapid Dominant SMARTer Control with PreProcessed, Chain Linked Power Supplies aka Source Lode Generators** . .....from and for NEUKlearer HyperRadioProActive IT and Intellectual Property Power Units.

    * For Virtual SuperPowers.

    ** New Fangled Entangling Lead Strings for Thread Weaving/Novel Application.

  5. Anonymous Coward
    Go

    Rumours of SPARC's death...

    ...have been greatly exaggerated!!

  6. Francis Vaughan

    Interesting

    There are probably a number of technical issues we don't get to hear about that will also come into play. The grunt per watt of the new Sparc will be interesting, and may make it a clear winner over x64. SGI's ccNUMA machines may be a hard habit to break, depending upon the simulation paradigms used. For many real codes the efficiency of coding in a single shared address space can mean the machine is much faster than a clustered system, even if both have similar Linpack benchmarks (see the sketch after the comments). Vastly nicer to code in, too.

    But scalability isn't easy at these sizes, and it is likely that simple porting of codes is the least of their problems, as the relative patterns of data, communication, and compute can shift significantly as the fidelity of the simulation changes. This may lead to significant structural changes in the simulation simply to avoid quite awful slowdowns in the code. In principle, it is these patterns that should drive the choice of architecture.

    But, as noted in the article, at this level, politics plays a very big part in the answer.

  7. Peter Gathercole Silver badge
    Thumb Up

    @AC on SPARC's death

    SPARC has always been a published architectural standard. That's what Sun wanted when it created the original SPARC (I still have the launch blurb in a box somewhere). It never really wanted to be in the chip foundry business, but to use partners to license, develop and produce the silicon. It never quite worked out as intended, but I believe that this fundamental way of working still exists.

    I expect SPARC to out-survive Sun by some considerable time, especially if Oracle decide to cut the SPARC standard free, which is what it deserves.

    Open-source hardware. What an interesting idea...
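
A minimal sketch of the shared-address-space point raised in comment 6: the 1-D stencil update below is written for a single shared address space using OpenMP, so the whole time step is one parallel loop with no communication code. On a clustered machine the same kernel would need explicit halo exchange between nodes every iteration. The grid size, step count and coefficients are made up purely for illustration; this is not drawn from JAEA's codes.

    /* sketch.c -- hypothetical 1-D diffusion stencil in a shared address space */
    #include <stdio.h>
    #include <stdlib.h>

    #define N     1000000   /* grid points (illustrative) */
    #define STEPS 100       /* time steps  (illustrative) */

    int main(void)
    {
        double *u    = malloc(N * sizeof *u);
        double *next = malloc(N * sizeof *next);
        if (!u || !next) return 1;

        /* single hot spot in the middle of the grid */
        for (long i = 0; i < N; i++) u[i] = (i == N / 2) ? 1.0 : 0.0;
        next[0] = next[N - 1] = 0.0;   /* fixed boundary values */

        for (int step = 0; step < STEPS; step++) {
            /* Every thread reads its neighbours' points directly from the
             * shared array.  A clustered version would first have to
             * exchange halo cells between nodes before this loop could run
             * on each node's local slice. */
            #pragma omp parallel for
            for (long i = 1; i < N - 1; i++)
                next[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1];

            double *tmp = u; u = next; next = tmp;   /* swap buffers */
        }

        printf("centre value after %d steps: %f\n", STEPS, u[N / 2]);
        free(u); free(next);
        return 0;
    }

Compiled with gcc -fopenmp, the pragma spreads the inner loop across cores; without OpenMP it is simply ignored and the code runs serially.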

