Japan takes the Top 500 lead with K super

Japan needs a little good news these days, and it comes from the International Super Computing 2011 conference in Hamburg, Germany, as the K supercomputer, a massively parallel Sparc-based cluster built by Fujitsu, has taken the lead in the number-crunching race as gauged by the June 2011 edition of the Top 500 supercomputer …

COMMENTS

This topic is closed for new posts.
  1. Dom 3

    And the ones not on the list?

    I wonder where GCHQ's systems fit in.

    1. NomNomNom

      uh

      it's a 486 running Windows 3.1

  2. ToddRundgren
    Paris Hilton

    Why why why

    Why oh why Linpack? What a waste of time, good only for people who do nothing productive but like to measure electronic dicks.

    At least Paris prefers flesh n blood

  3. Anonymous Coward
    WTF?

    Last para

    Is interesting, but what on earth is the US doing with all those systems? OK, the NSA will need quite a few to analyse all those phone conversations and trawl through all that internet traffic, but they have a hell of a lot...

    1. Hungry Sean

      re: last para

      Most of the US's top supercomputers are owned by the Dept. of Energy (DoE), which happens to be tasked with monitoring our nuclear stockpile. I guess a lot of the oomph goes into simulating nuclear decay and the effects of bombardment on electronics, so we know when various bombs need to be decommissioned. In their spare cycles, they carry out various complex simulations for understanding the effects of different sorts of war events and modelling new kinds of weapons.

      Wikipedia probably has good (and more detailed) entries on the primary missions of each of our big supers.

    2. Asgard
      Big Brother

      @"but what on earth is the US doing with all those systems"

      Here's a clue (the icon).

      The land of the free is becoming far from free. The US authorities have everyone so whipped up into such fear that most people can't see the growing tyranny appearing all around them.

      The irony is that one of the so-called "Founding Fathers of the United States" said it best: "Anyone who trades liberty for security deserves neither liberty nor security" - Benjamin Franklin

    3. Captain Thyratron

      It's the nukes.

      The thing that convinces the government to fund them has historically been nuclear weapons research. We don't get to test bombs anymore, so wouldn't it be nice if we could simulate every conceivable aspect of them? That's pretty much what they want and, really, it's the only sensible thing to do if you've got a nuclear arsenal and aren't allowed to pop the things off now and then. It's no coincidence that the biggest ones usually end up getting built at places like LANL, although universities are starting to acquire them too.

      However, once they're built, there's usually a lot of spare capacity, and that goes into everything from biophysical simulations to designing antennas. A great deal of American scientific work--even totally innocent stuff like cancer research and figuring out ways to clean up toxic things--benefits from the defense budget, and this is one of the ways. Any computer this powerful will have people lining up to use it, and many of them aren't even weapons engineers.

  4. Anonymous Coward
    Coat

    The "ceepies-geepies"...

    ... can run Crysis, at 32 megapixels and 60fps *for everyone who bought the game at the same time*. Cheers.

    Finally, someone can run Crysis... er no, wait, does it run in x86 too?

    Do they sell those with 10MW power plants too?

  5. Anonymous Coward
    Anonymous Coward

    Last fall?

    When's that?

  6. John 62

    Interconnects and efficiency

    How does the efficiency of a machine like the Tianhe-1A scale with the number of nodes used? I know it's a proprietary interconnect, but from what is known, could the efficiency per node be inferred? E.g. does the efficiency go up when using only a portion of the nodes, say if the system were used more for multiple jobs on multiple partitions than for single monolithic jobs? (A rough Rmax/Rpeak sketch follows below.)

    [PS I think Roadrunner said "meep, meep"]
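
    For what it's worth, the "efficiency" figure on the Top 500 list is just Rmax divided by Rpeak, so if Linpack runs on smaller partitions were ever published you could tabulate efficiency per partition size the same way. A minimal Python sketch using only the published June 2011 full-system figures (linpack_efficiency is just a name I made up):

      # "Efficiency" on the Top 500 list is sustained Linpack (Rmax) over
      # theoretical peak (Rpeak). Figures are the June 2011 full-system
      # numbers; per-partition figures aren't public.
      def linpack_efficiency(rmax_tflops, rpeak_tflops):
          return rmax_tflops / rpeak_tflops

      systems = {
          "K computer": (8162.0, 8773.6),   # Rmax, Rpeak in TFlop/s
          "Tianhe-1A":  (2566.0, 4701.0),
      }
      for name, (rmax, rpeak) in systems.items():
          print(f"{name}: {linpack_efficiency(rmax, rpeak):.1%} of peak")
      # K computer: 93.0% of peak; Tianhe-1A: 54.6% of peak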

  7. Anonymous Coward
    Anonymous Coward

    @"but what on earth is the US doing with all those systems"

    No: NSA/DoD systems don't show up here (hush hush). DoE has two parts: the NNSA (Roadrunner, Cielo) oversees the nuclear stockpile and does some related research, while the Office of Science (Jaguar, Hopper) does energy-related research (what it says on the tin). For instance, you can see some of the projects for Jaguar below.

    http://www.doeleadershipcomputing.org/wp-content/uploads/2011/01/2011INCITEFactSheets.pdf

    Also, most of the US machines (by count) aren't government, so they are probably doing drug discovery or trading stock.

    http://www.top500.org/charts/list/37/segments

  8. trstooge
    Linux

    Like everything in the top 10, it's running Linux

    Linux is the amazingly scalable OS running on that beast!

    1. ToddRundgren
      Flame

      How Scalable?

      It's not a single instance of the OS, you know. It's 17,000 (or whatever they said was the number of nodes) instances of Linux, so it scales to ~32 cores, big deal.

      1. Kebabbert

        @Todd Rundgren

        Do you have some links on your claim? I would like to read more, please.

        1. ToddRundgren

          @Kebabbert

          Links to what?? Cluster computing is a group/cluster of individual machines which are linked together with an interconnect (a network). Interconnects are either:

          Ethernet-based, for capacity computing (think particle physics work, or finance, e.g. QM), or

          Capability-based, whereby you require very fast, low-latency message passing between distributed codes, e.g. CFD or computational chemistry codes. Low-latency interconnects are: Quadrics, SCI, Myrinet and InfiniBand, plus a few proprietary ones from Cray, IBM, and this Fujitsu one (see the ping-pong sketch below for why latency matters).

          All compute nodes are standard Linux boxes, running a fairly standard Linux OS, although you might have added specialist maths libraries. Hence the "scalable Linux" statement was silly.

          If you still want links, look up "High performance computing".
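
          To make the latency point concrete, here's a minimal MPI ping-pong sketch in Python (assumes mpi4py plus an MPI implementation such as Open MPI; run it with "mpirun -np 2 python pingpong.py"). On a gigabit Ethernet cluster the one-way latency typically comes out at tens of microseconds; on InfiniBand or the proprietary interconnects it is on the order of a microsecond or two, which is exactly what capability codes care about.

            # Minimal MPI ping-pong latency sketch (illustrative only).
            from mpi4py import MPI
            import time

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()

            N_ITERS = 1000
            payload = bytearray(8)           # tiny message: latency-bound, not bandwidth-bound

            comm.Barrier()
            start = time.perf_counter()
            for _ in range(N_ITERS):
                if rank == 0:
                    comm.Send(payload, dest=1, tag=0)    # send, then wait for the echo
                    comm.Recv(payload, source=1, tag=0)
                elif rank == 1:
                    comm.Recv(payload, source=0, tag=0)  # echo the message straight back
                    comm.Send(payload, dest=0, tag=0)
            elapsed = time.perf_counter() - start

            if rank == 0:
                # One round trip is two messages, so halve the per-iteration time.
                print(f"average one-way latency: {elapsed / N_ITERS / 2 * 1e6:.1f} us")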

      2. Anonymous Coward
        Anonymous Coward

        "Its not a single instance of the OS you know"

        I've not seen any information about the detailed architecture. Is it a ccNUMA type? In which case the number of OS instances could be far fewer than 17,000.

    2. trstooge

      Link to the Linux source

      Link? The Top 500 website. Click on every single computer in the top 10 (and about 98% of the entire Top 500 list) and you'll see:

      OS: Linux

      Be it a single OS or thousands of Linuxes, they still need to communicate very effectively to grab 98% of the entire Top 500 list.

      It's not like someone could replace all these Linuxes with, say, the ATARI 512's TOS, and still grab 98% of the entire Top 500 list.

      1. Kebabbert

        @trstooge

        Yes, but Todd Rundgren is correct.

        These supercomputers are basically a large cluster on a fast switch. You just add a new PC to the network and, voila, you have increased performance. So it has nothing to do with scalability in the sense of one large SMP computer, such as an IBM POWER 795 with as many as 32 POWER7 CPUs, or an Oracle M9000 server with as many as 64 CPUs.

        When we talk about one single large SMP computer, Linux is never run on it, because Linux scales badly vertically. Linux scales to ~32 cores or so on one large server.

        Linux scales excellently in a large cluster with lots of PCs (good at horizontal scaling), but scales extremely badly on one single large server (vertical scaling). Linux's merit is on a large cluster: Google runs a large cluster of Linux servers. There exists no Linux server with as many as 32 CPUs. But there exist large supercomputers which are basically a cluster, for instance the SGI Altix with 1024 cores - which is just a bunch of blade PCs on a fast network. (There's a rough Amdahl's-law sketch of why vertical scaling is hard below.)
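
        To put a number on why vertical scaling is hard in general (this is just Amdahl's law as a toy model, not a measurement of Linux or any particular kernel): if even 5% of the work is serialised by locks, a single run queue or cache-coherence traffic, the speedup is capped at 20x no matter how many cores you add.

          # Toy Amdahl's-law illustration; the parallel fraction is assumed,
          # not measured from any real workload.
          def amdahl_speedup(cores, parallel_fraction):
              serial = 1.0 - parallel_fraction
              return 1.0 / (serial + parallel_fraction / cores)

          for cores in (8, 32, 128, 256):
              print(cores, round(amdahl_speedup(cores, 0.95), 1))
          # 8 -> 5.9, 32 -> 12.5, 128 -> 17.4, 256 -> 18.6: past ~32 cores
          # the extra cores buy very little if 5% of the work stays serial.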

        1. Anonymous Coward
          Anonymous Coward

          "which is just a bunch of blade PCs on a fast network."

          Strange, then, that it runs just one copy of Linux per 2048 cores.

          1. Kebabbert

            @A.C

            Sure it runs 2048 cores, just as the SGI Altix server does. But it's just a bunch of PCs on a switch.

            Let me ask you: have you thought about this?

            IBM's biggest Unix server, the P795, has 32 CPUs

            IBM's biggest mainframe, the z196, has 24 CPUs

            Oracle's biggest Unix server, the M9000, has 64 CPUs

            HP's biggest Unix server (Integrity?) has 64 CPUs (I think)

            And they fiercely fight for benchmarks. IBM was so proud of their P595 TPC-C benchmark world record. Why can't IBM simply put 64 CPUs in the P795? Why did IBM have to rewrite the old and mature enterprise AIX that had run on big Unix servers for decades when the P795 was to be released? The P795 has 256 cores, and that was too much for AIX to handle. The earlier P595 Unix server had 128 cores, which was manageable by AIX. Why doesn't IBM put in 64 CPUs? Or even 128 CPUs? Are there some difficulties when you don't do clusters?

            Why does Linux stutter on SMP servers with 32-48 cores?

            1. Anonymous Coward
              Anonymous Coward

              And yet

              the SGI has a single system image running on 2048 cores, not 2048 copies of Linux.

  9. Kiralexi

    Why should this be used in commercial chips?

    The oomph of the SPARC64 VIIIfx comes from a custom-designed, HPC-oriented vector instruction set called HPC-ACE. The scalar components are essentially slightly tweaked, cache-starved, low-clocked versions of the existing (slow) SPARC64 VII core. An 8-core SPARC64 VII at 2GHz with a smaller cache doesn't exactly sound like the Holy Grail of commercial computing, so I can't see why they would be "fools", as you said, not to commercialize it.
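
    For anyone wondering what "the oomph comes from a vector instruction set" means in practice, here's a rough software-level analogy in Python/NumPy: the same AXPY-style kernel written element by element versus as one whole-array operation. It only stands in for the general vector/SIMD idea; it obviously doesn't use HPC-ACE or SPARC hardware, and the timings depend entirely on the machine it runs on.

      # Rough analogy for why wide vector units pay off on HPC kernels.
      import time
      import numpy as np

      n = 1_000_000
      a, x, y = 2.5, np.random.rand(n), np.random.rand(n)

      t0 = time.perf_counter()
      z_scalar = [a * xi + yi for xi, yi in zip(x, y)]   # one element at a time
      t1 = time.perf_counter()
      z_vector = a * x + y                               # whole-array (vectorised) AXPY
      t2 = time.perf_counter()

      print(f"element-by-element: {t1 - t0:.3f}s, vectorised: {t2 - t1:.3f}s")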

  10. Asgard
    Happy

    @"K super consumed 9.89 megawatts"

    I have to say that the "K supercomputer at Riken" looks like an impressively big room, although I pity the poor engineers who have to descend into those dark server tunnels between the racks. It would be best to tie a rope around their middle before they go in, so that if they pass out from the server heat they can be dragged back out again before they cook! ;)

  11. alwarming
    Thumb Up

    "Near Kobe, Japan"

    Must be a Beefy system then.

  12. MMcA
    Black Helicopters

    10 megawatts? Pah...! The NSA's latest will pull 60 MW

    In fact, they have two...

    http://www.theregister.co.uk/2011/05/04/nsa_fort_mead_data_center/

    Will still be dwarfed by Charity Engine.

  13. Tom 7

    Try a few multicore Raspberry Pis

    and a few USB hubs and you could probably do that on a car battery in a couple of years...

  14. Slimster

    Enough juice?

    Do the poor Japanese have permission to turn it on more than once a year? As a result of the recent disasters they are experiencing power cutbacks. Offices are running aircon at 80 degrees (at a very hot/humid time of the year), and factories are changing their working week to flatten out demand too. Turn this beastie on and half the nation's lights will go out.
