HP haters: Get ready to rage against THE MACHINE 'next year'

HP is still hard at work at the futuristic computing platform it has dubbed the Machine, and staffers from HP Labs were on hand at the HP Discover conference in Las Vegas this week to give an update on its progress. There was even sample hardware on display, which HP Labs' Jaap Suermondt said was "evidence that we're actually …

  1. Nick Ryan Silver badge

    Initially I was quite (or very) skeptical when I read about this new computing platform. However, the way the design has been rotated around to be memory-centric rather than CPU-centric is rather clever and really could lend itself to some computing tasks very well. I suspect that even if it works, at first it will be a very niche product, but it should scale down as well as up, and a small version of it could be equally innovative. In some ways systems are already getting closer to this, with multi-core processors, multiple processors (e.g. one low power, one high power) and general-purpose GPU systems with hundreds of parallel processors.

    I'd be testing it with DRAM in exactly that way if the core memory tech wasn't ready yet. Good modular design will help the thing in the long run anyway.

  2. Anonymous Coward
    Anonymous Coward

    Zeta will solve all our energy problems by 1970 and we'll be living on Mars by 1999.

  3. Anonymous Coward
    Anonymous Coward

    is this not glorified ssd? It's all well and good having your storage as the base for your data but it still needs processing be that with GPU or a CPU before it's put in memory or while it's in memory performing tasks.

    Also would it not be better to further integrate the controllers to increase speed rather than messing about changing the architecture?

    If my comments are those of an idiot, could someone please explain? I would be appreciative.

    Thanks.

    1. asdf

      >is this not glorified ssd?

      If by "glorified" you mean supposedly an order of magnitude or two faster than what is possible with an SSD, then yes.

      >It's all well and good having your storage as the base for your data but it still needs processing be that with GPU or a CPU before it's put in memory or while it's in memory performing tasks.

      Again, supposedly the whole edge of the technology, if it comes to market, is having RAM speeds in permanent storage, so you no longer even say whether it's in memory or on the disk. They are the same.
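
To make that concrete, here's a rough sketch (my own illustration, not HP's actual API) of what "memory and storage are the same" looks like to a programmer, using a plain mmap'd file as a stand-in for byte-addressable non-volatile memory:

```python
import mmap
import os
import struct

# A small file stands in for a region of byte-addressable non-volatile memory.
path = "nvm_demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Map it: reads and writes become ordinary memory operations, but the data
# persists (on real NVM it would survive power loss, too).
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    struct.pack_into("<Q", mem, 0, 42)  # a plain store into "persistent memory"
    mem.flush()                         # analogous to a persistence barrier
    mem.close()

# A separate mapping sees the value with no explicit load-from-disk step.
with open(path, "rb") as f:
    mem = mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_READ)
    value = struct.unpack_from("<Q", mem, 0)[0]
    mem.close()
os.remove(path)
print(value)  # 42
```

The point is that there is no separate "save to disk" step at all: the store instruction is the write to permanent storage.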

    2. Anonymous Coward
      Anonymous Coward

      "is this not glorified ssd?"

      Not quite... Yes, it's a (much faster) non-volatile storage tech, similar to the flash used in SSDs...

      However, memristor technology can, in theory, be used not only to store data but also to process it, potentially merging memory and processing functions into a single unit. That makes it a technology potentially capable of simulating a biological neural network much more closely and efficiently than current technologies can.

      That's a very interesting concept...
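
For the curious, the "storage that also computes" idea can be modelled in a few lines: a memristor crossbar stores a matrix as cell conductances, and applying input voltages to the rows yields a matrix-vector product as the column currents. A toy numerical sketch (the values are made up purely for illustration):

```python
import numpy as np

# Toy model of a memristor crossbar. Each cell stores a conductance G[i][j];
# applying voltage V[i] to row i makes column j sum its currents,
# I[j] = sum_i V[i] * G[i][j] - a matrix-vector multiply performed by the
# memory array itself, with no separate ALU involved.
G = np.array([[0.2, 0.5],
              [0.1, 0.4],
              [0.3, 0.6]])      # stored "weights" (conductances, illustrative)
V = np.array([1.0, 0.5, 2.0])   # input voltages applied to the rows

I = V @ G                       # column currents = the computed result
# I == [0.85, 1.9]
```

This analogue multiply-accumulate in the array is exactly the operation neural-network inference is built from, which is why the neural-simulation angle comes up.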

  4. Nolveys
    Trollface

    HP has also proposed changes to their PC manufacturing processes that would result in much higher QA. The plan is to go to the end of each assembly line and install a massive dumpster.

    1. Teiwaz

      dumpster??

      Go all in, advise them to employ Oscar the Grouch (address: Sesame Street) as QA Manager.

      Personally, I think it's an interesting concept with a lot of potential should they manage to manifest it.

      Summary from Discover 2014

      http://www.youtube.com/watch?v=jcmsby8jDKE

    2. Anonymous Coward
      Anonymous Coward

      >install a massive dumpster.

      Matches the dumpster fire for a board they have had in their fairly recent past.

    3. Anonymous Coward
      Anonymous Coward

      Stack all the dumpsters and you have a data center. Perfect for the next iCloud expansion.

  5. asdf

    The real bombshell

    >HP is now exploring alternative non-volatile memory technologies for the Machine

    Wow. Considering I assumed this project was serving somewhat as a PoC for memristor technology in general, that 2018 availability date (it was originally 2014) is looking very optimistic now. It's Itanium all over again, except it may never come to market.

  6. PowerMan@thinksis

    IBM POWER8 has this today

    After reading the article and the comments I'm dumbstruck by HP. They failed with Itanium, failed with the cloud and in software, shed their instrument business years ago, and are now splitting into two companies, all while developing a brand new server, compute and memory technology running a new OS - on their own! Why, for God's sake?

    IBM's POWER8 servers using the CAPI interface (http://www-304.ibm.com/webapp/set2/sas/f/capi/home.html) attached to their FlashSystem 840 storage take a 48 TB 2U flash array and, after applying RAID with a hot spare, create a 40 TB RAID-protected memory extender from the server's base 2 TB. Fast and resilient, in 2015. The solution is called IBM Data Engine for NoSQL, based on the Redis Labs NoSQL product: http://www-03.ibm.com/systems/power/solutions/bigdata-analytics/data-engine-nosql/

  7. Anonymous Coward
    Anonymous Coward

    That's a fail. They are stalling. Obviously the engineering samples from their silicon foundry have not turned out as planned. No Memristor memory, just DRAM now. Then it is not anything that other people aren't doing. You can buy a very nice 32 Tbyte (DRAM) server from Oracle for only one million dollars. That's not much of a markup in that segment. If you want to do memory based AI research it is a bargain.

    1. asdf
      Trollface

      dafuq?

      You said Oracle and bargain in the same context. Hahahaha good one.

  8. Christian Berger

    How will that work....

    ... with normal operating systems? Are they trying to re-invent that, too? If they do, kudos to them, but I doubt they will make it as only a small fraction of designs for "operating systems" are actually successful.

    1. Hans 1

      Re: How will that work....

      Linux and the BSDs will be ported, if it works out to be worthwhile ...

      The thing that puzzles me is the architecture. You can reboot the CPUs without touching memory, great, but what does that mean? A reboot becomes a sleep/wake cycle? If so, why reboot in the first place?

      I do not get it ... :-(

    2. Epobirs

      Re: How will that work....

      It won't. Part of the project is the development of an OS designed from the ground up for a completely NVM environment. They claim they will have a Linux based simulator for this available long before the hardware ships, so that developers can wrap their heads around the changes and get some application work underway before The Machine is a shipping product. We'll see.

      Meanwhile, there are so many new technologies being incorporated into this that it seems impossible anything like the intended product will ever come to pass. But that isn't necessarily failure. Many of these technologies, taken on their own, would be huge money makers. Most obvious is the memristor memory. Even lacking an NVM-oriented OS, a memristor product with performance and density competitive with DDR4 could get away with being a bit more costly, given the value of its non-volatile nature. The advantage to mobile devices alone would be a very profitable market.

      How serious HP is about all of this stuff happening in one box is hard to see. They may be delusional or it may be purely PR. But at least they're trying to bring something new to the world. Even if it kills the company the work will be there for somebody else to pick up the pieces.

  9. James Cane

    Rebooting the CPU?

    Sounds really exciting. Computation has been basically the same since it got going, so seeing a fundamentally new architecture is encouraging. The kludge between local storage and RAM was always effectively a cost and technology workaround anyway.

    That said, what does "rebooting a CPU" even mean? Is that just clearing the registers and starting a new stack, or am I missing something?

  10. wolfetone Silver badge
    Coat

    Will it run Fallout 4?

  11. Roj Blake Silver badge

    Obligatory Person of Interest Comment

    According to documentary I saw recently, the Machine has already been built by a reclusive billionaire with a limp.

  12. Matthew Taylor

    HP's new moonshot is.... a SERVER! Really going for it with the old moonshots, there...

  13. auburnman

    I'll stick with volatile memory for now

    Sometimes you just have to suck it up, pull the plug and start over as the only way to get back to a usable system. I don't know if I want a system that will try to go back to whatever shit state it was in before the power cycle.

    The secure-from-the-ground-up design and the ludicrous data transfer speeds sound much more promising.

    1. asdf

      Re: I'll stick with volatile memory for now

      You must not be a big ZFS or VM guy, or the first word that would come to mind would be snapshots. They are your friend and, for exactly the reasons you list, would become vital with this technology.
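
For anyone unfamiliar with the idea: a snapshot is a cheap point-in-time copy you can roll back to, which is exactly the escape hatch you'd want when memory never forgets. A toy model of the semantics (my own sketch, not how ZFS actually implements it):

```python
import copy

class PersistentStore:
    """Toy model of snapshot/rollback semantics on a non-volatile store."""

    def __init__(self):
        self.state = {}
        self.snapshots = {}

    def snapshot(self, name):
        # Capture a point-in-time copy, like `zfs snapshot pool/fs@name`.
        self.snapshots[name] = copy.deepcopy(self.state)

    def rollback(self, name):
        # Discard the current (possibly wedged) state and restore the
        # snapshot, like `zfs rollback pool/fs@name`.
        self.state = copy.deepcopy(self.snapshots[name])

store = PersistentStore()
store.state["config"] = "good"
store.snapshot("known-good")
store.state["config"] = "corrupted"   # the bad state a power cycle used to clear
store.rollback("known-good")
print(store.state["config"])  # prints "good"
```

So instead of pulling the plug to flush a wedged state, you'd roll the persistent memory back to a known-good snapshot.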

  14. Andy 73 Silver badge

    Not server++

    Maybe I'm optimistic, but those posters complaining on the grounds that this won't run their web service any faster (if at all, with its new OS) have missed the point.

    In the data crunching corner of the world, most of the innovation is around describing 'non traditional' processing tasks and then mapping those (very painfully) on to traditional hardware. Everyone will tell you that adding another off the shelf node to a compute cluster is cheap and you can expand to build a cluster capable of handling the large loads that big data, inference and graph compute problems throw up. The problem is that big clusters do not scale linearly when it comes to reliability, and network and disk effects mean that at least half of a cluster's energy goes into overcoming the dead weight of having compute resources that don't match the task. We don't actually want a vaster, more manageable cluster of Linux boxes, we want a compute resource matched to the process description.

    Soooo.. as Mars shots go, this could make some sense. We're starting to describe processing in terms of directed graphs of actions, which can be mapped to both batch and real time work loads. An architecture that starts with the premise of many actors consuming a vast store of messages in a robust and scalable way would potentially outperform today's clusters by orders of magnitude. Given the cost of provisioning and maintaining a modern cluster, the exotic nature of the Machine may be a small price to pay.

    I'm reminded of the early introduction of NUMA machines, which suddenly introduced capabilities that allowed tasks that used to be done by a building full of mainframes to be done by a box that sat under your desk. This architecture could potentially do the same to clusters - and not by virtualising thousands of machines into one box.

    1. asdf

      Re: Not server++

      >Maybe I'm optimistic,

      As far as HP ever changing the world with memristor technology goes, yes, you are. For the general concept of eventually merging all storage, including memory, perhaps not.

  15. Ken Moorhouse Silver badge

    Makes sense to me

    But doesn't there still have to be hierarchies?

    It would be wasteful for the memory to be homogenous in terms of speed. Access for graphics purposes has to be faster than for storage purposes, so you're still going to segregate memory for particular functions.

    Also, in booting up the memory stack there needs to be a priority hierarchy; otherwise you will end up waiting ten minutes for the entire stack to warm up when just a subset would be adequate for the target bootstrap process. I appreciate, though, that once it's booted up it's never taken off-line.

    1. asdf

      Re: Makes sense to me

      >It would be wasteful for the memory to be homogenous in terms of speed.

      Wasteful for what? If the technology is eventually faster than any use we can find for it (the bottleneck is somewhere else), super low power, and dirt cheap in enormous volume, what is getting wasted? Not saying we are going to get there anytime soon, but insisting there has to be a hierarchy seems to be a mistake.
