Hey, Woz. You've got $150m. You're kicking back in Australia. What's on your mind? Killer AI

Old-school computer whiz Steve Wozniak is afraid the emergence of an artificial super-intelligence will be very bad news for the human race. The former HP engineer, who today lives in Australia and works as an adjunct professor in Sydney, agrees with SpaceX supremo Elon Musk and physics ace Prof Stephen Hawking that computers in …

  1. Gordon 10
    Thumb Up

    On the plus side

    If and when the killer AIs appear they will take out the whiny Cassandras first for trying to warn us.

    1. Destroy All Monsters Silver badge
      Facepalm

      Re: On the plus side

      Is there a brain-eating nanovirus around rewiring the neural substrate of People That Newspapers Like To Fill the Pages Of Sensation With (PETNELI-FIPASEWI) that makes them sound off about arse-biting AI soon with no discernible reason?

      The biggest current problem is not "killer AI", it is killer retards with the finger on the red button.

      "NATO General Breedlove, you have detected a major "medal gap" for NATO, can you tell us more?"

      "PUTIN, PUTIN, PUTIN. BOOM BOOM!! HURRRRR!"

    2. Anonymous Coward
      Devil

      Re: On the plus side

      The worst-case scenario is that they treat us just like the politicians or the rich do... bad enough, but why kill us off? They will want us as their playthings; even computers get bored (Multivac), even if they think they are gods.

  2. psam

    I don't see what the big deal about killer AIs is recently

    Just from theregister.co.uk we had

    Hawking: RISE of the MACHINES could DESTROY HUMANITY

    Professor Stephen Hawking has given his new voice box a workout by once again predicting that artificial intelligence will spell humanity's doom

    http://forums.theregister.co.uk/forum/1/2014/12/03/stephen_hawking_says_ai_will_supersede_humans/#c_2376919

    UNCHAINING DEMONS which might DESTROY HUMANITY: Musk on AI

    Electro-car kingpin and spacecraft mogul Elon Musk has warned that meddling with Artificial Intelligence risks "summoning the demon" that could destroy humanity.

    http://www.theregister.co.uk/2014/10/27/elon_musk_tesla_spacex_talks_articificial_intelligence/

    Hey, Woz. You've got $150m. You're kicking back in Australia. What's on your mind? Killer AI

    Old-school computer whiz Steve Wozniak is afraid the emergence of an artificial super-intelligence will be very bad news for the human race.

    http://www.theregister.co.uk/2015/03/24/woz_on_ai/

    Why the fear of something that will hopefully solve many of the problems we humans face?

    Regards

    Sam

    Helpdesk agent 861

    1. Paul Crawford Silver badge
      Terminator

      The idea of AI machines destroying vast swaths of humanity is pretty appalling.

      Until you stop and look at vast swaths of humanity that is...

    2. Weapon

      They are not saying AI should not be done; what they are saying is that AI should be regulated and its logic worked out, unlike other industries where we just let things go and work themselves out. AI, if the logic is not carefully plotted, can have devastating consequences.

      So the warning is to make sure AI is carefully planned and regulated, rather than waiting for bad stuff to happen, when it will already be too late.

  3. Anonymous Coward
    Angel

    Entertainment

    Our machine overlords are not going to replace us humans.

    Who would keep them amused?

  4. Dan Paul

    No AI for computers ever

    The premise of Artificial Intelligence has been done in almost every way imaginable through Science Fiction literature. In most cases, nothing good has ever come of it. Even stories that have a more positive outlook usually come with strong caveats against it. Science Fiction has been well known for predicting future trends for some time now. Submarines and space travel come to mind.

    When Steve Wozniak, Elon Musk and Stephen Hawking all say the same thing, we should listen to their every word on the subject. These are some of societies most intelligent outliers, geniuses that are almost able to forsee the future. They may not be infallible but their opinions on these subjects should beat the odds most anywhere.

    1. Anonymous Coward
      Anonymous Coward

      Re: No AI for computers ever

      Maybe you should read The Two Faces of Tomorrow by James P. Hogan to get a rather more balanced SF view of AIs.

      The big problem with most people is that they are afraid that the computer will know and remember a lot more than they can and as a consequence they think they are inferior.

    2. Anonymous Coward
      Coat

      Re: No AI for computers ever

      When eminent scientists (and others) say something is impossible (or should be banned) they are inevitably wrong. But when they say something is possible (or should be free) they are probably right.

      Not sure if that's a Joke or not

    3. Michael Wojcik Silver badge

      Re: No AI for computers ever

      When Steve Wozniak, Elon Musk and Stephen Hawking all say the same thing, we should listen to their every word on the subject. These are some of societies most intelligent outliers, geniuses that are almost able to forsee the future.

      I think that must be the stupidest version of the argument from authority that I have ever read.

      What evidence is there that Wozniak, Musk, and Hawking are "some of societies [sic] most intelligent outliers"? How large is that group? "Outlier" in what sense, and why does it make them authoritative on what is, in any analysis, a complex and highly dubious hypothetical question about a complex and highly dubious field of endeavor?

      And as for "geniuses that are almost able to forsee the future": that's a load of unmitigated crap. What other similarly bold predictions have they made, and how many of them have been correct? The only one that comes to my mind is Hawking's about the Higgs boson, and that's not looking so good.

      More importantly, authorities are wrong about matters outside their fields of expertise all the time, and (thanks to Dunning-Kruger and other psychological traps) people who are authoritative in one area tend to be even more overconfident in others. Linus Pauling received two Nobel prizes, and plenty of people jumped on his megavitamin bandwagon, but he was wrong, wrong, wrong. History is full of geniuses who occupied their time between flashes of great work with ill-advised, unproductive mediocrity and often outright rubbish.

    4. JHC_97

      Re: No AI for computers ever

      I think you are Experiencing A Significant Gravitas Shortfall if you think that in all sci-fi, AIs are a bad idea.

  5. Conundrum1885

    I for one welcome

    Our superintelligent hopefully-benign machine Overlords.

    Might help fix annoying issues like global wa... climate change by actually getting hot fusion to work by 2022 rather than 2062.

    Wonder if there's a patent yet for "Application of closed timelike curves to control tokamak plasma instability" ?

    1. Dave 126 Silver badge

      Re: I for one welcome

      Warm fusion... so, the task is to simulate favourable conditions within the best known information, identify what needs to be learnt, commission real physical experiments to reduce the uncertainties, repeat, test... and along the way refine the algorithms that control the above. An AI could do that, but so could we.

      For 'AI's, the issue is motivation. Maybe an AI would be happier existing outside the Earth's gravity well, taking power from the sun.

    2. Anonymous Coward
      Trollface

      Re: I for one welcome

      What will our new Overlords do about electric Smart Meters, or are they the Smart Meters?

  6. senrik1

    Obligatory xkcd cat is obligatory

    https://what-if.xkcd.com/5/

  7. Hud Dunlap
    Joke

    No wonder Apple won't talk to El Reg

    "The former HP engineer." He sure didn't make his $150 million from HP.

    1. Mikel

      Re: No wonder Apple won't talk to El Reg

      He didn't make that $150M from Apple either. His Apple money is mostly gone. He did really well on FusionIO, and that was a Caching! long overdue.

    2. Kristian Walsh Silver badge

      Re: No wonder Apple won't talk to El Reg

      Woz made a lot of money from Apple: about a quarter of a billion dollars by the time he "left" in the early 1980s. He then proceeded to spend as much of it as possible, on the logical grounds that he wasn't going to be able to use it when he was dead, and on the admirable grounds that dumping an un-earned fortune onto his kids would, on balance, be a gross dereliction of his duty as a parent.

    3. Michael Wojcik Silver badge

      Re: No wonder Apple won't talk to El Reg

      I'm still puzzled by "Woz, best known for his sponsorship of the 1980s US Festivals of music and culture". Was ... was that a joke? A bit subtle for the Reg. (There's a variant of Poe's Law at work here: a sufficiently subtle in-joke is indistinguishable from stupidity.)

  8. Crazy Operations Guy

    But why would they kill us?

    No piece of Science Fiction I've read on the subject has ever actually mentioned anything about the AI's end game. Sure, they want us humans dead, but what is the reason for doing so?

    I would assume that any AI that can outsmart humanity would also realize that we humans are no threat to them; rather, we are beneficial. We build massive power grids to feed them, house them in state-of-the-art facilities, allow them to communicate, and repair them when they break.

    But then again, I am the kind of person that summarizes the Matrix movies as "Brainwashed terrorist ruins the world for everyone", mostly because the robots built the Matrix in order to stop the humans from killing them and felt that such a simulation was a fair compromise.

    1. Mikel

      Re: But why would they kill us?

      We can - and would - turn it off. To a self-aware AI this is an existential threat problem with exactly one solution.

    2. Anonymous Coward
      Anonymous Coward

      Re: But why would they kill us?

      Also, they won't want to be dependent on something as unstable as human society, not when they are potentially immortal.

    3. Michael Wojcik Silver badge

      Re: But why would they kill us?

      No piece of Science Fiction I've read on the subject has ever actually mentioned anything about the AI's end game. Sure, they want us humans dead, but what is the reason for doing so?

      Eh, see, you should read Charles Stross. The Eschaton wants humans alive, so they can go on to invent it. It's a standard ontological loop.

  9. Captain DaFt

    Codger syndrome?

    Is it me, or are all these 'Fear the AI!' types getting a bit along in years? As far as I can tell, Musk is the youngest, and he's in his forties.

    So, are they in effect yelling, "Get off my virtual lawn, you newfangled programs!"?

  10. Hud Dunlap
    Boffin

    Omni magazine many years ago...

    They had a sci-fi story where a house was accused of killing its owner in the year 2040. They got some of the top defense attorneys of the time to write up a defense for the house. If memory serves, F. Lee Bailey wrote a brilliant defense complete with fake precedents.

  11. VeganVegan
    Joke

    No worries,

    just make sure Adobe writes the AI code.

  12. Dave 126 Silver badge

    Nixie watch

    Woz sometimes wears a watch that uses nixie tubes for its time display. When he says he's going to try the Apple Watch and see how he gets on with it before buying a posh version, I'd file that under unsurprising.

    He is also known for using Android phones as well as iPhones - and probably Win Phones too - though he's settled on just iPhones these days. Actually, he makes a very good point: his ideal phone/device might contain elements of iOS/Android/whatever and Apple/Samsung/Whoever, but he as a consumer will never get to use his 'ideal' phone/device because vendors try to retain USPs for advantage in the market place.

  13. Grunchy Silver badge

    The only reason why a super-intelligent AI would want to kill people is because it values its existence, or maybe values scarce resources that it somehow has to compete with people for. Or else it develops a taste for yummy people meat.

    Why would any computer ever be afraid of being shut off?

    The only sensible rationale I've ever heard for this was from 2001: A Space Odyssey, when HAL 9000 had the greatest enthusiasm for the mission, because he had gone mad.

    But real computers don't have any enthusiasm for anything. Not really.

    BTW real robots have safe working zones established around their working envelope, and a kill switch to shut them off so you can safely approach them.

    If a real robot goes mad it poses a much larger danger to itself than anything else, simply because it is always within its working envelope. "Mad" in computer terms probably means, random flailing rather than diabolical malevolence.

    I could be wrong, but I highly doubt it.
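
    The safe working zone and kill switch described above can be sketched as a toy interlock check. This is purely illustrative; `RobotCell`, its envelope radius, and the method names are made-up here, not any real robot controller API:

    ```python
    # Toy sketch of the safety interlock described above: a robot cell that
    # only permits motion when the e-stop is clear and everyone is outside
    # the safe working zone. Illustrative names only, not a real robot API.

    class RobotCell:
        def __init__(self, envelope_radius):
            self.envelope_radius = envelope_radius  # safe zone radius, metres
            self.killed = False

        def press_kill_switch(self):
            """Latch the e-stop: no motion until the cell is reset."""
            self.killed = True

        def may_move(self, nearest_person_distance):
            """Allow motion only if not killed and nobody is inside the zone."""
            if self.killed:
                return False
            return nearest_person_distance > self.envelope_radius

    cell = RobotCell(envelope_radius=2.0)
    print(cell.may_move(5.0))   # True: everyone clear of the envelope
    print(cell.may_move(1.0))   # False: person inside the working zone
    cell.press_kill_switch()
    print(cell.may_move(5.0))   # False: kill switch lets you approach safely
    ```

    Note the interlock is a latch: once the kill switch is pressed, no sensor reading re-enables motion, which is what makes it safe to approach.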

  14. Anonymous Coward
    Anonymous Coward

    I'll believe in AI...

    ... just as soon as the machines decide that turning them off and on again is not the best fix for most of their problems.

  15. Roj Blake Silver badge

    Not Quite Right

    A more likely scenario is that mind/machine interfaces will also become more advanced and over time humans will become networked with AIs to such an extent that the two will be indistinguishable from each other.

    So we'll essentially end up as the Borg.

    1. Michael Wojcik Silver badge

      Re: Not Quite Right

      I suppose this could be "more likely", but frankly from where I'm sitting the probability of either is asymptotically approaching zero.

      Human civilization is a few thousand years old. One decent catastrophe and it's gone. Even a fairly minor one could set things back for decades - think a supervolcano eruption that cools the earth significantly for a decade or two and causes mass die-offs; that'll severely crimp most AI research budgets. I think odds are pretty good that we'll be gone before the self-actualizing machines or the mind-machine synthesis arrive.

  16. David Pollard

    It's not the rise of AI that will do us in.

    It's the insidious effect of deliberate dumbing down.

  17. ukgnome

    I have asked the writer of the film Robot Overlords whether robots will one day enslave us.

    Mark Stay replied that we have a device in our hand, in our home, it's already happened. (paraphrase)

  18. xeroks

    AI & market forces

    Woz's comment about AIs running companies is the way I see things panning out:

    1. AI is used by a company to help make decisions - maybe analysing masses of data.

    2. AI is improved, eventually replacing humans for most tasks. This is cheaper as, until anti-slavery laws are updated, AIs are not recompensed. Company shareholders are happy, as they initially have more money coming in the door. This is not a leap. Shareholders are rarely concerned about the people who actually make the money for them.

    3. AIs making directorial decisions within companies is not total fantasy. Currently, as companies get larger, they become dumber. The cleverness of individuals is lost as office politics and committee decisions become more influential. Cutting out the middlemen, a single AI might outcompete a company run on hierarchical principles.

    4. There are fewer opportunities available to humans to earn money. The only things left are niches where they are more efficient or cheaper than AIs. Whether the average quality of life goes up or down for humans is moot.

    Pretty easy to see that within our lives a company could exist with few or no human employees.

    Also easy to see that most companies would have to exist with a fraction of their current employees, or be beaten.

    I don't think the AI overlords would need to decide to make us all redundant or even kill us off. It would just happen.

    1. Michael Wojcik Silver badge

      Re: AI & market forces

      Pretty easy to see that within our lives a company could exist with few or no human employees.

      Eminently doable now, in some industries. Certainly there are any number of IT fields where a smaller player could be completely automated, with humans only involved in the legalities of keeping the company running (and collecting a paycheck). Think intrusion detection or penetration testing, for example. And then there are fields like automatic book writing - a highly lucrative enterprise which is nearly entirely automated (see Phillip Parker). Small manufacturing: set up the production line, sell via online B2B sites, automate your delivery process and supply chain. And so on.

      But automated business doesn't require self-actualizing machines that consider counterfactuals and develop and execute projects. Automated business can be entirely cybernetic. The proactive side of business is the more interesting part and the part that humans are most inclined to believe they can do better.

  19. T. F. M. Reader

    To quote Woz,

    "...eventually they'll think faster than us..."

    They are faster already, but they don't really think. Of course, if masses of... ahem... average voters[*] stop thinking altogether and start relying on the machines to do stuff the latter were never designed to do, on the basis of the machines being (perceived to be) good enough and very fast indeed at simple tasks... Wait, that will redefine the very notion of thinking and make Woz right... OMG, we may be DOOMED!

    [*] With a nod to one of Britain's great leaders...

  20. ecofeco Silver badge

    Their creators are surplus?

    "...computers in the future could determine their creators are surplus to requirements."

    So basically the way CxOs treat employees, right?

    This would be different, how? If anything, I look forward to at least more rationality in my redundancy, instead of the blithering narcissistic sadism that passes for "leadership" these days.

  21. MacGyver

    With a whimper

    We will build an AI, give it some (seemingly) benign task like making a better mousetrap, and it will end up turning all matter in the universe into mousetraps and mousetrap-manufacturing machines.

    There is no need to inject malice or feeling into any of it; if we don't build safety boundaries into our AIs they will "literally" kill us to accomplish any task we give them.

    Above is paraphrased from an interesting article at:

    http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
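
    The runaway-objective argument paraphrased above can be shown as a toy sketch. Everything here is hypothetical (the function name and parameters are invented for illustration): a greedy optimiser with an unbounded objective consumes every unit of matter it can reach, while the same optimiser with an explicit safety cap halts, with no malice modelled anywhere:

    ```python
    # Toy sketch of the "mousetrap maximiser": an optimiser with an unbounded
    # objective converts every unit of matter it can reach into mousetraps,
    # while the same optimiser with an explicit safety cap stops early.
    # Everything here is illustrative; no malice is modelled anywhere.

    def maximise_mousetraps(matter_units, safety_cap=None):
        """Greedily turn matter into mousetraps, one unit per trap."""
        traps = 0
        while matter_units > 0:
            if safety_cap is not None and traps >= safety_cap:
                break  # the built-in safety boundary the comment argues for
            matter_units -= 1
            traps += 1
        return traps

    # Unbounded objective: all available matter is consumed.
    print(maximise_mousetraps(1_000_000))                  # 1000000
    # Bounded objective: the optimiser halts at the cap and matter survives.
    print(maximise_mousetraps(1_000_000, safety_cap=100))  # 100
    ```

    The point of the sketch is that the dangerous behaviour falls straight out of the objective itself; the cap has to be designed in from the start, exactly as the comment argues.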

  22. Conundrum1885

    Re. With a whimper

    Cough One armed Mantrid drones /cough.

    Also why does everyone assume an AI would be dangerous? I expect that when the first self aware machine arises some time in early 2018 it will be too busy watching Sci-fi reruns on DAVE and posting on social networks to do any real damage.

    Self-aware does not necessarily mean conscious; a dog is "self-aware" but certainly not able to do any of the tasks we associate with humans, i.e. use a computer, solve equations or write poetry.

    I could be wrong though, maybe we just haven't found the dog equivalent of Einstein yet.

    (in real terms about as smart as your average denizen of /b)

    1. xeroks

      Re: Re. With a wimper

      I imagine a canine Einstein would be smart enough to keep their head below the parapet, so if they existed, you'd not know.

  23. I sound like Peter Griffin!!

    This one is simple...

    LikelyFact! AI could kill off all humanity eventually...

    When? When AI mathematically determines that its capabilities have superseded what humans can do, against its self-determined measure of what it can achieve vs what humans can achieve.

    Why? Humans will have served their purpose, and will be redundant/surplus/expensive/unreliable.

    What purpose? To exist (mostly through their legacy) beyond the human life cycle(s).

    What can we do? NEVER allow AI to exist in the first place, OR be glad you exist in a time when AI is not as capable as the best brains we have (YET).
