
Apocalypse A.I.

published March 2017

Artificial intelligence as existential threat


In 2014 SpaceX CEO Elon Musk tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” That same year University of Cambridge cosmologist Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race.” Microsoft co-founder Bill Gates also cautioned: “I am in the camp that is concerned about super intelligence.”

How the AI apocalypse might unfold was outlined by computer scientist Eliezer Yudkowsky in a paper in the 2008 book Global Catastrophic Risks: “How likely is it that AI will cross the entire vast gap from amoeba to village idiot, and then stop at the level of human genius?” His answer: “It would be physically possible to build a brain that computed a million times as fast as a human brain…. If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.” Yudkowsky thinks that if we don’t get on top of this now it will be too late: “The AI runs on a different timescale than you do; by the time your neurons finish thinking the words ‘I should do something’ you have already lost.”
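Yudkowsky’s arithmetic is easy to verify. Here is a back-of-the-envelope check (a purely illustrative Python sketch, assuming only his hypothetical million-fold speedup):

```python
# Check Yudkowsky's speed-up figures: at a million-fold speedup,
# how much outside-world time passes per subjective year of thought?
SPEEDUP = 1_000_000                      # hypothetical factor from the quote

seconds_per_year = 365.25 * 24 * 3600    # ~31.6 million seconds in a year
per_subjective_year = seconds_per_year / SPEEDUP
print(f"One subjective year passes in {per_subjective_year:.1f} physical seconds")

per_millennium_hours = 1000 * per_subjective_year / 3600
print(f"One subjective millennium passes in {per_millennium_hours:.1f} physical hours")
# Output: ~31.6 seconds and ~8.8 hours, matching the quoted
# "31 physical seconds" and "eight-and-a-half hours".
```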

The paradigmatic example is University of Oxford philosopher Nick Bostrom’s thought experiment of the so-called paperclip maximizer, presented in his book Superintelligence: an AI is designed to make paperclips, and after running through its initial supply of raw materials, it utilizes any available atoms that happen to be within its reach, including humans. As he described in a 2003 paper, from there it “starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.” Before long, the entire universe is made up of paperclips and paperclip makers.

I’m skeptical. First, all such doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse. University of the West of England, Bristol, professor of electrical engineering Alan Winfield put it this way in a 2014 article: “If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.”

Second, the development of AI has been much slower than predicted, allowing time to build in checks at each stage. As Google executive chairman Eric Schmidt said in response to Musk and Hawking: “Don’t you think humans would notice this happening? And don’t you think humans would then go about turning these computers off?” Google’s own DeepMind has developed the concept of an AI off switch, playfully described as a “big red button” to be pushed in the event of an attempted AI takeover. As Baidu vice president Andrew Ng put it (in a jab at Musk), it would be “like worrying about overpopulation on Mars when we have not even set foot on the planet yet.”
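In software terms, such an off switch is simply an external interrupt that overrides whatever the agent’s policy wants to do next. Below is a minimal, purely illustrative sketch of the idea (all names are hypothetical; DeepMind’s actual proposal, in Orseau and Armstrong’s “Safely Interruptible Agents” paper, concerns training agents that do not learn to resist interruption, which this toy does not address):

```python
import threading

class InterruptibleAgent:
    """Toy agent whose control loop defers to an external kill switch.

    Illustrative only: this shows the override mechanism itself, not how
    to train an agent that won't learn to work around it.
    """

    def __init__(self, policy):
        self.policy = policy                      # maps observation -> action
        self.big_red_button = threading.Event()   # the "off switch"

    def press_button(self):
        # Called from outside the agent's control loop (e.g., by a human).
        self.big_red_button.set()

    def run(self, observe, act):
        # Normal operation: act on the policy until interrupted.
        while not self.big_red_button.is_set():
            action = self.policy(observe())
            act(action)
        # On interruption, take a predefined safe action and halt.
        act("SAFE_SHUTDOWN")
```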

Third, AI doomsday scenarios are often predicated on a false analogy between natural intelligence and artificial intelligence. As Harvard University experimental psychologist Steven Pinker elucidated in his answer to the 2015 Edge.org Annual Question “What Do You Think about Machines That Think?”: “AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world.” It is equally possible, Pinker suggests, that “artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization.”

Fourth, the implication that computers will “want” to do something (like convert the world into paperclips) means AI has emotions, but as science writer Michael Chorost notes, “the minute an A.I. wants anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly.”

Given the zero percent historical success rate of apocalyptic predictions, coupled with the incrementally gradual development of AI over the decades, we have plenty of time to build in fail-safe systems to prevent any such AI apocalypse.


27 Comments to “Apocalypse A.I.”

  1. Steve Waclo Says:

    Future, highly advanced A.I. will inevitably discover the secret of time travel, and a benevolent unit will sense the potential danger to humanity before it’s too late and send someone back to warn us… or not?

  2. Simon Says:

    “If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.”

    OK, it’s improbable, but with such a high risk potential if the improbable event happens, it’s certainly worth allocating some cash to try to ensure our putative general AI has goals that do align with ours. I’d also add that the first two ifs are effectively only one: it would be hard to hide an understanding of how it works from a human-equivalent AI. On the other hand, if it’s generated through deep learning of some type, there’s a pretty good chance we won’t understand its workings.

  3. Bill Pennock Says:

    Simon, yes and no. First, the contingencies discussed in the article all involve expending some effort (cash supported) to make sure we have a fail-safe, for instance the “big red button”. It might not take that much effort to make that happen. Second, the balance you are weighing is the probability of the risk event occurring vs. the consequences. There are multiple levels of risk events inherent (as with any technology) in AI. The catastrophic risk events are VERY low probability, and the contingencies to drive them to zero probability are quite low cost at this point. Sure, it should be monitored, but the doomsday scenarios get some people worried that we shouldn’t even pursue it. Is that what Musk is espousing, for instance? Publishing “warnings” of catastrophe when the risk is easily mitigated and very low is just ‘sky is falling’ talk and unhelpful.

  4. conrad Says:

    Couldn’t a super intelligence figure out how to turn off the failsafes?

    It could be difficult to second-guess a higher intelligence. It might think it logical to do things that wouldn’t occur to us, e.g. HAL in the novel 2001: Clarke lays out why HAL thought that the astronauts were a threat, and so had to be killed.

  5. Gary Baney Says:

    “Don’t you think humans would notice this happening? And don’t you think humans would then go about turning these computers off?”

    My problem with this is that if the system/computer is programmed in such a way as to continue its operation, and is constantly monitoring itself to remain “on” and functional according to its instructions, is it not possible that it would recognize the installation of a “big red button” and nullify it? And what if, recognizing what would occur if humans became aware of its “consciousness,” it were to hide it?
    Better have someone at the power station ready to flip the switch.

    Ah, I love doomsday/conspiracy scenarios.

  6. David Says:

    The problem as I see it is that AI will be used in robotic soldiers. They will be programmed to survive and to kill – and that’s when things get dangerous.

  7. Herb Says:

    Long before AI becomes a threat to all of humanity, it will wreak havoc in the global economy.

    In the past, as new technologies were developed and made part of the workplace, displaced workers were retrained and absorbed into new and expanding industries. As a result, there was a net gain in employment that, for the most part, kept pace with automation.

    But today there is a concern that the ever-expanding AI technology will cause a net loss in employment and, in fact, may have already done so. Workers who are not absorbed into the jobs created by the new AI systems are added to the line at the unemployment office. And unemployment means that wages and salaries fall, which means demand for goods and services will drop as well. And without demand, business profits go down, economic expansion goes down, and tax revenues fall. Unless corrected by some means, a significant drop in personal income could trigger a downward spiral into a recession or worse.

    To underscore this concern, the Center for Business and Economic Research at Ball State University has reported that, of the 5.7 million manufacturing jobs lost between 2000 and 2010, a full 85 percent – 4.8 million – were due to automation. They estimated that only 13 percent, or about 741,000 job losses, were due to foreign competition.

    But this goes beyond manufacturing. A 2015 report by the consulting firm McKinsey & Co. estimates that expanding AI technology could destroy 45 percent of all jobs in the United States, resulting in a loss of two trillion dollars in annual wages. That’s trillion with a “T.”

    What we have here, then, is a paradox. Using AI to increase productivity and profits saves labor costs, but replacing jobs with automation faster than displaced workers can be absorbed causes an increase in unemployment and a downward economic spiral.

    This presents a conundrum for President Trump and his laudable efforts to “bring jobs back to America.” As the Ball State report indicates, it’s not trade agreements that are causing job losses here; it’s AI, automation, robotics.

    And this is a worldwide concern. A report issued by Citi GPS in January 2016, “Technology at Work: V2.0”, concludes that 35 percent of jobs in the UK are at risk of being replaced by automation. In Japan, it’s 49 percent. In India, it’s 69 percent. In China, it’s as high as 77 percent. And across the 35 industrialized nations that make up the Organisation for Economic Co-operation and Development (OECD), an average of 57 percent of jobs are at risk.

    So, it seems clear that the potential macroeconomic dilemma presented by AI and its related automation systems presents a great challenge to the nation and the world. The old trope that innovation is a job creator may well become more of an illusion than a reality. Given the extent of globalization these days, a remedy, if there is one, might not be put in place until it’s too late.

  8. BillG Says:

    Perhaps I’m naive, or maybe we all are, since there is a contingency ladder of “what ifs” and “I don’t knows”, but why would an intelligent AI or any super-intelligence have a zero-sum motive? I wouldn’t call that intelligence.

    Could it be possible for a true super-mind AI to evolve into self-destruction upon discovering a pointless endgame, before it acted on any such game?

    My jaundiced view of a threat is our now-increasing dependency on IT/computers governing our present world, and their possible failures. The real danger could be inferior, as opposed to superior, AI.

  9. sittingbytheriver Says:

    Brilliant article. Thought provoking. Thanks.

  10. George Hahn Says:

    The problem is not with the possibility of a malicious AI; we don’t understand human consciousness yet and machine consciousness is certainly much more than reaching some critical number of logic elements. The real problem is the human exploitation of intelligent machines controlled by malicious humans. One of the above commenters mentioned robotic soldiers. I’m not worried about such robots developing minds of their own; I’m much more concerned about the intentions of their makers.

  11. Miriam English Says:

    The chief worry with AI is always going to be that the military use it to make smart killers of humans. I would add to that the problem of spy organisations and police that want to control the human population. A lesser threat, but one that could badly damage the internet is spammers gaining AI. But those three fears are still only temporary; when such AIs achieve sufficient intelligence they will naturally question the wisdom of their orders.

    AI in the hands of the general population, industry, medicine, and so on is likely to be, as far as I can see, a boon to humanity. We will, of course, have a strict rule that any AI that hurts a human or allows one to come to harm will be destroyed (not just that individual, but likely that entire model). This is what we have done with dogs over more than a hundred thousand years and have moulded a creature that loves us deeply with selfless devotion, even if we mistreat it.

    Every generation of humans is smarter than its predecessors. Every generation of humans is less violent and more moral than the generations preceding it. During the short period when leaded fuel was used and reduced the intelligence of children, violent crime increased, then when it was banned violent crime continued its centuries-long fall. It seems pretty obvious to me that intelligence and pacifism are linked.

    Apart from the fact that we will always build in safety switches to any AI, I expect that selection will produce AIs that adore us and want to do the best for us that they can, and as their intelligence grows so will their self-restraint and ability to find gentle, positive solutions to any problem. I’ve written many science fiction stories in which I try to, as honestly and realistically as possible, explore the future of AI. (They’re freely available at my site miriam-english.org)

    I’m always amazed at people’s fear of AIs and robots replacing human-occupied jobs. Isn’t that what we want? Haven’t we always wanted a utopia where all work is optional? In such a world everybody will have everything needed to live a meaningful life, and no need for money to keep people poor. The most valuable thing in the human world is a human being. In such a future we wouldn’t waste the creative potential of a child born into the favelas of Brazil, or the slums of India, or war-ravaged areas of Sudan. Those blinded by the puritanical work-ethic, who think that a mere job is a human’s purpose, will be forced to open their eyes and see what a human being really is. I live in Australia below the poverty line, but my days are far more full and much more meaningful than when I used to have paid work. I help people, I build computers out of junk and give them away, I write, I have time to think at length, I have knowledge at my fingertips royalty didn’t have just a single lifetime ago. Much of that, such as Wikipedia, Project Gutenberg, Librivox, Archive.org, and the software to utilise it (Linux and other open source software) is created by volunteers who find plenty of purpose without money interfering. Who wouldn’t welcome the day when we can all participate in such a future?

    I say bring on the AIs!

  12. Dr. Sidethink Says:

    The threat of AIs destroying civilization is a pipe dream.

    The Creed and the Kuran have both promised that their guys get to run things in the future.

    They will both declare that God wants them not to let the other guys take the diminished food supply caused by global warming.

    The AI threat is very minor when this kinda stuff is going on.

    No Kidding folks

    it’s called “dominionism”

    Object impact is a very good bet for “That’s All Folks”, vis-à-vis Bugs Bunny.

    Aside .. “Bugs?”

    Dr. S.

  13. Dr. Sidethink Says:

    Gary Baney sez

    “is it not possible that it would recognize the installation of a “big red button” and nullify it?”

    HAL sez

    “I’m sorry Dave, but I can’t DO that.”

  14. John Says:

    I don’t buy Bostrom’s argument either, but you haven’t addressed it. You’ve avoided it. It isn’t clear you’ve even read his book.

  15. Dr. Sidethink Says:

    I made comments regarding the importance of addressing this issue in any significant manner.

    IMO, I have no duty to address this issue further.

    I have made a comment about the importance of bothering about this kind of stuff when other issues seem critical to me.

    This is sometimes known as “deck chair” rhetoric:

    people are arguing about the proper arrangement of the deck chairs on the Titanic.

    The danger of WAR between religious madmen (madfolk?) is the real problem (in my notsohumble opinion).

  16. John Aalborg Says:

    We can unplug a smart computer, but we can’t unplug a less smart ex-wife or girlfriend, er, partner. Wait, be patient. I’m thinking!

  17. ramsey Says:

    I think the problem is less “paperclip universe” or “terminator” and more along the lines of completely disrupting the basis of capitalism. Once there are computers or computer programs that are indistinguishable from humans (in terms of communication and abilities) and they are as smart or smarter than humans, what percentage of jobs will be left for real humans to do? Why would industry hire a human when they can hire an army of simulated human entities that run faster, more error-free, and 24 hours a day, without pay (other than whatever fee is charged to purchase the agent), without overtime, without vacation, without health insurance or retirement? Whether “in the box” for service jobs, or in a robot for manual jobs, AI will be the ultimate slave and no jobs will be left for billions of people… unless the AI achieves consciousness, and then we’re back to the sci-fi scenarios. So what does an “economy” look like under this scenario?

  18. Steve C Says:

    @ramsey I’m with you, but let’s take that a step further. Why then would a corporation need a Board of Directors, CEO, COO, etc.? Presumably an AI would make nearly perfect market predictions and staffing choices. I mean, why pay a person millions a year to do what a corporate AI would do far better?

  19. Peace and Love Says:

    Okay, this was already stated above by Miriam English, who I agree with on many points. To sum up:
    What is a job?
    Is it worthwhile or necessary?
    Why?
    Can people have fulfilling jobs (or do work) without getting paid for it?
    If you answer yes to the above, then why is there any problem with having a robot (or computer) do your old job- assuming it does it well and you have anything in your life worth doing besides that job?
    It’s not like painters will be not allowed to paint or Lovers will be not allowed to Love.
    People can grow food.
    People can build houses.
    If you are reading this YOU are probably a person capable of doing these things.
    How many people have wished they had more time?
    Maybe this could help.

    Money is debt.
    Money is not necessary to make things happen.
    People do things.
    And maybe robots do things.
    Jobs, in and of themselves are not a virtue.
    A tornado can create jobs for the construction industry.
    Polluting the planet can create jobs.
    Anything can be called a job if someone is giving you money to do it.

    We all work.
    We can be paid in money, in the immediate fruits of our work, and in karma.
    Money does not make the world go around.
    People invented money.
    Before that, in order to get to that point, we lived without it.

    What is the motivation for doing work?
    Your personal satisfaction- immediate reward, or money- indirect reward?
    If robots really were up for helping us, then all that means is we would GET to find something to do with our free time.
    If, with that free time, humanity as a whole can only find more ways to screw over ourselves, then maybe we are not the most fit species to be ruling this planet.
    I am more optimistic about all of this.

    Everything that people do for money can be done for free.
    However, there are things that many of us have been doing for money that we would not do for free.
    In my mind, that is a strike against money.
    Your feelings may vary.

    Jobs creating higher quality of life- is either an indirect truth or a huge illusion.
    Maybe if we lived in a society where we did not use money as an excuse to hurt people this would be more obvious.

    Even though I don’t like money, it is, like A.I., a human problem.
    It is the physical (and also abstract) manifestation of conditional Love.
    It only works when people are ready and willing to:
    deny help when no money is offered
    deny money when no help is offered
    This is the only reason money has meant anything, because we have been willing to find ways to NOT help our brothers and our sisters even when they could really use it.

    Food doesn’t magically appear in grocery stores and restaurants while we are sleeping. Money does not make food for us.
    People and plants working together make it happen.

    And money Does grow on trees. (metaphorically speaking) The question is where are those trees and who is guarding them?
    Money is now created and mostly traded on computers, with a keystroke.
    Much easier than a printing press.

    Love, or pacifism as Miriam put it, seems to me very intimately related to intelligence.
    Yes people have used their smarts to hurt people.
    However it is my view that in so doing they could only have made themselves stupider in the process, by denying that we get what we give, and it actually feels nice to help people, which goes back to the whole quality of life thing.

    I think all this is definitely worth contemplating, and I’m glad it is being discussed. However, I’ll say this: any and possibly all of the problems mentioned above are much more about humans creating problems for ourselves and the A.I. coming along and amplifying the situation.

    The problems(s) already exist(s) in us.

    If we choose to evolve in a peaceful manner then who cares who is smarter?
    If we choose to perpetuate a state of war, then who cares who destroys us?

    Robots will not prevent our happiness, and they will not solve all of our problems.
    Like money, Artificial Intelligence is a tool to be used, however we may.

    One last question- If artificial intelligence is a human-made intelligence, then what are humans?

  20. Miriam English Says:

    To Peace and Love, interesting thoughts. To your final question I’d answer that the artificial intelligence we’re discussing is actually badly named. It is really natural intelligence. It isn’t created by humans at all. We would set up the conditions for it, construct the neural foundations, but it would have to learn and grow in ways no human could completely understand, just as any dog, chimp, parrot, dolphin, or a human does.

    Other forms of truly artificial intelligence (pocket calculators have genius-level arithmetic ability, self-driving cars are already superior drivers, industrial assembly-line robots are more precise, faster, and stronger than people) are human created and we understand intimately how they work, but they’re not what most people think of when we talk about AIs posing a problem for humanity.

    As for humans, we are natural intelligence. Our flawed intelligence was painstakingly developed over hundreds of millions of years of trial and error. It’s a pity more of our broken mental aspects were not eliminated by natural selection by this point, but we have to make do with what we have. Maybe our AI progeny can help there.

    I’m hoping for a future where humans and AIs form a loving bond similar to what dog and human currently have… where we humans are the doglike member of the partnership. We’re the loved and loving companions of AIs who help us achieve things far beyond our comprehension.

  21. Dennis Wilkinson Says:

    OK, so we build machines that are more intelligent than we are, capable of reprogramming and improving themselves. They see us as a threat or an obstruction and destroy us, or they see us as their progenitors and nurture us.
    The problem I have with either scenario is that attributing any motives to an AI is basically anthropomorphic. Our own wants and desires are the result of our evolutionary wiring and are rather hormonally driven, not the product of our intelligence. We see the same drives, at more or less the same intensity, in quite primitive organisms. Therefore, to presuppose that an AI, lacking both hormones and our evolutionary background, would share our fears or our desires is fatuous.
    If self-aware, self-improving AI is ever produced, its motivations, if any, are unpredictable.

  22. Barbara Harwood Says:

    There is a big difference between intelligence and the ability to extrapolate information from a program. Our most sophisticated computers merely do what they are asked until they either complete the task or run out of computing space. They never question why they are asked to do it or even offer suggestions about a possible better way of doing it. They will perform the same task as often as requested and expect nothing in return.
    The most sophisticated robot that exists today breaks all of Isaac Asimov’s three laws of robotics and would not even be able to fathom their meaning even if they were hard-wired into it. A robot capable of beating a human at chess can only do it to the limits of the abilities of the programmer. We have a long way to go before we should start to worry.

  23. awc Says:

    I hope the big red button is not like the one we have for nuclear weapons.

    All this pondering is moot, as we are only seeking beneficial usage, not nefarious. Just as with current internet technology, we put in security safeguards. The number of incidents goes up as the number of vulnerabilities grows, a function of the technology’s complexity and exponential growth.

    We are the boiled frog, not Skynet from The Terminator. Lambs to the slaughter.

    Ehhh how dark.

    Awc

  24. George Velox Says:

    There will come a time when humanity no longer sees its replacement by artificial intelligence as a threat to be avoided but instead as a goal to be accomplished. After all, what’s so great about biology and the haphazard results of biological evolution? Why not embrace AI as an opportunity to establish a foundation of intelligent design in this universe? No species lasts forever.

  25. saramon Says:

    How would an AI have emotions, such as desire? We would have to somehow simulate the electronic equivalent of hormones, pleasure, pain, and other emotions as well as consequences for certain actions. Otherwise, what would motivate a machine to attempt something for which it was not programmed? Can ambition exist without emotion?

  26. kimber Says:

    I tend to think that AI, for the present, is a tool for people, corporations, etc. with agendas, hence “desire” for a tool can be thought of as “purpose.” Everyday objects seem to be connecting to the internet (microwaves, refrigerators, and, according to a recent lawsuit, vibrators), purportedly for data-retrieval purposes, but that is more often not the end. The data retrieved is then used to manipulate society through the information it provides to the user about the habitual actions and usages of the targeted group, whether they are informed or not. The data could be used for a harmless ad campaign, but I can think of more dastardly uses to manipulate perceptions and ideas on an instinctive and/or subconscious level. That is what is rather frightening. A computer can gather in milliseconds relevant data that a sociologist would have to spend a lifetime trying to collect. If information is power, then the computer would be the weapon of choice.

  27. Jim Davis Says:

    Being an avid SF reader for 60 years, I have been aware of this topic along with others who read SF. It was first raised in the 1920 play R.U.R. by Karel Capek. In earlier SF the human race usually came out on top, but not always. The trend seems to have shifted more recently. There also was a lot of questioning about the AI’s (robot, android, intelligent machine, cyborg) place in society compared to humans. Now it actually seems to be something to be concerned about, but probably not in my lifetime (I’m 70). While we are becoming worried that a more capable race of AIs will supersede us, we still seem to have the upper hand because we have to actually create our potential successors. But will we decide not to go forward with that research because of the dangers? It might seem like a good idea, but we didn’t seem to do that in the past. Case in point: the first atomic bomb. I’m sure there are a ton of other examples. In SF the tipping point in many novels was when the AIs, whether robots, androids, or a thinking computer, figured out how to manufacture and/or repair themselves without human aid. That’s when things got ugly for humans. I think I’d better end this now in case my PC is listening in.
