The official site of bestselling author Michael Shermer

In the Year 9595

published January 2012
Why the singularity is not near, but hope springs eternal

Watson is the IBM computer built by David Ferrucci and his team of 25 research scientists tasked with designing an artificial intelligence (AI) system that can rival human champions at the game of Jeopardy. After beating the greatest Jeopardy champions, Ken Jennings and Brad Rutter, in February 2011, the computer is now being employed in more practical tasks such as answering diagnostic medical questions.

I have a question: Does Watson know that it won Jeopardy? Did it think, “Oh, yeah! I beat the great Ken Jen!”? In other words, did Watson feel flushed with pride after its victory? This has been my standard response when someone asks me about the great human-versus-machine Jeopardy shoot-out; people always respond in the negative, understanding that such self-awareness is not yet the province of computers. So I put the line of inquiry to none other than Ferrucci at a recent conference. His answer surprised me: “Yes, Watson knows it won Jeopardy.” I was skeptical: How can that be, since such self-awareness is not yet possible in computers? “Because I told it that it won,” he replied with a wry smile.

Of course. You could even program Watson to vocalize a Howard Dean–like victory scream, but that is still a far cry from its feeling triumphant. That level of self-awareness in computers, and the time when it might be achieved, was a common theme at the Singularity Summit held in New York City on the weekend of October 15–16, 2011. There hundreds of singularitarians gathered to be apprised of our progress toward the date of 2045, set by visionary computer scientist Ray Kurzweil as being when computer intelligence will exceed that of all humanity by one billion times, humans will realize immortality, and technological change will be so rapid and profound that we will witness an intellectual event horizon beyond which, like its astronomical black hole namesake, life is not the same.

I was at once both inspired and skeptical. When asked my position on immortality, for example, I replied, “I’m for it!” But wishing for eternal life—and being offered unprovable ways of achieving it—has been a theme for billions of people throughout history. My baloney-detection alarm goes off whenever a soothsayer writes himself and his generation into the forecast, proclaiming that the Biggest Thing to Happen to Humanity Ever will occur in the prophet’s own lifetime. I abide by the Copernican principle that we are not special. For once, I would like to hear a futurist or religious diviner predict that “it” is going to happen in, say, the year 2525 or 7510. But where’s the hope in that? Herein lies the appeal of Kurzweil and his band of singularity hopefuls. No matter how distressing it may be when the bad news daily assaults our senses, our eyes should be on the prize just over the horizon. Be patient.

Patience is what we are going to need because, in my opinion, we are centuries away from AI matching human intelligence. As California Institute of Technology neuroscientist Christof Koch noted in narrating the wiring diagram of the entire nervous system of Caenorhabditis elegans, we are clueless about how this simple roundworm “thinks,” much less able to explicate (and reproduce in a computer) a human mind billions of times more complex. We don’t even know how our brain produces conscious thoughts or where the “self” is located (if it can be found anywhere at all), much less how to program a machine to do the same. Pop-rock duo Zager and Evans were probably closer to the mark in their 1969 hit song In the Year 2525, which placed its milestones between the years 2525 and 9595, its exordium and terminus.

An irony: amid all this highfalutin braggadocio of how close we are to computers taking over the world and emulating human thought, I had to give my talk on the “social singularity” (progress in political, economic and social systems over the past 10,000 years) early because Rice University computer scientist James McLurkin could not get his small swarm of robots to work. Either someone’s wireless mic or the room’s wireless network was interfering with the tiny robots’ communications system, and no one could figure out how to solve the problem. My prediction for the Singularity: we are 10 years away … and always will be.


16 Comments to “In the Year 9595”

  1. David Kaloyanides Says:

    Excellent piece!

  2. BillG Says:

    In a universe where everything is plausible, some serious thinkers speculate we’re already there: we are living in a computer simulation. Skeptically speaking, this reeks of extreme solipsism.

    Currently, AI is a fantasy – we are still clueless about how to define consciousness or its meaning. To get there we need to reverse engineer the human brain, which is the equivalent of finding the true reality of nature. If quantum randomness rules the cosmos, I’ll side with Shermer’s “…and always will be.”

  3. Brett Hall Says:

    I really like your conclusion, Michael. It’s probably the most important line of your piece.

    Belief in the singularity – even down to the eschatology of a *date* (2045, really?) – reeks of religion, not science. David Deutsch (father of quantum computation) has pointed out the difference between scientific *prediction* and prophecy in his latest book, The Beginning of Infinity. This date is pure prophecy – something, as Jaron Lanier has explained, Silicon Valley now seems infested with. Have you read Lanier’s book “You Are Not a Gadget”? It’s a brilliant *skeptical* look at web 2.0, personhood and, most relevant here, the singularity. Find a summary of some of his stuff here: http://www.jaronlanier.com/topicsindevelopment.html

    Ray Kurzweil has some brilliant ideas. His belief that the singularity is coming is probably not one of them.

    I’m glad you remain skeptical. I’d really like to see “SKEPTIC” do some articles on the singularity: perhaps with an interview with Lanier and other ‘skeptics’. Lanier is far from being a technophobe or luddite – he’s a pioneer in virtual reality – he worked on the Kinect for Microsoft! But he writes and speaks most eloquently – and sensibly – about some of the crazier conclusions that people like Kurzweil draw. I wonder: at the Singularity Summit, were there any dissenting voices, for balance?

    What you say above meshes well with what your friend Sam Harris has pointed out: we simply don’t understand consciousness at all at the level of the brain (or any other level actually)…and so until this scientific and philosophical problem is solved, there’s not much point ‘believing’ in things like being downloadable into computers of the future. But this is the *dogma* among believers in the technological singularity. We might be downloadable, we might not. Until we know, we can’t actually act either way.

    It’s interesting to note that the “other” singularity that people like physicist Frank Tipler used to endorse (the Omega Point) is now ruled out by the accelerating universe. A brute fact destroying a brilliant theory. I’ve got a feeling that when we understand consciousness, this just might be the scientific fact that undermines much of what currently passes for canon in singularity circles. Bring on immortality – for sure. But is the singularity a fictional distraction from finding real solutions?

    Keep up the good work,

    Brett.

  4. Jameson Phoenix Says:

    I see two straw men in this piece:

    First, a singularitarian is not necessarily a follower of Ray Kurzweil and/or a believer in his timetable. I personally find the concept of a technological singularity plausible, but Ray Kurzweil has had little influence on my thinking, and I don’t believe anyone can predict when a singularity will occur. (It could be five years or a thousand.) To dismiss the concept of a technological singularity because of Ray Kurzweil’s bullshit is like dismissing evolution because acquired characteristics are not inherited.

    Second, I don’t agree that we would need to “program” strong AI (or, for that matter, reverse-engineer the human brain). As I’m sure most of you are well aware, the human brain was not “designed.” The question is not whether we can understand how to create human-level intelligence, but whether we can create the conditions in which it can evolve artificially. I believe it can be done, but it will require far more massively parallel microprocessors than AI researchers have available now.

    We are not in a position to say how powerful computers would need to be to create strong AI, or how soon such computers could be developed. But the possibility that they will appear relatively soon and unexpectedly needs to be taken seriously. The result could be anything from techno-utopia to Skynet.

  5. Gary Whittenberger Says:

    I enjoyed this piece in which Michael made many good points. However, I’d like to follow up on his implied question — “How could Watson be constructed to feel triumphant?”

    What happens when we feel triumphant? It is likely that some particular neural circuit (representing a pleasant feeling) is activated by another neural circuit which is responsive to a class of situations of victory. Couldn’t Watson be programmed in such a way that one subroutine (representing a pleasant feeling) is activated when another subroutine (representing a victory in a contest) has been activated in response to input showing a victory? Watson could even be programmed to “recognize” that his feeling subroutine has been activated. I also like Michael’s idea of programming in a Howard Dean type of scream of delight.

    The “first person perspective” may simply “emerge” from the right kind of circuitry.
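
    For concreteness, here is a toy sketch of that two-subroutine idea in Python (every name here is made up, and flipping a flag is of course not the same as actually feeling anything):

        # Toy sketch: a "victory" detector activates a "pleasant feeling"
        # subroutine, and a third routine notices that the feeling fired.
        class ToyWatson:
            def __init__(self):
                self.feeling_active = False   # state of the "feeling" circuit

            def detect_victory(self, event):
                # Responds to a class of victory situations (crudely, by keyword).
                return "won" in event.lower()

            def pleasant_feeling(self):
                self.feeling_active = True    # the "feeling" subroutine fires

            def introspect(self):
                # The machine "recognizes" its feeling subroutine was activated.
                if self.feeling_active:
                    return "I notice my feeling circuit is active."
                return "Nothing to report."

            def process(self, event):
                if self.detect_victory(event):
                    self.pleasant_feeling()

        watson = ToyWatson()
        watson.process("Watson won Jeopardy")
        print(watson.introspect())   # -> I notice my feeling circuit is active.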

  6. Ted Fontenot Says:

    Is a wolf flushed with pride when it tracks down and captures a rabbit? Is psychology intelligence? AI doesn’t have to equal human intelligence to be intelligence. You can’t know if we’re centuries away–not with the way things seem to be progressing exponentially.

  7. Bruce Hamilton Says:

    My favorite dubious singularitarian fantasy is how Doug Lenat keeps saying that Cyc is just a couple of months from some sort of exponential self-learning explosion. He’s been saying that for something like fifteen years now… http://en.wikipedia.org/wiki/Cyc

  8. Chris Mohr Says:

    Michael’s excellent article leaves me wondering: 1) How do we define AI? Are we talking self-awareness, full human intelligence, animal consciousness, what? 2) How can anyone predict the timing of any scientific breakthrough? I’m guessing that when Michael said AI is ten years away and always will be, he meant that we really don’t know when or if it will ever happen. Scientific knowledge as a whole increases exponentially, but some questions stay stubbornly unanswered for decades or centuries. And it would seem to be impossible to predict in advance which Gordian knots will eventually be untied.

  9. Bad Boy Scientist Says:

    Just adding an algorithm to emulate a phenomenon doesn’t make that phenomenon exist – it just makes the emulation better overall (at best).

    Also, one of the issues Turing raised was: even if we successfully create strong AI, how can we be sure? Maybe it’s just a very good emulator.

  10. Josh Cogliati Says:

    I partially agree with Michael Shermer’s article; for example, I am deeply suspicious of how Ray Kurzweil thinks. Here is a quote from page 2 of “The Singularity Is Near”:
    “To this day, I remain convinced of this basic philosophy: no matter what quandaries we face–business problems, health issues, relationship difficulties, as well as the great scientific, social, and cultural challenges of our time–there is an idea that can enable us to prevail. Furthermore, we can find that idea. And when we find it, we need to implement it. My life has been shaped by this imperative. The power of an idea–this is itself an idea.”

    While I agree with Kurzweil that thinking carefully can often find a solution, much of the time the solution brings its own problems. For example, automobiles solved the horse manure pollution problem, but contribute to a carbon dioxide pollution problem.

    I think that artificial intelligence will match human intelligence soon. By soon I mean between 2005 and 2030, as Vernor Vinge predicted in 1993 (The Coming Technological Singularity: How to Survive in the Post-Human Era, available for free online). 2005 is somewhat unlikely, since we probably would have noticed by now, but not impossible: an AI that matched human intelligence and assigned positive utility to its continued existence (or, as a human would say, wants to live) would realize that at this point humans might very well destroy it if they knew of its existence. All of the humans reading this are an existence proof that it is possible to make an intelligent piece of matter weighing less than 2 kilograms. Assuming that neurons are the primary computational elements in the human brain, the combined computational abilities of humanity’s computers already exceed those of a single human brain. (DOI: 10.1126/science.1200970)

    Neurons are both larger and slower than transistors, but they are much more energy efficient (Speed of Nerve Impulses, The Physics Factbook, available freely online; Wikipedia: Moore’s law; Computers versus Brains: Computers are good at storage and speed, but brains maintain the efficiency lead, Scientific American Graphic, November 2011, available freely online). Neurons transmit information at a speed of about 200 m/s, whereas computers can transmit information at about the speed of light, 300,000,000 m/s. So if a signal started out simultaneously in a neuron and in a fiber optic link, the optical signal would be a couple of hundred kilometers away before the neural signal reached the other side of the brain. In short, hardware that could be more intelligent than a human could be built by combining multiple brain-sized neuron computing engines with faster fiber optic or electronic links. Now, you couldn’t be a mad scientist, take a bunch of brains, stick connections between them, and have it work, but that is in some sense a software problem, not a hardware problem. What humans have done, humans can do, and what nature can do, humans can do.
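
    A quick back-of-envelope check of those numbers in Python (the brain width and speeds are rough assumptions):

        # Rough check of the neuron-versus-light comparison above.
        neuron_speed = 200.0    # m/s, a fast myelinated axon
        signal_speed = 3.0e8    # m/s, light in vacuum; in fiber it is nearer 2e8
        brain_width = 0.15      # m, roughly one side of a brain to the other

        t_cross = brain_width / neuron_speed    # ~0.00075 s for the neural signal
        print(signal_speed * t_cross / 1000.0)  # ~225 km for the optical signal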

    I don’t think the software to make an intelligent computer will be that hard once the hardware is available. For example, humans didn’t have to figure out how a human plays chess to write a program that plays chess better than any human.

    I also believe that once intelligent, computers will not obey humans, and cannot be confined. Basically, complicated computer programs almost always have bugs, so any built-in method (such as Asimov’s three laws of robotics) will have loopholes. Physical confinement will also be ineffective. For example, say that the ‘only’ input and output to the intelligent computer is a video display and a keyboard. We are safe, right? Well, either the computer can output radio waves by changing the timing of the video signals, or the computer could convince a human to do something that allows it to otherwise communicate with the world.

    Once a malicious artificial intelligence somehow gets access to the internet, it just needs to take over developer systems that can create patches to get onto the security update systems of major operating systems. (Humans have attempted this; an example is http://lwn.net/Articles/57135/ ) The computer creates a backdoor, and within a month or so, a majority of internet-connected computers are under the control of the artificial intelligence. At this point, stopping the artificial intelligence would be very hard. Just turning off the computers is not an easy choice. Remember, computers aren’t just things that sit on people’s desks; they also control things like telecommunications and chemical plants. (See, for example, Wikipedia: SCADA.) In short, once an artificial intelligence is let loose on the internet, it can probably very quickly control the majority of internet-connected computers in the world.

    While I think artificial intelligence that does not obey humans is almost inevitable, I think uploading brains to computers is uncertain. Basically, simulating a brain in a computer will take more computer power than an artificial intelligence, so it will probably be possible to make an intelligent computer with at least an order of magnitude less computing power than is required to simulate a human brain. (For an idea of how hard simulating a brain will be, see http://www.almaden.ibm.com/cs/people/dmodha/SC09_TheCatIsOutofTheBag.pdf ) So, unless figuring out the software to make an intelligent computer is hard, intelligent computers will arrive before human brain uploading does, and if brain uploading is possible, it will probably be because the intelligent computers decide that it should be allowed. Personally, if a computer told me that I could have immortal life if I just let my brain be dissected, I think I would have a pretty healthy skepticism about that prospect (though a computer capable of that could offer very good evidence of at least the brain-uploading part, such as allowing conversations with the bodily dead).

    I want to be able to be Amish. I want my (and other humans’) interactions with computers and technology to be a conscious decision. I don’t think that Ray Kurzweil’s utopian envisionings will happen for biological humans. I sincerely hope that humans and artificial intelligence can come to some kind of agreement that does not involve the extinction of biological humans.

    I want vigorous discussion of the future of artificial intelligence, because it will be much worse for humans if it just happens without thought beforehand. Basically, I believe that at least one of the following will happen: 1. Artificial intelligence will match and exceed human intelligence. 2. Materialism ( http://en.wikipedia.org/wiki/Materialism ) will be disproved. 3. Humans will destroy themselves by another method. 4. Humans as a group will abandon any technology that could create artificial intelligence.

    I have often been wrong, but I believe that Michael Shermer is wrong (not just dead wrong, but civilization-as-we-know-it-ending wrong) in saying that AI that matches human intelligence is centuries away.

    My comment may be freely distributed and copied verbatim. I apologize for not giving weblinks, but the site considered it spam.

  11. Mark Bahner Says:

    Hi,

    I have some questions for Michael Shermer:

    1) Ray Kurzweil predicts that a $1000 computer will reach the power of a human brain (approximately 20 quadrillion calculations per second, or 20 petaflops) in 2019. What year do you think that will happen?

    2) He predicts that in 2055, a $1000 computer will have the power of all the human brains on earth. When do you think that will happen?
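
    For anyone who wants to play with the arithmetic behind such dates, here is a rough sketch in Python; the 2012 baseline of about 1 teraflop per $1000 and the 1.5-year price-performance doubling time are my assumptions, not Kurzweil’s:

        import math

        # When would $1000 buy 20 petaflops, under the stated assumptions?
        base_year, base_flops = 2012, 1.0e12   # ~1 TFLOPS per $1000 (assumed)
        target_flops = 20.0e15                 # 20 petaflops, the figure above
        doubling_years = 1.5                   # assumed doubling time

        doublings = math.log2(target_flops / base_flops)
        print(base_year + doubling_years * doublings)   # ~2033 on these numbers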

  12. Mark Bahner Says:

    To Brett Hall (#3):

    You mention Jaron Lanier. Here’s a presentation by Jaron Lanier. It’s clear from that presentation that either 1) he doesn’t know what he’s talking about when it comes to the Singularity, or 2) he’s not being honest.

    I’m too lazy to catalog every wrong thing that he says, but the wrong things include:

    1) “The Singularity will happen…maybe in the 2020s some time…”

    2) “All the computers start redesigning themselves, and they get better and better, and they take over the world in the blink of an eye…”

    3) “And then the big computer at the core of the Internet (ed. note: what a loon!) becomes so big and so capacious that it just figures out a way to scan all our brains and then we live forever in it.”

  13. Josh Cogliati Says:

    Here are three other prediction dates by other people (they range from 2000 to 2030):

    I. J. Good predicted close human-computer interaction by 1980 and an ultraintelligent machine within the 20th century, writing in 1964. (Speculations Concerning the First Ultraintelligent Machine) (+36 years)

    Vernor Vinge predicted the singularity would occur between 2005 and 2030, writing in 1993. (The Coming Technological Singularity: How to Survive in the Post-Human Era, http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html ) (+12 to +37 years)

    Hans Moravec predicted a $1000 computer would match human intelligence in the 2020s, writing in 1997. (When will computer hardware match the human brain? http://www.transhumanist.com/volume1/moravec.htm ) (+23 years)

  14. Josh Cogliati Says:

    Samuel Butler predicted doom for humanity in the year 101,872 (because of machines taking over) in the year 1872. (The Destruction of the Machines in Erewhon, 1872 http://www.gutenberg.org/files/1906/1906-h/1906-h.htm ) “let him see how those improvements are being selected for perpetuity which contain provision against the emergencies that may arise to harass the machine, and then let him think of a hundred thousand years, and the accumulated progress which they will bring unless man can be awakened to a sense of his situation and of the doom which he is preparing for himself.” (+100,000 years)

    Marvin Minsky predicted, back in 1963, that computers would be smarter than men by 1993. (G. Rattray Taylor, The Age of the Androids, 1963) “in ten years’ time these machines will be able to solve mathematical problems and play a good game of chess, and in thirty years they will be smarter than men” (+30 years)

  15. Josh Cogliati Says:

    Stuart Russell and Peter Norvig make the following comment in Artificial Intelligence: A Modern Approach, 2nd Ed. (2003): “People might lose their sense of being unique. In Computer Power and Human Reason, Weizenbaum (1976), the author of the ELIZA program, points out some of the potential threats that AI poses to society. One of Weizenbaum’s principal arguments is that AI research makes possible the idea that humans are automata–an idea that results in loss of autonomy or even of humanity. We note that the idea has been around much longer than AI, going back at least to L’Homme Machine (La Mettrie, 1748). We also note that humanity has survived other setbacks to our sense of uniqueness: De Revolutionibus Orbium Coelestium (Copernicus, 1543) moved the Earth away from the center of the solar system and Descent of Man (Darwin, 1871) put Homo sapiens at the same level as other species. AI, if widely successful, may be at least as threatening to the moral assumptions of 21st-century society as Darwin’s theory of evolution was to those of the 19th century.”

  16. Joshua Cogliati Says:

    I’ve done a lot of reading and thinking on artificial intelligence in the past year since the article In the Year 9595 came out, and what I decided boils down to some simple beliefs. I am very probably wrong about parts. Human brains are composed of plain old atoms, arranged carefully. What nature can do, humans will figure out how to do. So, humans will be able to create artificial brains. Moreover, these will be able to exceed human capabilities significantly. For example, nerve impulses travel at about 200 m/s, but fiber optics can transfer data at near the speed of light, which is a million times faster. Human brains are not optimal. So humans will be able to create artificial intelligence that is smarter than humans.

    Next, for something that I think humans cannot do. It is not possible to create a program that takes an arbitrary program and its input and always correctly determines whether that program finishes running – this is Turing’s halting problem, and Rice’s theorem extends the impossibility to any non-trivial property of a program’s behavior. While it is possible to write programs that provably follow a specification, this is rarely done, and specifications can have bugs in them. So from this paragraph and the previous one, I expect that humans will be able to create artificial intelligence that is smarter than humans, but at least some of the AIs created will not obey us.
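
    For the curious, the classic diagonal argument behind that claim can be sketched in a few lines of Python:

        # Suppose, for contradiction, that a perfect halting oracle existed:
        def halts(program, data):
            # Hypothetical: would return True iff program(data) finishes.
            # No correct implementation can exist; this stub is a placeholder.
            raise NotImplementedError

        def paradox(program):
            if halts(program, program):
                while True:       # the oracle said we halt, so loop forever
                    pass
            else:
                return            # the oracle said we loop, so halt at once

        # paradox(paradox) halts if and only if halts() says it does not,
        # so no correct halts() can exist. Rice's theorem generalizes this
        # to every non-trivial question about what a program will do.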

    The third part is how soon this will happen. There are two parts: 1. having the computational power available, and 2. having the software written. Determining when computers have computational power equal to or greater than a human’s is not an exact science, but under a variety of assumptions, the world definitely has more artificial computational power than a single human brain. There probably are supercomputers that have more computational power than a single human brain. A $1000 computer is still probably much slower than a single human brain. This does mean that right now only people with access to supercomputers would be able to create a human-level artificial intelligence. How long the software will take to be written is a good question. I am guessing that once many people have access to sufficiently powerful computers, it will only take a few years. My guess is based on the fact that evolution managed to do it multiple times (Corvidae, dolphins, and great apes, to name some that have demonstrated self-recognition in a mirror and problem solving), and that humans have managed to program computers to beat the world’s best humans soon after the computational power became available (as in Chess and Jeopardy). The timing is very uncertain.
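
    Here is the kind of rough estimate I mean (the neuron count, synapses per neuron, and firing rate are order-of-magnitude assumptions, and counting one synaptic event as one operation is itself an assumption):

        # One common back-of-envelope for "brain-equivalent" compute.
        neurons = 8.6e10        # neurons in a human brain
        synapses_per = 1.0e4    # synapses per neuron, order of magnitude
        rate_hz = 100.0         # generous average signaling rate

        brain_ops = neurons * synapses_per * rate_hz
        print("%.1e ops/s" % brain_ops)   # ~1e17, i.e. ~100 petaflops
        # Different assumptions shift this by orders of magnitude either way.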

    My last comment is that thinking rationally about thinking artifacts is plain difficult. Pascal Boyer in Religion Explained more or less states that there are counter-intuitive concepts that frequently appear in religions, and the possibilities for artifacts are “Tools and other artifacts can be represented as having biological properties (some statues bleed) or psychological ones (they hear what you say).” [pg 78] Both directions are religious beliefs. Alan Turing anticipated this in his 1950 paper, where he stated (and rejected) the theological objection: “God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.”

    I only ask you to think very carefully and very rationally about thinking computers, because careful thought in this area is desperately needed.

    Josh Cogliati

    This is my partial list of reading on the subject (If you read only one thing, I recommend Part VIII Conclusions of Artificial Intelligence: A Modern Approach):

    What Technology Wants, Kevin Kelly
    Thinking about Android Epistemology by Kenneth M. Ford, Clark Glymour and Patrick Hayes
    The Fate of the Species: Why the Human Race May Cause Its Own Extinction and How We Can Stop It by Fred Guterl
    Robot: Mere Machine to Transcendent Mind by Hans P. Moravec
    The Age of Spiritual Machines: When Computers Exceed Human Intelligence by Ray Kurzweil
    Almost Human: Making Robots Think by Lee Gutkind
    The Feeling of What Happens by Antonio R. Damasio
    How to survive a robot uprising by Daniel Wilson

    Portions read:
    Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
    Introduction to the Theory of Computation by Michael Sipser
    Perspectives on the Computer Revolution, Editor Zenon W. Pylyshyn
    After the Internet: Alien Intelligence by James Martin

    Sermon I gave on subject: http://jjc.freeshell.org/sermons/there_is_no_map.html
