Alan Turing is famous for the Turing Test, described in his landmark paper "Computing Machinery and Intelligence." There is probably no other figure more important to the field of computer science. But despite the recent popularity of the movie The Imitation Game, this is not why he is so important to computer scientists. There is another paper, which all computer scientists study, entitled "On Computable Numbers, with an Application to the Entscheidungsproblem." In this paper, the 23-year-old Turing raises a fundamental question: what is computation?

Computing didn't begin with the invention of the computer; humans had been building computational systems, or algorithms, all along. Prehistoric humans created an algorithm to convert wheat and water into bread. Ancient Romans created algorithms to determine which men should be hung on a cross. And of course, mathematicians had been creating algorithms for millennia: around 300 B.C., Euclid created an algorithm to compute the greatest common divisor of two numbers. Computing had been done by cooks, judges, and mathematicians since the dawn of civilization, but none of them had bothered to figure out exactly what it is and, just as importantly, what it isn't.

What Turing proposed is this: computing is whatever a Turing machine can do. A Turing machine is an exceedingly simple machine, consisting of little more than a long tape and a moving head that can read or write a zero or one on this tape and move left or right. Amazingly, this simple machine is widely agreed to completely capture our intuitive notion of computation. If a problem can be computed, it can be computed by a Turing machine, and if it cannot be computed by a Turing machine, it cannot be computed at all.
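To make the idea concrete, here is a minimal sketch of a one-tape Turing machine in Python. The names (`run_tm`, `INC`) and the rule-table encoding are illustrative, not from Turing's paper; the three-rule table shown implements binary increment by propagating a carry from the right end of the tape.

```python
def run_tm(tape, rules, state="carry", blank="_", max_steps=10_000):
    """Minimal one-tape Turing machine. `rules` maps (state, symbol) ->
    (write, move, next_state), where move is -1 (left) or +1 (right)."""
    tape = list(tape)
    head = len(tape) - 1                 # start at the rightmost cell
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < 0:                     # grow the tape on the left
            tape.insert(0, write)
            head = 0
        elif head >= len(tape):          # grow the tape on the right
            tape.append(write)
        else:
            tape[head] = write
        head += move
    raise RuntimeError("machine did not halt within the step budget")

# Binary increment: flip trailing 1s to 0s, then write the carry.
INC = {
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", -1, "halt"),
    ("carry", "_"): ("1", -1, "halt"),
}

assert run_tm("1011", INC) == "1100"     # 11 + 1 = 12
assert run_tm("111", INC) == "1000"      # 7 + 1 = 8, tape grows leftward
```

Everything a modern computer does can, in principle, be reduced to transition tables like `INC`, just vastly larger.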
In this paper, Turing proves that there is indeed a problem that cannot be computed. This problem, which has come to be known as the halting problem, is to program a computer to determine whether another computer will, in fact, be able to complete its program on a given input. Turing proves that this particular problem cannot be computed. What this means is that there are mathematically well-defined problems that cannot be computed. What about all the problems that human intelligence is able to solve? Perhaps some of these problems are computable and some of them are not. There is no reason to assume that all these problems are computable, so we have no reason to assume that our brains are, in fact, computers.

The computer Hal in the movie 2001: A Space Odyssey was the leader of an expedition in space and was in general more intelligent than the human crew members under Hal's control. Hal decided it was best to murder the human crew members because it believed it could better fulfill the goals of the mission without them. The movie was made in the 1960s with detailed advice from leading AI experts, including the legendary AI pioneer Marvin Minsky. Hal had impressive abilities of conversation, language understanding, face recognition, even lipreading, and in this way represented overly optimistic predictions about the progress of AI. The actual computers in the year 2001 were far inferior to Hal in all these areas, and indeed the same can be said of today's computers, but that is not the biggest difference between Hal and real computers. The biggest difference can be found in the description I just gave, where I said Hal decided it was best to murder crew members because it believed it could better fulfill the goals of the mission. Hal is an entity capable of making decisions, having beliefs, and pursuing goals. In short, Hal is conscious.
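Before moving on, it is worth seeing the self-reference at the heart of Turing's halting-problem proof. The sketch below is an informal Python rendering of the diagonal argument, with illustrative names (`make_contrarian`, `predictor_is_wrong`): assume some claimed halting predictor exists, and build a program that does the opposite of whatever the predictor says about it. Whatever the predictor answers, it is wrong about that program.

```python
def make_contrarian(halts):
    """Given any claimed halting predictor `halts(program)`, build a
    program that does the opposite of whatever `halts` predicts for it."""
    def contrarian():
        if halts(contrarian):
            while True:          # predicted to halt -> loop forever
                pass
        # predicted to loop -> halt immediately
    return contrarian

def predictor_is_wrong(halts):
    """By construction, the contrarian halts exactly when `halts` says
    it doesn't. (We never run the contrarian, so no infinite loop.)"""
    c = make_contrarian(halts)
    predicted = halts(c)
    actually_halts = not predicted   # read off from the construction
    return predicted != actually_halts

# No matter what a candidate predictor answers, it fails on its contrarian:
assert predictor_is_wrong(lambda program: True)
assert predictor_is_wrong(lambda program: False)
```

This is only a sketch of the intuition, not the formal proof: Turing's argument works over machine encodings, whereas here the contradiction is simply read off from how `contrarian` is built.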
The idea of consciousness is difficult to define. At the same time, if there is one thing we are certain of, it is that we humans are conscious beings: we make decisions, have beliefs, and pursue goals. This can scarcely be doubted, just as it can scarcely be doubted that physical objects such as tables, chairs, telephones, and computers are not conscious. So why are we so willing to accept the idea of a conscious, intelligent computer, not only in 2001: A Space Odyssey but in countless other sci-fi fantasies such as Star Wars, The Matrix, Ex Machina, and Halo? These fantasy AI systems are able to match or even exceed specific human abilities in areas like language understanding and visual processing; that is, they exhibit many of the specific abilities we think of as uniquely human. So it is a natural leap to think that an AI system that is able to see and talk the way we do is also able to think and feel the way we do. This is the most exciting and terrifying thing about AI: the idea that AI assistants may soon start to have their own ideas, preferences, and plans about the world. As Elon Musk famously said in 2014, "with AI, we're summoning the demon." But this is simply a failure of imagination. The only beings we have experience with who are highly skilled and also have their own thoughts and feelings are humans. We have experience with animals and inanimate objects that lack these skills and also lack the internal life we have. So we tend to assume that if there is a new system or being that duplicates the skills, it will also naturally have a rich internal life; it will be conscious. But there is no reason whatsoever to assume that, and there are no serious arguments that this is a likely possibility.
As we have seen, we actually do not have a sound basis to estimate the likelihood that AI systems will develop human-level intelligence in the near future. But even if that does happen, there is no evidence whatsoever that it will lead to any kind of conscious being with its own internal life. The fantasies of super-intelligent, conscious AI systems, whether cute or malevolent, are simply that: fantasies. There is no basis whatsoever to expect that this will occur in the near future, or ever. What is coming with AI development might be even stranger and more momentous than anything dreamed of in Hollywood.