By Dr. Matthew Cole

October 2019

Introduction

The notion of what constitutes intelligence, and therefore what constitutes an intelligent machine, has been widely debated throughout the history of Western thought. Descartes’ mind-body dualism, Marx’s humanist distinction between the intentionality of an architect and the functionality of a bee, and Allen Newell and Herbert Simon’s ‘Physical Symbol System’ hypothesis, which argued that a physical symbol system “has the necessary and sufficient means for general intelligent action”, are just a few examples. Stories of something approximating an intelligent machine go back to the eighth century BCE in Homer’s Iliad. These self-moving machines or ‘automata’ were made by Hephaestus, the god of smithing, and were servants “made of gold, which seemed like living maidens. In their hearts there is intelligence, and they have voice and vigour”.[i] In De Motu Animalium, Aristotle essentially conceived of planning as information-processing.[ii] In developing ontology and epistemology he also arguably provided the basis of the representation schemes that have long been central to AI.[iii] The first edition of Russell and Norvig’s famous text Artificial Intelligence: A Modern Approach[iv] even shows, on its cover, the notation of Alice in Wonderland author Lewis Carroll[v] for Aristotle’s theory of the syllogism – the basis of logic-based AI.

From Descartes to Turing

The idea that we can test machinic intelligence is nearly as old as the concept of intelligent machines. Writing in 1637, Descartes proposed two differences that distinguish human from machine in a way that is much more demanding than the Turing Test (see below):

If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men.[vi]

The first test imagines a machine so constituted that it can “utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs”. However, this machine cannot fully produce speech such that it could “reply appropriately to everything that may be said in its presence”. This is essentially a description of many contemporary artificial intelligences. The second test concerns situations in which machines can “perform certain things as well as or perhaps better than any of us can do”, yet fall short in others, which shows that they did not “act from knowledge”, but only from “the disposition of their organs”. An intelligent machine can pass both of Descartes’ tests only if its functionality goes beyond a narrowly defined intelligence and includes the capacity for knowledge: it must understand any given question well enough to answer beyond programmed responses. This leads Descartes to conclude that it is “impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act”[vii].

Intelligent machines that approximate human understanding have yet to be produced. However, intelligent machines of a narrower type have existed – first virtually, then in reality – since Charles Babbage’s Analytical Engine of 1834. This machine was designed to be programmed with punched cards (an early form of program and data input) and could perform general sequences of arithmetical and logical operations. Ada Byron King, Countess of Lovelace – popularly known as Ada Lovelace – worked with Babbage and prophesied the implications of the algorithms that underpinned it. We can think of algorithms as a type of virtual machine, or an “information-processing system that the programmer has in mind when writing a program, and that people have in mind when using it”[viii]. Lovelace theorised virtual machines that formed the foundations of modern computing, including stored programs, feedback loops and bugs, among other things. She also recognised the potential generality of such a machine to represent nearly “all subjects in the universe”, predicting that a machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”, though she could not say how[ix].
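To make the idea of an algorithm as a virtual machine slightly more concrete, here is a minimal sketch in modern Python. It is a purely illustrative toy, not a model of the Analytical Engine or of anything Lovelace actually wrote: a stored program of simple operations is executed one instruction at a time, and a conditional jump supplies the kind of feedback loop she described.

```python
# A minimal "virtual machine" sketch (illustrative only): a stored program of
# simple operations, executed step by step over a set of named registers.

def run(program, registers):
    """Execute a stored program: a list of (operation, arguments) tuples."""
    pc = 0  # program counter: which instruction is being executed
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":            # set a register to a constant value
            registers[args[0]] = args[1]
        elif op == "add":          # registers[a] += registers[b]
            registers[args[0]] += registers[args[1]]
        elif op == "sub":          # registers[a] -= registers[b]
            registers[args[0]] -= registers[args[1]]
        elif op == "jump_if_pos":  # jump back to instruction i while registers[a] > 0
            if registers[args[0]] > 0:
                pc = args[1]
                continue
        pc += 1
    return registers

# A stored program that sums the integers 1..5 by looping:
program = [
    ("set", "total", 0),
    ("set", "n", 5),
    ("set", "one", 1),
    ("add", "total", "n"),        # total += n
    ("sub", "n", "one"),          # n -= 1
    ("jump_if_pos", "n", 3),      # feedback loop: repeat from instruction 3 while n > 0
]

print(run(program, {}))  # {'total': 15, 'n': 0, 'one': 1}
```

Everything this toy machine does is fixed in advance by its table of instructions, a point Lovelace herself made and one that Turing returns to below.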

Advancements in mathematics and logic allowed for a breakthrough in 1936, when Alan Turing showed that every possible computation can in principle be performed by a single mathematical system, now known as a Universal Turing Machine[x]. Turing spent the next decade codebreaking at Bletchley Park during World War II and thinking about how this virtual machine could be turned into an actual physical machine. He helped design the first modern computer, which was completed in Manchester in 1948, and he is usually credited with providing the theoretical break that led to modern computation and AI. In an unpublished paper from 1947, Turing discussed “intelligent machines”. A few years later he published his famous paper asking “Can a machine think?” and arguing that machines are capable of intelligence. To make his case, he first constructed an “imitation game”, now known as the “Turing Test”, which continues to influence popular debates about AI[xi]. The test involves three people: a man (A) and a woman (B), who communicate through typescript with an interrogator (C) in a separate room. The interrogator aims to determine which of the other two is the man and which is the woman. Turing argued that the question “What will happen when a machine takes the part of A in this game?” should replace the original question “Can a machine think?”. A failure to distinguish between machine and human would indicate the intelligence of the machine. Turing then went on to consider nine different objections, which form the classical criticisms of artificial intelligence. One of the most enduring is ‘Lady Lovelace’s Objection’: the machine has “no pretensions to originate anything. It can do whatever we know how to order it to perform”[xii]. However, contemporary “expert systems” and “evolutionary” AI have reached conclusions unanticipated by their designers[xiii]. Interestingly, a machine with a set of responses that happened to perfectly fit the questions asked by a human would pass the Turing Test, but would not pass Descartes’ tests.
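Turing’s 1936 result is easier to picture with a short sketch. The simulator below is a hedged illustration in Python of how a Turing machine is commonly formalised (the function name, rule format and example are assumptions made for this sketch, not Turing’s own notation): a tape of symbols, a read/write head, and a finite table of rules specifying what to write, which way to move and which state to enter next. Turing’s insight was that a single, suitably written rule table can interpret any other, which is what makes the machine ‘universal’.

```python
# A toy Turing machine simulator: a tape of symbols, a read/write head, and a
# finite table of rules mapping (state, symbol) -> (symbol to write, move, next state).
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = defaultdict(lambda: blank, enumerate(tape))  # unbounded tape, blank beyond the input
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule table: flip every bit of a binary string, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "1011"))  # prints 0100_
```

The example table merely flips the bits of a binary string, but in principle any computation at all can be expressed as a rule table of this kind.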

Matt is a post-doctoral research fellow at the Centre for Employment Relations, Innovation and Change (CERIC) at Leeds University Business School. Matt is also a research affiliate of Autonomy, coordinator of the IIPPE Political Economy of Work Group and a member of the British Universities Industrial Relations Association.

Endnotes

[i] Homer, 1924. The Iliad. William Heinemann, London. pp. 417–421

[ii] Aristotle, 1978. Aristotle’s De motu animalium. Princeton University Press, Princeton.

[iii] Glymour, C., 1992. Thinking Things Through. MIT Press, Cambridge, MA.

[iv] Russell, S.J. and Norvig, P., 2010. Artificial Intelligence: A Modern Approach, 3rd ed. Pearson Education, Upper Saddle River, NJ.

[v] Carroll, L., 1958. Symbolic Logic and The Game of Logic (both books bound as one). Dover, New York.

[vi] Descartes, R., 1931 [1637]. The Philosophical Works of Descartes. Cambridge University Press, Cambridge.

[vii] Ibid., p. 116

[viii] Boden, M.A., 2016. AI: Its Nature and Future. OUP, Oxford. p. 4

[ix] Lovelace, A.A., 1989. Notes by the Translator (1843), in: Hyman, R.A. (Ed.), Science and Reform: Selected Works of Charles Babbage. Cambridge University Press, Cambridge, pp. 267–311.

[x] Turing, A.M., 1936. “On Computable Numbers with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, Series 2, 42/3 and 42/4., in: Davis, M. (Ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions. Raven Press, Hewlett, NY, pp. 116–53.

[xi] Nilsson, N.J., 1998. Artificial Intelligence: A New Synthesis. Morgan Kaufmann, San Francisco.

[xii] Lovelace, A.A., 1989. Notes by the Translator (1843), in: Hyman, R.A. (Ed.), Science and Reform: Selected Works of Charles Babbage. Cambridge University Press, Cambridge, p. 303.

[xiii] See Boden, M.A., 2016. AI: Its Nature and Future. OUP, Oxford. See also Luger, G.F., 1998. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. Addison Wesley Longman, Harlow, England.