By Dr. Matthew Cole
What puts the ‘intelligence’ in Artificial Intelligence?
Today there are many different kinds of intelligent machines, with many different applications. The study of intelligent machines was essentially rebranded as “artificial intelligence” via a proposal written in 1955 for a conference held at Dartmouth College during the summer of 1956.[i] In that proposal, the authors state that “a truly intelligent machine will carry out activities which may best be described as self-improvement”.[ii] However, a single definition of artificial intelligence is difficult to pin down, especially in a field rife with debate. For perspective, Legg and Hutter collect over seventy different definitions of the term.[iii] It has been variously described as the “art of creating machines that perform functions that require intelligence when performed by people”,[iv] as well as “the branch of computer science that is concerned with the automation of intelligent behaviour”.[v] One of the best definitions comes from the highly influential philosopher and computer scientist Margaret Boden: “Artificial intelligence (AI) seeks to make computers do the sorts of things that minds can do”.[vi] Within this definition, Boden (2016, p. 6) classifies five major types of AI, each with its own variations. The first is classical, or symbolic, “Good Old-Fashioned AI” (GOFAI, mentioned in a previous post), which can model learning, planning and reasoning based on logic; the second is artificial neural networks, or connectionism, which can model aspects of the brain, recognise patterns in data and facilitate “deep learning”; the third is evolutionary programming, which models biological evolution and brain development; the last two, cellular automata and dynamical systems, are used to model development in living organisms.
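To make the last of Boden’s categories concrete, a cellular automaton can be sketched in a few lines of code. The example below is an illustrative sketch (not drawn from Boden’s book): it implements a one-dimensional “elementary” cellular automaton, in which each cell is 0 or 1 and its next state depends only on itself and its two neighbours, looked up in an eight-entry rule table.

```python
def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    `rule` is the conventional rule number: bit k of the number gives
    the next state for the neighbourhood whose three bits encode k.
    The row wraps around at the edges.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n]
        centre = cells[i]
        right = cells[(i + 1) % n]
        k = (left << 2) | (centre << 1) | right  # neighbourhood as a 3-bit integer
        out.append((rule >> k) & 1)
    return out

# Start from a single live cell and print a few generations.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Even this tiny update rule generates surprisingly intricate patterns over time, which is why such automata are used to study how complex, lifelike structure can emerge from simple local interactions.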
None of these types of AI can currently approximate anything close to human intelligence in terms of general cognitive capacities. A human level of AI is usually referred to as artificial general intelligence, or AGI. An AGI should be capable of solving complex problems across many different domains and of autonomous self-control, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions (Goertzel and Pennachin, 2007). The only AI that exists right now is of a narrower type (often called artificial narrow intelligence, or ANI), in that its intelligence is generally limited to the frame in which it is programmed. Some intelligent machines can currently improve autonomously through deep learning, but these remain a weak form of AI relative to human cognition. In an influential essay from 1980, John Searle draws a distinction between “weak” and “strong” AI that is useful for understanding the current capacities of AI versus AGI. For weak AI, “the principal value of the computer in the study of the mind is that it gives us a very powerful tool”; while for strong AI “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states”.[vii] For strong AI, the programs are not merely tools that enable humans to develop explanations of cognition; the programs themselves are essentially the same as human cognition.
The Prospect of General Intelligence
While we do not yet have AGI, investment in ANI is only increasing and will have a significant impact on scientific and commercial development. These narrow intelligences are very powerful, able to perform a huge number of computations that would in some cases take humans multiple lifetimes. For example, computers have beaten world champions in popular games of creative reasoning such as chess (IBM’s Deep Blue in 1997), Jeopardy! (IBM’s Watson in 2011) and Go (Google’s AlphaGo in 2016). The Organisation for Economic Co-operation and Development (OECD) found that AI start-ups’ share of worldwide private equity investment increased from just 3% in 2011 to roughly 12% in 2018.[viii] Germany plans to invest €3 billion in AI research between now and 2025 to help implement its national AI strategy (“AI Made in Germany”), while the UK has a thriving AI start-up scene and £1 billion of government support.[ix] In the USA, venture capital investment in AI reached US$5 billion in 2017 and US$8 billion in 2018.[x] The heavy investment in ANI start-ups and the extremely high valuations of some of the leading tech companies funding AGI research might lead to an artificial general intelligence in the coming years.
Achieving an artificial general intelligence could be a watershed moment for humanity, allowing complex problems to be solved at a scale once unimaginable. However, the rise of AGI comes with significant ethical issues, and there is debate as to whether AGI would be a benevolent or malevolent force in relation to humanity. Some fear that such developments could lead to an artificial superintelligence (ASI), which would be “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”.[xi] In an increasingly connected world (the so-called internet of things), artificial superintelligences could potentially “cause human extinction in the course of optimizing the Earth for their goals”.[xii] It is important, therefore, that humans remain in control of our technologies and use them for social good. As Stephen Hawking noted in 2016, “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which”.
Matt is a post-doctoral research fellow at the Centre for Employment Relations, Innovation and Change (CERIC) at Leeds University Business School. Matt is also a research affiliate of Autonomy, coordinator of the IIPPE Political Economy of Work Group and a member of the British Universities Industrial Relations Association.
[i] McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E., 2006. A proposal for the Dartmouth summer research project on artificial intelligence: August 31, 1955. AI Magazine 27, 12.
[ii] Ibid., p. 14.
[iii] Legg, S., Hutter, M., 2007. Universal Intelligence: A Definition of Machine Intelligence. Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science 17, 391. https://doi.org/10.1007/s11023-007-9079-x
[iv] Kurzweil, R., 1990. The age of intelligent machines. MIT Press, Cambridge, MA.
[v] Luger, G.F., 1998. Artificial intelligence: structures and strategies for complex problem solving. Harlow, England. p. 1.
[vi] Boden, M.A., 2016. AI: Its Nature and Future. OUP, Oxford. p. 1.
[vii] Searle, J.R., 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3, p. 417. https://doi.org/10.1017/S0140525X00005756
[viii] OECD, 2018. Private Equity Investment in Artificial Intelligence (OECD Going Digital Policy Note). Paris.
[ix] Deloitte, 2019. Future in the balance? How countries are pursuing an AI advantage (Insights from Deloitte’s State of AI in the Enterprise, 2nd Edition survey). Deloitte, London.
[xi] Bostrom, N., 2006. How Long Before Superintelligence? Linguistic and Philosophical Investigations 5, p.11.
[xii] Yudkowsky, E., Salamon, A., Shulman, C., Nelson, R., Kaas, S., Rayhawk, S., McCabe, T., 2010. Reducing Long-Term Catastrophic Risks from Artificial Intelligence. Machine Intelligence Research Institute. p. 1.