This blog looks at the current limitations of Artificial Intelligence with respect to autonomy.
What is Artificial Intelligence Good at Now?
Artificial Intelligence is good at some very specific things: pricing a spare room in your house for Airbnb (see “The Secret of Airbnb's Pricing Algorithm”), sequencing your genome from a sample of your DNA, landing an Airbus A380, recognising your face or fingerprint to authenticate your login to a device, or detecting anomalous behaviour in a computer network to defend against unauthorised intrusions.
But it is not so good at other things, particularly general activities that are effortless for human beings: picking out the face of one individual with exquisite precision from a crowd, understanding the meaning of a 17-syllable Japanese haiku, grasping in a split second what an image represents, getting the meaning of a joke, making a convincing argument while getting a point across, caring about a cause, and a myriad of other activities that make us human (see “Mastery of AI has been harder than expected”).
For those of us no longer in the rosy bloom of youth, the complexity and interaction of memory, experience and emotion can have a profound emotional impact. For example, many people have a film to which they formed a very strong attachment, usually one they saw as a child. When they watch this film again as an adult it can elicit very strong emotions; they are transported so completely to another time and place that the experience can be overwhelming. Someone close to me weeps from beginning to end when watching “The Sound of Music”, even though they have seen it dozens of times and know the storyline off by heart. This reaction could be dismissed as sentimentality, but its importance in the context of intelligence is that the interpretation of the film is so tightly bound up with experience, memory, context and mindset that the intelligence which leads to the experience cannot be isolated from them. In fact, it gives us a clue as to why we might, as a species, have developed intelligence far beyond that required to survive in the natural world: because it is enjoyable and rewarding to participate in the rich experiences that intelligence allows. To paraphrase Mahatma Gandhi, “Intelligence is its own reward”.
This leads us to an interesting question regarding the synthesis of Artificial Intelligence systems: should these systems “enjoy” the act of thinking intelligently, and if so, how could we achieve that? But that is a question to be explored at another time.
What is autonomy and why is it so hard?
Autonomy is difficult because so much capability and broad understanding is required to achieve the level of independent existence that autonomy demands. The scope of required capability is expansive, so we will focus on just a few examples to illustrate the difficulty.
Delivery and Repair
For an autonomous, artificially intelligent machine to exist at all, it needs a means of coming into existence. Let's assume there is somewhere a robot factory (which has its own autonomy conundrum to deal with) that produces a robot to be employed as a Jackaroo (an Australian cowboy or farm hand). A human Jackaroo can contact a prospective employer and arrange their own transport from their current place of residence to the new job. An autonomous Jackaroo robot (which already needs to know about livestock husbandry and cropping) will additionally need to know how to make itself available for employment, negotiate travel on commercial transport, purchase energy to sustain itself, and protect itself from interference while outside the safety of its new employer's property. Once it has reached its new place of work it will need to be able to repair itself. This is akin to human healing, but whereas humans can heal themselves without understanding biology, an autonomous robot would need to fully understand robotic engineering to repair itself, which is a very different skill set from that required of a Jackaroo. It will also need access to sophisticated tools to manage this feat.
In the case where it submits itself to a workshop for repair (i.e. visits a robot hospital, perhaps by robot ambulance if it is incapacitated), it will still require detailed knowledge of its own health (a mechanism akin to human pain) to determine when such a visit is required.
Context and Motivation
Building a general-purpose robot that can perform human jobs and negotiate the real world with a human degree of autonomy would require that the machine has the real-world context for its “job”, and then it will need a motivation and reward framework to enable it to prioritise the myriad of tasks that it will be confronted with in real-world situations.
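As a purely illustrative sketch of what such a motivation and reward framework might look like at its simplest (the class names, tasks and reward numbers below are invented for this post, not drawn from any real robotics system), competing tasks could be ranked by weighing expected reward against urgency:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float               # lower value pops first (heapq is a min-heap)
    name: str = field(compare=False)

class MotivationScheduler:
    """Toy reward framework: priority = -(reward * urgency)."""
    def __init__(self):
        self._queue = []

    def add(self, name, reward, urgency):
        # Negate so the highest reward*urgency product pops first.
        heapq.heappush(self._queue, Task(-(reward * urgency), name))

    def next_task(self):
        return heapq.heappop(self._queue).name

sched = MotivationScheduler()
sched.add("muster cattle", reward=5, urgency=0.5)   # routine job
sched.add("recharge battery", reward=3, urgency=2)  # self-preservation
sched.add("mend fence", reward=4, urgency=0.25)

print(sched.next_task())  # "recharge battery" (3 * 2 = 6, the largest product)
```

The point of the sketch is only that prioritisation forces the designer to encode values: the robot must weigh self-preservation (recharging) against its employer's goals (mustering), which is exactly the kind of real-world motivation problem the paragraph above describes.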
Prof. David Mindell discusses the concept of autonomy in his book “Our Robots, Ourselves: Robotics and the Myths of Autonomy”. He argues that there are three main myths relating to robots and artificial intelligence:
- The myth of replacement. That a robot can entirely replace a human in a certain type of job with the associated independence and needs.
- The myth of linear progress. That there is a natural progression of technology that leads from human tasks, to remote tasks, to full autonomy.
- The myth that full autonomy is the highest level of the technology. That once artificial intelligence is able to act independently in response to its environment, the technology will have reached its zenith.
So the answer is that there is far more that Artificial Intelligence cannot do now when compared with Human Intelligence. But the difference in “smartness” between the two is not really the key differentiator; more important is the degree to which the intelligence, human or artificial, is embedded in the real world and the degree of interaction it has with it. Just think of the effort you need to put into keeping your older devices (and by that I mean 3 or 4 years old), such as laptops, tablets and phones, going, and the level of care you need to prevent them from being infected by viruses and other malevolent vectors. For Artificial Intelligence all of this is a severe limitation on the level of autonomy that can realistically be achieved now, and it is unlikely to be overcome in the near term.