What is Artificial Intelligence?
Last post I introduced my premise that we have not yet seen anything called Artificial Intelligence that possesses real intelligence. We have just seen machines that can perform sequences of simple calculations on large amounts of data very quickly, and as a result carry out tasks that have hitherto been the sole domain of humans or of animals considered to have higher intelligence (there’s that word again). The meaning of the word intelligence sits at the edge of our ability to define: you find yourself using the word to describe itself.
If you can break a process down into a series of pre-defined steps that respond to any relevant stimulus in a robust manner and produce a useful (though not necessarily 100% correct) response or reaction, then while this feels like intelligence, it is really just a program (or a Turing Machine, or a Finite State Automaton). The question is: what cannot be broken down in such a manner, given the current state of the art? Free will? Love? Imagination? Insanity? Creativity? Friendship? Admiration? Hate? All the things that make us human. If you take away emotion from a human being you end up with … a machine. A machine that can perform complex tasks reliably and consistently (within physical constraints of time and effort), but without the wants, desires, prejudices and all the other drivers that make humans what they are.
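To make that concrete, here is a minimal sketch (in Python) of such a “pre-defined steps” machine: a finite state automaton whose every response to a stimulus is looked up in a table written in advance. The states and stimuli are invented purely for illustration:

```python
# A minimal finite state automaton: every "response" to a stimulus is
# looked up in a table the designer wrote in advance. The states and
# stimuli below are invented purely for illustration.

TRANSITIONS = {
    ("idle", "greeting"):      ("listening", "Hello! How can I help?"),
    ("listening", "question"): ("answering", "Let me look that up."),
    ("answering", "thanks"):   ("idle", "You're welcome."),
}

def respond(state, stimulus):
    """Return (next_state, response) from the pre-defined table,
    falling back to a safe default for anything unanticipated."""
    return TRANSITIONS.get((state, stimulus), (state, "Sorry, I don't understand."))

state = "idle"
for stimulus in ["greeting", "question", "thanks", "dance"]:
    state, response = respond(state, stimulus)
    print(f"{stimulus!r} -> {response}")
```

However responsive this feels in conversation, nothing happens that was not scripted into the table; that is the whole point.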
Every machine built in history has had a predefined set of constraints. It can neither want nor decide to do anything that its designers and builders have not carefully and meticulously thought through and scripted in painstaking detail. This is the only way in which we know how to build machines. Even genetic algorithms and artificial neural networks are totally constrained by the training goals (or cost functions) they evolve to achieve (or minimise). They learn what is in the data, and they respond to new data according to the patterns that led to successful outcomes during past training and optimisation.
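As a toy illustration of that constraint, here is a one-parameter “learner” minimising a cost function by gradient descent. The cost function is arbitrary, chosen here purely for illustration; the point is that the learner can only ever chase the goal its designer encoded:

```python
# A one-parameter "learner" minimising a designer-chosen cost function
# by gradient descent. The target value 3.0 is arbitrary, chosen here
# purely for illustration.

def cost(w):
    return (w - 3.0) ** 2       # the designer's goal: w should end up near 3

def gradient(w):
    return 2.0 * (w - 3.0)      # derivative of the cost with respect to w

w = 0.0                         # initial guess
for _ in range(50):
    w -= 0.1 * gradient(w)      # move downhill on the designer's cost surface

print(f"learned w = {w:.4f}")   # converges to 3.0 -- and nothing else
```

The learner “evolves”, but only towards the minimum the cost function defines; change the cost function and you change everything it will ever want.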
If a hypothetical AI is trained on data that includes the commission of a crime, and the result of the crime was assigned a positive outcome in the training set, then the AI is no more responsible for committing a crime based on this training than a knife is for stabbing someone. The responsible party is the user of the tool, who purposely trained the AI to commit a crime.
What is there to be afraid of now?
It is almost guaranteed that an AI will be involved in a crime in the foreseeable future, and equally likely that one will be involved in an accident that causes injury, loss or even death. It is also guaranteed that the circumstances will not include the AI having become self-directing and making a decision, outside its original programming or training, to perpetrate an illegal or malevolent act. There is simply no basis, other than in science fantasy, for believing that anything built by humans could do this. If, tomorrow, clear evidence of self-consciousness and self-direction could be demonstrated in any artificially intelligent construct, then we could start projecting such a possibility; but today we may as well be afraid of time machines that can send someone back in time to orchestrate foul play against your grandfather before your mother was born! An entertaining story, but with no possibility of occurring outside the realms of imagination.
The recent calls by various luminaries of science and technology (Elon Musk, Bill Gates, Stephen Hawking et al.) to be afraid, very afraid, would be laughable if they weren’t exploiting a visceral fear humans seem to carry. It is like the fear of sharks, which is irrational in the sense that you are more likely to be killed taking a selfie than by a shark attack. In the case of AI, this fear is surely related to the “Uncanny Valley”: the creep factor humans experience when they encounter animated machines that are close to human-looking, but not quite. If you have ever been with a three- or four-year-old at a theme park you will have seen this effect exaggerated, as these little ones’ sense of the Uncanny Valley is acute and any animatronics sends them over the edge; they cannot reconcile these seemingly alive creatures with their expected (and very limited) model of the universe.
What was there to be afraid of 64 years ago?
In the Fifties there was a similar rash of what has been coined “AI Panic” (see “Mocking AI Panic”). Alan Turing himself took on this media frenzy in a way that forever elevates him to the level of a great statesman. He gently mocked those gripped by AI Panic, famously joking that in the event of an AI run amok we might be able to “keep the machines in a subservient position, for instance by turning off the power at strategic moments.” Yet Turing was not dismissive of the possibility of machine intelligence rivalling or exceeding that of humans: “It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort.”
Of course Turing himself had a vested interest in the idea that human intelligence could itself be described, and modelled, by a sufficiently large machine, so he was absolutely not a Luddite in this respect; he simply did not see the eventuality of machines rising up and slaying their masters. Maybe this was due to his rejection of the oppression of the lower classes by the political-economic power structures of the mid-20th Century (and, by association, his personal battle against oppression as a gay man). The fear of revolution was rife in the anti-communist rhetoric of the time, and loose parallels could be drawn between the idea of the Robots rising against oppression and slaying their human masters and the idea that the Workers might revolt and seize power for themselves.
Fast Forward back to 2015
So why has this AI Panic re-emerged? Cynically, one could look at the recent list of Hollywood movies and write it off as a PR campaign to garner interest in the subject and get people who don’t normally follow Sci-Fi into the cinemas on the grounds of relevance to “current affairs”. The term Singularity, as re-popularised by the Futurist Ray Kurzweil, borrows from the notion of exponential growth to extrapolate to a near-term future in which AIs are not just as smart as humans but orders of magnitude smarter, thus allowing them to make humans irrelevant. I also think that some of these people could well be trying to sway public opinion from a philanthropic tack, in order to put pressure on governments to outlaw the use of AI in weapons. That is entirely commendable: anything we can do to avoid making powerful weapons smarter is likely to be a good idea.
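For what it’s worth, the arithmetic behind that Singularity extrapolation is easy to sketch. The snippet below simply compounds a fixed doubling time; the 18-month doubling period and the “capability” units are illustrative assumptions, not claims about real hardware or real intelligence:

```python
# Singularity-style extrapolation: assume "capability" doubles on a
# fixed schedule and compound it. The doubling time and units are
# assumed here for illustration only.

doubling_time_years = 1.5       # assumed, Moore's-law flavoured
for year in range(0, 31, 5):
    multiplier = 2 ** (year / doubling_time_years)
    print(f"year {year:2d}: capability x{multiplier:,.0f}")
```

After thirty years of uninterrupted doubling the multiplier passes a million; that curve-reading is the entire engine of the Singularity argument, and the open question is whether anything real follows the curve that far.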