When we think about ethics, it is traditionally in the context of decisions made by humans that affect other humans, animals or the environment. These decisions are coloured by the usual influences that permeate human consciousness: greed, fear, lust, altruism, logic, love and the full range of human subjectivity. One might think that, in the context of Artificial Intelligence, the greatest difference in discussing ethical issues would be the simulation of emotional responses and subjectivity, and how poorly those processes are understood.
Human Memory is Fallible
However, it is the fallible memory and subjective recall of the human mind that stands out most starkly against the absolute and objective recall of machine intelligence. Take the simple example of a minor car crash: the testimony of those involved, the accounts of eyewitness bystanders, and the evidence at the scene are used to reconstruct the circumstances that led to the crash. Who is at fault is determined probabilistically from this evidence. Given the simple nature and low stakes (no injuries or deaths and minimal damage), we are happy as a society to accept that this methodology will mostly produce a fair and just outcome.
Unfortunately, a closer examination of human memory makes it quite clear that it is poor at accurately recalling fast-moving, one-off incidents. Even worse, eyewitness testimony has repeatedly been proven inaccurate, with a tragic number of people incarcerated for crimes of which DNA evidence eventually proved them innocent (see "Do the eyes have it"). Finally, the cost and difficulty of reconstructing events from the physical evidence is very high; the law of entropy ensures this.
As a result, it is often the case that someone can claim they simply do not recall the facts, or recall them differently from the actual events, and there is really no way to validate the veracity of their statements. We all know the scene of the businessman being cross-examined about a shady dealing and claiming in his defence that he has no recollection of the discussion or event.
Plausible Deniability
People can similarly claim that their decisions, which seem unethical after the fact, were made innocently because complete information was unavailable. Hypothetically, a government statement that a drone strike killed innocent women and children along with the military target because the information that they were present was unavailable can shield the fact that a publicly unpalatable ethical decision was made: to sacrifice the innocent for the sake of the military objective. This is the proverbial "Plausible Deniability".
Nowhere to Hide
However, in the case of Artificial Intelligence the ability to selectively recall, or to modify the recollection, is diminished to a significant degree. The record of what happened is reliable and can be replayed without error, as long as the data was captured and stored. This is already starting to affect people in an indirect way. In many countries the use of a mobile phone while driving is illegal unless it is operating as a hands-free device in a cradle for calls or navigation. There have been cases where a driver has been found guilty of this offence after the fact because the data could be retrieved from the phone itself or from the network operator. Tragically, there have been single-vehicle accidents in which people were fatally injured and it was later shown that they were texting at the time of the crash.
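As a toy illustration of how mechanical that after-the-fact check is, the sketch below compares message records against the moment of the crash. The timestamps and the 30-second window are entirely made up for the example:

```python
from datetime import datetime, timedelta

# Hypothetical data: the crash time recovered from the vehicle, and
# outgoing-message timestamps retrieved from the phone or network operator.
crash_time = datetime(2016, 3, 14, 17, 42, 10)
message_times = [
    datetime(2016, 3, 14, 17, 30, 5),
    datetime(2016, 3, 14, 17, 41, 58),  # 12 seconds before impact
]

window = timedelta(seconds=30)  # how close counts as "at the time of the crash"
texting_at_crash = any(abs(t - crash_time) <= window for t in message_times)
print(texting_at_crash)  # True
```

Once the data exists, the question of whether the driver was texting stops being a matter of recollection and becomes a simple lookup.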
The Third Umpire
There has been much debate recently about the safety of self-driving vehicles and autonomous driving modes, producing a variety of hypothetical scenarios that will test our definition of the term "accident". When a self-driving vehicle is involved in a collision, the authorities will have available the vehicle's entire sensory spectrum of data for the moments leading up to the collision. In the case of a Google vehicle this may be as many as 10 different real-time data streams that can be replayed. This is much like the ultra slow-motion video replay, infra-red heat signatures and audio Snickometer now used in the game of Cricket to evaluate close calls. In the past, a batsman was on their honour to declare whether they had felt the vibration of just glancing a caught ball. Now this can be examined in detail across three separate spectra, which can reveal even the slightest of nicks.
Not an Accident
In a similar manner the data of a collision can be replayed, and the decision-making processes of the driverless vehicle's software can be analysed to determine whether any resulting death, injury or damage could have been avoided through different decision pathways. The engineering quality can be second-guessed, and the collision can no longer be called an "accident"; it may well be that we have to adopt the software industry's terms and call the crash an incident, a defect or a bug!
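A minimal sketch of what such a replay might look like, assuming a hypothetical black-box log format and policy interface (the names SensorFrame, shipped_policy and patched_policy are all invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SensorFrame:
    """One timestamped snapshot of every recorded stream (lidar, radar, cameras, ...)."""
    timestamp: float
    readings: Dict[str, object]

def replay(log: List[SensorFrame], policy: Callable[[SensorFrame], str]) -> List[str]:
    """Feed the recorded frames back through a decision policy, exactly as
    the vehicle experienced them, and collect the action chosen at each step."""
    return [policy(frame) for frame in log]

# Investigators could run the shipped policy and a candidate fix side by side
# over the same black-box log, then flag every frame where they diverge:
#
#   actual   = replay(black_box_log, shipped_policy)
#   proposed = replay(black_box_log, patched_policy)
#   diffs    = [f for f, a, p in zip(black_box_log, actual, proposed) if a != p]
```

Because the software is deterministic over the recorded inputs, every alternative decision pathway can be examined after the fact, which is precisely what makes the word "accident" so hard to sustain.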
Ethical Conundrums
Inevitably this has led to the hypothesising of ethical conundrums in which driverless vehicles face complex chains of events that pit least-harm outcomes against the safety of the passenger. These include scenarios such as a driverless car swerving to miss a drunk pedestrian who stumbles onto the road, but as a result hitting a truck and killing the car's occupants. Here the driverless car faces the conundrum of killing either the pedestrian or its occupants, by swerving or not swerving. What does it do? Which life is more valuable, the pedestrian's or the vehicle occupant's?
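To make the dilemma concrete, here is a deliberately naive Python sketch of a "least harm" rule. The action names and casualty counts are hypothetical; the point is that the code must still resolve the tie somehow:

```python
def least_harm_action(outcomes):
    """Pick the action whose predicted outcome harms the fewest people.

    `outcomes` maps each available action to predicted casualties per group.
    """
    return min(outcomes, key=lambda action: sum(outcomes[action].values()))

choice = least_harm_action({
    "swerve":      {"pedestrians": 0, "occupants": 1},  # miss the pedestrian, hit the truck
    "hold_course": {"pedestrians": 1, "occupants": 0},  # protect the pedestrian's life instead
})
print(choice)  # "swerve": both actions total one death, so min() simply
               # returns the first entry, breaking the tie by insertion order.
```

The tie-break is an arbitrary implementation detail standing in for an ethical judgement no one has explicitly made, yet whichever rule ships in the software is now a recorded, auditable choice.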
These ethical conundrums are not just academic: depending on the choice made by the driverless vehicle, the family of one or the other victim could litigate against the manufacturer of the decision logic over the validity of the choice the vehicle made in either circumstance. All of this is made possible by the detailed and comprehensive recording of every sensory input accessible to the driverless vehicle.