[box type="info" align="" class="" width=""]César A. Hidalgo, Chair, Artificial and Natural Intelligence Institute (ANITI), University of Toulouse[/box]
- Humans have historically taken a long time to trust each new wave of machine technology.
- In scenarios involving physical harm, people tend to see machines as more harmful than humans performing the same actions.
- It’s important that we combine our interest in how machines should behave with an understanding of how we judge them.
Recently, voting machines have been on the receiving end of controversy. And yet people’s aversion to machines is nothing new.
Some 500 years ago, printing was being demonised as a satanic device. Today’s equivalent — artificial intelligence — is routinely criticised as a source of unemployment and bias.
But is every bit of anger justified?
Scholars studying people’s reactions to machines are beginning to learn when and why we judge humans and machines differently.
Imagine a car that swerves to avoid a falling tree, and in doing so runs over a pedestrian. Do people judge this action differently if they believe it was the action of a self-driving car as opposed to that of a human?
In my latest book, How Humans Judge Machines, my co-authors and I asked over 6,000 Americans to react to scenarios just like this one, using the setup of a clinical trial.
Half of our subjects saw only scenarios involving human actions, while the other half evaluated only scenarios involving the actions of machines. This allowed us to explore when and why people judge humans and machines differently.
Bad machine, good human
In the aforementioned car accident, people judged the action of the self-driving car as more harmful and immoral, even though the action was exactly the same as the one performed by the human.
In another scenario we consider an emergency response system reacting to a tsunami. Some people were told that the town was successfully evacuated. Others were told that the evacuation effort failed.
Our results showed that in this case machines also got the short end of the stick. In fact, if the rescue effort failed, people evaluated the action of the machine negatively and that of the human positively.
The data showed that people rated the action of the machine as significantly more harmful and less moral, and also reported wanting to hire the human, but not the machine.
Do machines always draw the short straw?
For a long time, scholars have known that people have an aversion to algorithms. Even when algorithms are better at forecasting than humans, people tend to choose human forecasters. This phenomenon is known as algorithm aversion, and it can be costly in a world in which small differences in predictive accuracy matter.
In a recent paper, Berkeley Dietvorst, Joseph Simmons and Cade Massey explored algorithm aversion using five experiments in which individuals could tie a monetary reward to predictions made by themselves, another person or a model.
In some experiments, people knew only the aggregate performance of the predictions, which tended to favour the machines. In others, people could also observe the individual predictions.
The upshot? People tended to avoid algorithms more when they witnessed them err. That is, people’s preference for machines decreased when they saw the errors in addition to the aggregate result.
This finding is interesting in a world in which people often demand transparency as a fundamental pillar of ethical AI.
While there is a need for machines to be transparent, it must be complemented by an understanding that transparency may ultimately bias people against machines. If we fail to account for this nuance, transparency may push us to reject machines when they are actually a source of improvement.
But there are cases in which people rate machines higher than humans, albeit only slightly. These are moral scenarios involving violations of fairness and loyalty, which are also perceived to be highly intentional when performed by a human.
Consider a robot and a human, both writing lyrics for a record label. Imagine an investigation discovers that these lyrics plagiarise the work of lesser-known artists. When we presented people with this scenario, we found that they judged the action of the human as more harmful and less moral than that of the machine.
We obtained similar results for other scenarios involving fairness, such as biased human resource screenings and university admission systems.
People certainly do not like biased humans or machines, but when we test this repudiation experimentally, people rate human biases as slightly more harmful and less moral than those of machines.
We are shifting from an era of imposing norms on machine behaviour to one of discovering laws which do not tell us how machines should behave, but how we judge them. And the first rule is powerful and simple: people judge humans by their intentions and machines by their outcomes.
So, can we trust machines? Do we even want to? A blanket answer to such bold questions may not be possible, but current research is starting to give us some guidance.
César A. Hidalgo is the author of How Humans Judge Machines, a peer-reviewed book by MIT Press that is free to read at judgingmachines.com. He holds a Chair at the Artificial and Natural Intelligence Institute (ANITI) at the University of Toulouse, and appointments at the University of Manchester and Harvard University.