Exploiting the machine without guilt
The study, ‘Algorithm exploitation: humans are keen to exploit benevolent AI’, published on 3 June in the journal iScience, found that on first encounter people place the same level of trust in an AI as in a human: most expect to meet a partner who is ready to cooperate.
The difference comes afterwards. People are much less willing to reciprocate with an AI, and instead exploit its benevolence for their own benefit. Returning to the traffic example, a human driver would give way to another human but not to a driverless car.
The study identifies this unwillingness to compromise with machines as a new challenge to the future of human-AI interactions.
‘We put people in the shoes of someone who interacts with an artificial agent for the first time, as might happen on the road,’ explains Dr Jurgis Karpus, a behavioural game theorist and a philosopher at LMU and the first author of the study. ‘We modelled different types of social encounters and found a consistent pattern. People expected artificial agents to be as cooperative as fellow humans. However, they did not return their benevolence as much and exploited the AI more than humans.’
With perspectives from game theory, cognitive science, and philosophy, the researchers found that ‘algorithm exploitation’ is a robust phenomenon. They replicated their findings across nine experiments with nearly 2,000 human participants.
Each experiment examined a different kind of social interaction and let participants decide whether to compromise and cooperate or to act selfishly. Their expectations of the other player were also measured. In the well-known Prisoner’s Dilemma, people must trust that the other party will not let them down. Participants embraced that risk with humans and AI alike, but betrayed the AI’s trust far more often in order to gain more money.
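To make the incentive at work concrete, here is a minimal sketch of a one-shot Prisoner’s Dilemma in Python. The payoff values are illustrative assumptions, not the monetary stakes used in the study; the point is only that defecting against a cooperative partner yields the highest individual payoff, which is the temptation participants gave in to far more often when the partner was an AI.

```python
# Illustrative one-shot Prisoner's Dilemma. The payoff numbers below are
# assumptions chosen for demonstration, not the study's actual stakes.
PAYOFFS = {
    # (my_move, other_move): (my_payoff, other_payoff)
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I trusted and was exploited
    ("defect",    "cooperate"): (5, 0),  # I exploited the other's trust
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def my_payoff(my_move: str, other_move: str) -> int:
    """Return my monetary payoff for a given pair of moves."""
    return PAYOFFS[(my_move, other_move)][0]

# If I expect my partner (human or AI) to cooperate, defecting pays
# more than cooperating -- the temptation at the heart of the game.
assert my_payoff("defect", "cooperate") > my_payoff("cooperate", "cooperate")
```

In this structure, exploiting a partner you expect to cooperate is individually profitable; the study’s finding is that people resist this temptation with humans far more than with machines.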
‘Cooperation is sustained by a mutual bet: I trust you will be kind to me, and you trust I will be kind to you. The biggest worry in our field is that people will not trust machines. But we show that they do!’ notes Professor Bahador Bahrami, a social neuroscientist at LMU and one of the senior researchers on the study. ‘They are fine with letting the machine down, though, and that is the big difference. People do not even report much guilt when they do,’ he adds.