When an online conversation derails, participants might look to artificial intelligence to help get back on track, Cornell research shows.
People having difficult conversations said they trusted artificially intelligent systems – the “smart” reply suggestions in text messages – more than the people they were talking to, according to the study, “AI as a Moral Crumple Zone: The Effects of Mediated AI Communication on Attribution and Trust,” published online in the journal Computers in Human Behavior.
The research has new relevance now that most daily conversations are taking place online because of social distancing.
“We find that when things go wrong, people take the responsibility that would otherwise have been designated to their human partner and designate some of that to the artificial intelligence system,” said Jess Hohenstein, M.S. ’16, M.S. ’19, a doctoral student in the field of information science and the paper’s first author. “This introduces a potential to take AI and use it as a mediator in our conversations.”
For example, the algorithm could notice things are going downhill by analyzing the language used, and then suggest conflict-resolution strategies, Hohenstein said.
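One way such a mediator might work, purely as an illustration of the idea Hohenstein describes, is a simple heuristic that scores recent messages for negative tone and offers a de-escalating reply once a threshold is crossed. The word list, threshold, and suggested wording below are invented for this sketch and are not drawn from the study or from any existing messaging system.

```python
# Illustrative sketch only: a toy "conversation mediator" of the kind the
# researchers describe, NOT the system used in the study. The word list,
# threshold, and suggested reply are invented for this example.

NEGATIVE_WORDS = {"never", "ridiculous", "wrong", "stupid", "pointless"}

def negativity_score(message):
    """Crude proxy for tone: fraction of words that appear in the negative list."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(word.strip(".,!?") in NEGATIVE_WORDS for word in words) / len(words)

def suggest_reply(recent_messages, threshold=0.2):
    """If the last few messages trend negative, offer a de-escalating smart reply."""
    average = sum(negativity_score(m) for m in recent_messages) / len(recent_messages)
    if average >= threshold:
        return "I see your point. Can you walk me through your thinking?"
    return None  # tone looks fine; no intervention needed

if __name__ == "__main__":
    chat = ["That plan is ridiculous.", "You're wrong, it will never work."]
    print(suggest_reply(chat))  # prints the de-escalating suggestion
```

A real system would need a far more robust model of conversational tone than a word list, but the basic loop – monitor the exchange, detect escalation, surface a repair move – is the mediator role the researchers have in mind.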
The study was an attempt to explore the myriad ways – both subtle and significant – that AI systems such as smart replies are altering how humans interact. Choosing a suggested reply that’s not quite what you intended to say, but saves you some typing, might be fundamentally altering the course of your conversations – and your relationships, the researchers said.
“Communication is so fundamental to how we form perceptions of each other, how we form and maintain relationships, or how we’re able to accomplish anything working together,” said co-author Malte Jung, assistant professor of information science and director of the Robots in Groups lab, which explores how robots alter group dynamics.
“This study falls within the broader agenda of understanding how these new AI systems mess with our capacity to interact,” Jung said. “We often think about how the design of systems affects how we interact with them, but fewer studies focus on the question of how the technologies we develop affect how people interact with each other.”
For the study, the researchers used Google Allo, a now-defunct AI messaging app that suggests smart replies based on an algorithm and the conversation history. The 113 college students who participated were told they’d be chatting with another participant to complete a task, but their partners were actually controlling the dynamics of the conversations using pre-written scripts. The task required the pairs to rank five out of nine people for spots on a lifeboat.
There were four versions of the conversation: successful with AI smart replies, unsuccessful with AI, successful without AI, and unsuccessful without AI. After participants completed or failed to complete the task, they were asked to assign a percentage of responsibility for the task’s outcome to themselves, to their partner and, in the AI-mediated conversations, to the AI system. They were also asked to rate their level of trust in their partner and in the AI.
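To make the design concrete, the sketch below encodes the four conversation conditions and the post-task responsibility measure as simple Python data. The variable names and the example numbers are hypothetical; only the structure – a two-by-two design plus responsibility percentages that sum to 100 – comes from the description above.

```python
# Hypothetical encoding of the study design described above; the example
# responsibility split is made up and shown only to illustrate the measure.
from itertools import product

# Four conversation versions: (AI vs. no AI) x (successful vs. unsuccessful)
CONDITIONS = list(product(["AI smart replies", "no AI"],
                          ["successful", "unsuccessful"]))

def responsibility_is_valid(self_pct, partner_pct, ai_pct=0.0):
    """Post-task attributions of responsibility should total 100 percent."""
    return abs(self_pct + partner_pct + ai_pct - 100.0) < 1e-9

print(CONDITIONS)                                                       # the 2x2 design
print(responsibility_is_valid(self_pct=40, partner_pct=35, ai_pct=25))  # True
```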
They found that the AI affected trust ratings whether the conversations succeeded or failed. In successful conversations without AI, participants’ trust in their partners averaged 4.8 out of 6; in AI-mediated successful conversations, it was 5.76. In unsuccessful conversations, participants trusted the AI (3.13) slightly more than their partners (3.04).
“There’s a theory that we are already assigning human characteristics to computers, so it follows that when we’re interacting with a human mediated by this artificially intelligent system, we would assign it some human characteristics,” Hohenstein said. “But still, these results are not what we expected.”
In addition to shedding light on how people perceive and interact with computers, the study offers possibilities for improving human communication – with subtle guidance and reminders from AI.
Hohenstein and Jung said they sought to explore whether AI could function as a “moral crumple zone” – the technological equivalent of a car’s crumple zone, designed to deform in order to absorb the crash’s impact.
“There’s a physical mechanism in the front of the car that’s designed to absorb the force of the impact and take responsibility for minimizing the effects of the crash,” Hohenstein said. “Here we see the AI system absorb some of the moral responsibility.”
The research was partly supported by the National Science Foundation.
This article, written by Melanie Lefkowitz, originally appeared at the Cornell Chronicle on March 30, 2020.