Researchers have used AI to improve the quality and civility of online discussions of polarizing topics by offering users suggestions for rephrasing their comments before they post them. They say that, used properly, AI could help create a kinder, safer digital landscape.
Online conversations now play a central role in public discourse. But comment sections on social media platforms and digital news outlets are rife with discussions that have devolved into arguments, threats, and name-calling, particularly when the topic is divisive.
Now, researchers from Brigham Young University (BYU) and Duke University have developed AI that can moderate online chats, improving their quality and promoting civility.
They recruited 1,574 participants for their field experiment and asked them to engage in an online discussion about gun regulation in the US, a divisive issue that is often raised in the context of political debate. Each participant was matched with someone with an opposing view about gun policies.
Once matched, conversation pairs were randomly assigned to the treatment or control condition, and the partners proceeded to have a conversation. In a treated conversation, one participant received three suggestions from GPT-3 for rephrasing their message before sending it. That participant could send one of the three AI-suggested alternatives, send their original message, or edit any of the messages before sending.
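The article describes the mechanism but not the implementation. As a purely illustrative sketch, the suggestion step might look something like the following Python, which asks a chat-completion model for three more civil rewordings of a draft message. The model name, prompt wording, and suggest_rephrasings helper are assumptions for illustration, not the study's actual code (the study itself used GPT-3).

    # Hypothetical sketch only -- not the study's published code.
    # Assumes the openai Python package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    def suggest_rephrasings(draft: str, n: int = 3) -> list[str]:
        """Ask a language model for n more civil rephrasings of a draft message."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name for this sketch
            messages=[
                {"role": "system",
                 "content": ("Rephrase the user's message to be polite and calm "
                             "without changing its substantive point. "
                             "Return only the rephrased message.")},
                {"role": "user", "content": draft},
            ],
            n=n,              # request three alternative completions
            temperature=0.9,  # encourage variety across the suggestions
        )
        return [choice.message.content for choice in response.choices]

    # As in the experiment, the sender keeps the final say: pick a suggestion,
    # keep the original message, or edit freely before sending.
    draft = "That's a ridiculous take and you know it."
    for i, suggestion in enumerate(suggest_rephrasings(draft), start=1):
        print(f"{i}. {suggestion}")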
On average, 12 messages were sent in each conversation, with a total of 2,742 AI-generated rephrasings suggested. Participants accepted AI-suggested rephrasings two-thirds of the time. Chat partners of individuals who implemented one or more AI rephrasing suggestions reported significantly higher conversation quality and were more willing to listen to the perspectives of their political opponents.
“We found the more often the rephrasings were used, the more likely participants were to feel like the conversation wasn’t divisive and that they felt heard and understood,” said David Wingate, one of the study’s co-authors.
The researchers say their findings suggest this scalable solution could combat the toxic culture that has pervaded the internet. They say it would also be easier to implement than, say, professional training sessions on online civility, which are limited in scope and availability; an AI intervention could be deployed broadly across digital channels.
Ultimately, the researchers say, the work shows that, used properly, AI can play an important role in creating a more positive online landscape, fostering discussions that are empathetic and respectful.
“My hope is that we’ll continue to have more BYU students build pro-social applications like this and that BYU can become a leader in demonstrating ethical ways of using machine learning,” Wingate said. “In a world that is dominated by information, we need students who can go out and wrangle the world’s information in positive and socially productive ways.”
The study was published in the journal PNAS.
Source: Brigham Young University
I come from a culture where heated debates are the norm (I am French), which means that learning how to debate is a strong part of our education from a young age: how to truly listen to what the other person is saying, analyse it, and then answer back. I don't feel auto-correction is the solution, as the issue here is not vocabulary, style, or even tone. What is truly at stake is critical thinking, and what we need to develop further are better tools and educational materials/solutions to support that.