Wellness & Healthy Living

Legislation is effective at moderating harmful social media content

Researchers have found that government legislation can effectively moderate harmful online content

A new study has found that government legislation such as that recently introduced in the European Union can be effective at moderating harmful social media content, even when it comes to fast-paced platforms such as X (formerly Twitter). The findings have implications for policymakers looking to introduce similar regulations.

Social media platforms are the new digital public squares. They’re great for staying updated, building personal and professional relationships, and finding and sharing information. But they also have a dark side: harmful content.

Harmful social media content can take many forms, including the spreading of mis- and disinformation, posts about terrorism and those that glamorize suicide, hate speech against women, immigrants and minority groups, and viral challenges that put young people’s health at risk.

While several countries are considering regulating social media content to reduce the likelihood of harm, the European Union (EU) took the step of introducing the Digital Services Act in November 2022 to combat the dissemination of dangerous content. A new study by researchers from the University of Technology Sydney and the Swiss Federal Institute of Technology in Lausanne has looked at the effectiveness of such legislation.

“Social networks such as Facebook, Instagram and Twitter, now X, don’t have much incentive to fight harmful content, as their business model is based on monetizing attention,” said Marian-Andrei Rizoiu, corresponding author of the study. “Elon Musk acquired Twitter with the stated goal of preserving free speech for the future. However, alongside free speech, mis- and disinformation spreads and prospers in this unregulated space.”

The EU legislation has put in place notice and action mechanisms to report harmful online content and requires the appointment of “trusted flaggers,” subject matter experts who are responsible for detecting harmful content. Once the content is flagged, platforms are required to remove it within 24 hours. However, social media is best known for its ‘virality’, the ability for content to spread quickly and widely. So, the major question for the researchers in the current study was: can this legislation be effective?

“We’ve seen examples on Twitter where the sharing of a fake image of an explosion near the Pentagon caused the US share market to dip in a matter of minutes, so there were doubts about whether the new EU regulations would have an impact,” Rizoiu said.

The researchers used information spread modeling to analyze the dynamics of content dissemination. To examine the relationship between moderation time and harm reduction, they looked at two measures: potential harm (the number of harmful ‘offspring’ posts a piece of content generates) and content half-life (the time it takes for half of those offspring to be generated).
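
The paper's exact model isn't detailed here, but the two measures are easy to illustrate. The Python sketch below is a minimal, hypothetical example: given the timestamps of the ‘offspring’ posts a piece of content has generated, it reports potential harm as the offspring count and the content half-life as the time by which half of those offspring had appeared. The `offspring_times_hours` data is invented for illustration.

```python
def potential_harm(offspring_times_hours: list[float]) -> int:
    """Potential harm, measured here simply as the number of harmful
    'offspring' posts a piece of content has generated."""
    return len(offspring_times_hours)


def content_half_life(offspring_times_hours: list[float]) -> float:
    """Time (in hours) by which half of all offspring had been generated."""
    times = sorted(offspring_times_hours)
    half_index = (len(times) + 1) // 2 - 1  # post that completes the first half
    return times[half_index]


# Invented offspring timestamps, in hours after the original post
offspring_times_hours = [0.1, 0.2, 0.3, 0.5, 0.9, 1.5, 3.0, 7.0, 15.0, 30.0]
print(potential_harm(offspring_times_hours))     # 10 offspring posts
print(content_half_life(offspring_times_hours))  # 0.9 hours
```

On real platforms these quantities would be estimated from reshare cascades via the spread modeling described above, rather than observed directly.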

Previous research has determined the half-life of social media posts on different platforms. X has the shortest content half-life at just 24 minutes, followed by Facebook at 105 minutes, Instagram at 20 hours, and LinkedIn at 24 hours, with YouTube’s half-life coming in at 8.8 days.

“A lower half-life means that most harm happens right after the content is posted, and content moderation needs to be performed quickly to be effective,” said Rizoiu. “We found the reaction time required for content moderation increases with both the content half-life and potential harm.”
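
To see why, one can make the simplifying assumption that offspring posts arrive at an exponentially decaying rate set by the platform's half-life. The sketch below, using the half-lives quoted above, estimates what fraction of a post's offspring would still be preventable by a takedown issued after a given delay; it is a back-of-the-envelope illustration, not the authors' model.

```python
import math

# Platform content half-lives quoted in the article, converted to hours
HALF_LIFE_HOURS = {
    "X / Twitter": 24 / 60,
    "Facebook": 105 / 60,
    "Instagram": 20.0,
    "LinkedIn": 24.0,
    "YouTube": 8.8 * 24,
}


def preventable_fraction(delay_hours: float, half_life_hours: float) -> float:
    """Fraction of expected offspring posts that would appear *after* the
    moderation delay, assuming an exponentially decaying reshare rate."""
    decay_rate = math.log(2) / half_life_hours
    return math.exp(-decay_rate * delay_hours)


# How much of the spread a 24-hour takedown window could still prevent
for platform, half_life in HALF_LIFE_HOURS.items():
    frac = preventable_fraction(24.0, half_life)
    print(f"{platform:12s}: {frac:6.1%} of offspring come after 24 hours")
```

Under this toy assumption, a takedown issued 24 hours after posting could still prevent most of the spread on a slow platform like YouTube, but almost none of it on X, which is why the study ties the required reaction time to both half-life and potential harm.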

The researchers found that even with a 24-hour turnaround, government-mandated moderation is likely to be effective in limiting harm, but that this depends on some key factors.

“The key to successful regulation includes appointing trusted flaggers, developing an effective tool for reporting harmful content across platforms, and correctly calculating the necessary moderation reaction time,” said Rizoiu.

The researchers say their study has implications for policymakers who want to enact similar legislation. The US and UK have conducted public surveys to ascertain the extent of harmful social media content and are considering its regulation.

In the UK, early findings from a national survey of public attitudes towards AI and data-driven technology, published in March 2023, found that nearly 90% of people aged 18 to 34 had witnessed or received harmful content online at least once. Two-thirds of all UK adults reported being exposed to hate speech, false information, fake images and bullying at least once, and more than 40% of 18-to-24-year-olds had been exposed to this type of harmful content “many times.”

In May this year, the US Surgeon General issued an advisory about the effects of social media on youth health, reporting that 64% of adolescents had “often” or “sometimes” been exposed to hate-based content through social media. The advisory recommended that tech companies prioritize safety and health, especially for children.

Australia’s Online Safety Act came into effect in January 2022, providing the eSafety Commissioner with the power to enforce regulations that prohibit, among other things, cyberbullying, image-based abuse, and illegal, harmful and violent content.

In this context, it’s encouraging that the current study demonstrates how legislation can be effective in moderating online harms.

“By understanding the dynamics of content spread, optimizing moderation efforts, and implementing regulations like the EU’s Digital Services Act, we can strive for a healthier and safer digital public square where harmful content is mitigated, and constructive dialog thrives,” said Rizoiu.

The study was published in the journal PNAS.

Source: University of Technology Sydney
