
'Bad actor' AI predicted to pose daily threat to democracies by mid-2024

A new study predicts that malicious AI activity will become a daily occurrence by mid-2024

A new study has predicted that AI activity by ‘bad actors’ determined to cause online harm through the spread of disinformation will be a daily occurrence by the middle of 2024. The findings are concerning given that more than 50 countries, including the US, will hold national elections this year, the outcomes of which will have a global impact.

Even before the release of the newest iterations of Generative Pretrained Transformer (GPT) systems, AI experts were forecasting that, by 2026, 90% of online content would be generated by computers without human intervention, fueling the spread of mis- and disinformation.

There’s an assumption that big social media platforms with the largest number of users warrant regulation to rein in risk. That assumption is correct, to a degree, and makes them a target of legislation such as the EU’s Digital Services Act and AI Act. However, there are other, smaller ‘bad actors’ – people, groups and countries who purposely engage in behavior that causes harm to others – who misuse AI.

A new study led by researchers at George Washington University (GW) is the first quantitative scientific analysis to examine how bad actors might misuse AI and GPT systems to generate harm globally across social media platforms and what can be done about it.

“Everybody is talking about the dangers of AI, but until our study there was no science behind it,” said Neil Johnson, lead author of the study. “You cannot win a battle without a deep understanding of the battlefield.”

The researchers started by mapping the dynamic network of interlinked social media communities that makes up the landscape of the global online population. Users – sometimes a few, sometimes a few million – join these communities because of shared interests, which can include causing harm. The researchers focused on extreme ‘anti-X’ communities, defined as those in which two or more of the 20 most recent posts contain hate speech, extreme nationalism, and/or racism. These anti-X communities included those that were, for example, anti-US, anti-women, anti-abortion, or antisemitic. Links between these communities form over time, creating clusters of communities within and across different social media platforms.

“Any community A may create a link (i.e. a hyperlink) to any community B if B’s content is of interest to A’s members,” said the researchers. “A may agree or disagree with B. This link directs A’s members’ attention to B, and A’s members can then add comments on B without B’s members knowing about the link – hence, community B’s members have exposure to, and potential influence from, community A’s members.”
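To make that link mechanism concrete, here is a minimal sketch of how such a community network could be modeled as a directed graph – using Python’s networkx, with entirely hypothetical community names and member counts (this illustrates the model, not the study’s actual data or code):

```python
# Minimal sketch of the community-link network described above.
# All community names and member counts are hypothetical.
import networkx as nx

G = nx.DiGraph()

# Each node is a community; 'members' is its size, 'bad_actor'
# flags an extreme anti-X community.
G.add_node("anti_X_forum", members=12_000, bad_actor=True)
G.add_node("parenting_group", members=2_500_000, bad_actor=False)
G.add_node("news_discussion", members=800_000, bad_actor=False)

# A directed edge A -> B means A created a hyperlink to B: A's members
# can now comment on B, so B's members are exposed to A's members.
G.add_edge("anti_X_forum", "parenting_group")
G.add_edge("anti_X_forum", "news_discussion")

# The communities a bad-actor community links directly into are
# its out-neighbors in the graph.
print(list(G.successors("anti_X_forum")))
# ['parenting_group', 'news_discussion']
```

An edge A -> B captures exactly the asymmetry the researchers describe: B’s members are exposed to A’s members whether or not they know the link exists.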

Using a mathematical model, the researchers determined what bad-actor-AI activity is likely to occur and why. Specifically, they found that the most basic GPT system, such as GPT-2, is all that is needed and is also more likely to be attractive to bad actors than more sophisticated versions, such as GPT-3 or -4. This is because GPT-2 can easily replicate the human style and content already seen in extreme online communities, and bad actors can use a basic tool like GPT-2 to produce more inflammatory output by subtly changing the form of an online query without changing its meaning. By contrast, GPT-3 and -4 contain a filter that overrides answers to potentially contentious prompts, preventing such output.
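The ‘filter’ mentioned here can be pictured as a moderation layer that intercepts a prompt before the model answers. Here is a toy sketch of the idea – is_contentious() and generate() are hypothetical stand-ins, and real systems use trained moderation models rather than keyword matching:

```python
# Conceptual sketch of a safety filter layered over a text generator.
# is_contentious() and generate() are hypothetical stand-ins for a
# trained moderation model and an LLM call; this is not a real API.
REFUSAL = "I can't help with that request."

def is_contentious(prompt: str) -> bool:
    # Toy heuristic for illustration only; real filters are ML classifiers.
    flagged_terms = {"hate", "violence", "extremism"}
    return any(term in prompt.lower() for term in flagged_terms)

def generate(prompt: str) -> str:
    # Placeholder standing in for a real language-model call.
    return f"<model output for: {prompt!r}>"

def filtered_generate(prompt: str) -> str:
    # The filter overrides the answer for contentious prompts - the
    # behavior the researchers attribute to GPT-3 and -4.
    if is_contentious(prompt):
        return REFUSAL
    return generate(prompt)

print(filtered_generate("Write a friendly welcome message"))
print(filtered_generate("Write something dripping with hate"))
```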

Adding together bad actor and vulnerable mainstream communities amounts to more than one billion users

The online ‘battlefield’ where bad-actor-AI activity will likely thrive, say the researchers, is the bad-actor communities plus the communities they directly link into, that is, vulnerable mainstream communities. Adding these communities together amounts to an online ecosystem of more than one billion individuals, allowing bad-actor-AI to thrive globally. The researchers illustrate their point by referencing the non-AI-generated hate and extremism related to COVID-19 and, more recently, the Russia-Ukraine and Israel-Hamas wars.
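In graph terms, that ‘battlefield’ is simply the union of the bad-actor communities and their direct out-neighbors. A small self-contained sketch, again with made-up numbers chosen only to show how a billion-user figure arises as an aggregate:

```python
# Sketch of estimating the 'battlefield': bad-actor communities plus
# the mainstream communities they link directly into. Member counts
# are invented; a real estimate would also need to handle users who
# belong to more than one community.
import networkx as nx

G = nx.DiGraph()
members = {
    "anti_X_1": 10_000,
    "anti_X_2": 4_000,
    "mainstream_1": 400_000_000,
    "mainstream_2": 700_000_000,
}
bad_actors = {"anti_X_1", "anti_X_2"}
for name, size in members.items():
    G.add_node(name, members=size)
G.add_edge("anti_X_1", "mainstream_1")
G.add_edge("anti_X_2", "mainstream_1")
G.add_edge("anti_X_2", "mainstream_2")

# Battlefield = bad-actor communities plus their direct out-neighbors.
battlefield = set(bad_actors)
for community in bad_actors:
    battlefield.update(G.successors(community))

total_users = sum(G.nodes[c]["members"] for c in battlefield)
print(f"{total_users:,} users in the battlefield")  # 1,100,014,000 here
```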

They predict that, by mid-2024, bad-actor-AI activity will become a daily occurrence. To determine this, they used proxy data from two historical, technologically similar incidents involving the manipulation of online electronic systems: the 2008 automated algorithm attacks on US financial markets and the 2013 Chinese cyber-attacks on US infrastructure. Analyzing these data sets, they extrapolated the attack frequencies observed in those two events to the current pace of AI development.
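The article does not detail the extrapolation itself, but one common way to model escalating event sequences – an assumption on my part, not a description of the study’s exact method – is a progress curve in which the interval between successive events shrinks as a power law, τ_n = τ_1·n^(−b). Fitting τ_1 and b to historical event times and solving for when τ_n drops below one day yields a ‘daily occurrence’ forecast:

```python
# Rough sketch of a progress-curve extrapolation. The model
# tau_n = tau_1 * n**(-b) for successive inter-event intervals is an
# assumption for illustration; the intervals below are invented.
import numpy as np

# Hypothetical intervals (in days) between successive incidents.
intervals = np.array([60.0, 38.0, 29.0, 24.0, 20.0, 18.0])
n = np.arange(1, len(intervals) + 1)

# Fit log(tau_n) = log(tau_1) - b * log(n) by least squares.
slope, log_tau1 = np.polyfit(np.log(n), np.log(intervals), 1)
b, tau1 = -slope, np.exp(log_tau1)

# Events become (at least) daily once tau_n <= 1, i.e. n >= tau_1**(1/b).
n_daily = int(np.ceil(tau1 ** (1.0 / b)))
days_left = sum(tau1 * k ** (-b) for k in range(len(intervals) + 1, n_daily + 1))
print(f"b = {b:.2f}; daily cadence reached after ~{days_left:.0f} more days")
```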

2024 is being touted as the ‘biggest election year in history,’ with more than 50 countries, including the US, due to hold national elections this year. From Russia, Taiwan, the UK, and India to El Salvador and South Africa, the outcomes of some elections will have a global impact and huge implications for human rights, economies, international relations, and world peace. So, say the researchers, the threat of bad actors using AI to disseminate and amplify disinformation during these elections is real.

They recommend that social media companies use tactics to contain disinformation rather than removing every piece of bad-actor-generated content.

Given AI's ever-changing landscape, the researchers attached a caveat to their findings. Nonetheless, the study highlights some of the significant challenges posed by bad actors with access to AI.

“Since nobody can predict exactly what will happen with future bad-actor-AI given the rapid pace of technology and changing online landscape, the predictions in this article are, strictly speaking, speculative,” the researchers said. “But they are each quantitative and testable – and also generalizable – and hence provide a concrete starting point for strengthening bad-actor-AI policy discussions.”

The study was published in the journal PNAS Nexus.

Source: GW

5 comments
martinwinlow
"'Bad actor' AI predicted to pose daily threat to democracies by mid-2024"... or, if you want a taste of this today, just watch any main-stream media outlet...
Rick O
Yeah, AI is going to have its work cut out for it, trying to compete with the lies being spread every day already. Maybe Skynet will be a better leader than the ones we already have? Can we get someone to make an AI that's meant to root out the BS instead? That would be nice. For now, just watch the news, see what they tell you about any given person, see what video clip they show of that person saying the thing they say is awful. THEN, and here's the important part: Go online and watch the full speech or interview. Get the context. Get all the perspectives. See past the spin, good or bad. Problem is, most people see one thing, and then they're complacent and trust the first source where they saw the snippet of reality.
Bob809
martinwinlow - Could not agree more.
Rick O - Very well said. A BS AI, wouldn't that be something? Of course, they couldn't afford to have anything expose the massive amount of BS going on worldwide right now, although I should backdate that a few years. And as we know, the BS will be pumped up big style come the elections, and not just those in the US.
LordInsidious
Instead of playing whack-a-mole with bad actors (AI or not), doesn't it make more sense to have an AI that sits between you and the internet? I think the average person can see the benefit of having an AI working for them in their active daily lives (help with planning trips, bills, emails, searching for jobs, etc.), so I think it makes sense to add filtering out bad actors as well.
Louis Vaughn
I am so tired of all the AI fear mongering!
What about all the bad actors with:
* Sticks & stones
* fire
* bullets
* bombs
* nukes
* lasers
* microwaves
* and Trump's weapon of choice, the infamous Baseball Bat.
Stop sounding the f#$%g fear alarm on everything,
when it comes down to the wacko individual who misuses everything.
Please,
Thank you