
AntiFake AI tech could keep your voice from being deepfaked

AntiFake reportedly keeps deepfake systems from replicating people's recorded voices

One of the more sinister functions of deepfake AI systems is the ability to replicate a person's voice from even a short recording. A new software tool known as AntiFake, however, could help keep that from happening.

Among other things, deepfaked versions of people's voices – along with faked videos – can be used to make it sound and look as if a politician or celebrity said something that they never actually said.

There have also been cases of people receiving phone calls in which the deepfaked voice of a friend or family member asked them to send money urgently due to a supposed emergency. Replicated voices could additionally be used to bypass voice-recognition security systems.

While there are already technologies designed to determine if supposedly legitimate voices are deepfakes, AntiFake is reportedly one of the very first systems to keep such fakes from being produced in the first place. In a nutshell, it does so by making it much harder for AI systems to read the crucial vocal characteristics in recordings of real people's voices.

"The tool uses a technique of adversarial AI that was originally part of the cybercriminals’ toolbox, but now we’re using it to defend against them," said inventor Ning Zhang, an assistant professor of computer science and engineering at Washington University in St. Louis. "We mess up the recorded audio signal just a little bit, distort or perturb it just enough that it still sounds right to human listeners, but it’s completely different to AI."

This means that even if a deepfake is created from an AntiFake-altered recording, it won't sound anything like the speaker's actual voice. Tests conducted so far have shown the technology to be over 95% effective at preventing the synthesis of convincing deepfakes.

"While I don’t know what will be next in AI voice tech – new tools and features are being developed all the time – I do think our strategy of turning adversaries’ techniques against them will continue to be effective," said Zhang.

The AntiFake code is freely available to anyone who wants it.

Source: Washington University in St. Louis

1 comment
byrneheart
The sooner this is an app that automatically starts with any phone or video call and a browser and desktop feature for any voice recording, the better. It's not just celebrities whose livelihoods depend on being an authentic representation of themselves.