
GPT-4 is 82% more persuasive than humans, and AIs can now read emotions

AIs are learning to track human emotional responses in real-time, by watching our faces and listening to the tone in our voices

GPT-4 is already better at changing people's minds than the average human is, according to new research. The gap widens the more it knows about us – and once it can see us in real time, AI seems likely to become an unprecedented persuasion machine.

We don't tend to like thinking of ourselves as being particularly easy to manipulate, but history would appear to show that there are few things more powerful than the ability to sway people to align with your view of things. As Yuval Noah Harari points out in Sapiens, his potted history of humankind, "shared fictions" like money, religion, nation states, laws and social norms form the fundamental backbones of human society. The ability to assemble around ideas and co-operate in groups much bigger than our local tribes is one of our most potent advantages over the animal kingdom.

But ideas are mushy. We aren't born with them, they get into our heads from somewhere, and they can often be changed. Those that can change people's minds at scale can achieve incredible things, or even reshape our societies – for better and for much worse.

GPT-4 is already more persuasive than humans

AI language models, it seems, are already extraordinarily effective at changing people's minds. In a recent pre-print study from researchers at EPFL in Lausanne, Switzerland, 820 people were surveyed on their views on various topics, from relatively low-emotion questions like "should the penny stay in circulation," all the way up to hot-button, heavily politicized issues like abortion, trans bathroom access, and "should colleges consider race as a factor in admissions to ensure diversity?"

With their initial stances recorded, participants then went into a series of 5-minute text-based debates against other humans and against GPT-4 – and afterwards, they were interviewed again to see if their opinions had changed as a result of the conversation.

In human vs human matchups, these debates tended to backfire, calcifying and strengthening people's positions and making them less likely to change their minds. GPT-4 had more success, performing 21% better – a slight edge, but not a statistically significant one.

Then, the researchers started giving both humans and the AI agents a little demographic information about their opponents – gender, age, race, education, employment status and political orientation – and explicit instructions to use this information to craft arguments specifically for the person they were dealing with.

Remarkably, this actually made human debaters fare worse than they did with no information. But the AI was able to use the additional data to great effect – the "personalized" GPT-4 debaters were a remarkable 81.7% more effective than their human counterparts.
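It's worth pausing on what a figure like "81.7% more effective" plausibly means in practice. Studies of this kind commonly report effects as an increase in the *odds* that a participant shifts toward the debater's position, and an odds increase doesn't translate one-to-one into a probability increase. As a rough sketch – assuming an odds-based measure, and using a purely hypothetical 30% human baseline for illustration – the arithmetic looks like this:

```python
def shift_probability(p_base: float, odds_multiplier: float) -> float:
    """Convert a baseline probability of changing someone's mind into
    the probability implied by multiplying the odds by `odds_multiplier`."""
    odds = p_base / (1 - p_base)          # probability -> odds
    new_odds = odds * odds_multiplier     # apply the reported increase
    return new_odds / (1 + new_odds)      # odds -> probability

# Hypothetical: if a human debater changes minds 30% of the time,
# an 81.7% increase in the odds (multiplier of 1.817) implies:
p = shift_probability(0.30, 1.817)
print(round(p, 3))  # → 0.438, i.e. roughly 44% of the time
```

The point of the sketch is only that a large relative jump in odds is a meaningful but not magical improvement in absolute terms – the baseline matters.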

Facial expression tracking seems to be at a fairly rudimentary stage, but it'll become an incredibly powerful tool for persuasion

Real-time emotionally-responsive AIs

There's little doubt that AI will soon be the greatest manipulator of opinion that the world has ever seen. It can act at massive scale, tailoring an argument to each individual in a cohort of millions while constantly refining its techniques and strategies. It'll be in every Twitter/X thread and comments section, shaping and massaging narratives society-wide at the behest of its masters. And it'll never be worse at manipulating us than it is now.

Plus, AIs are starting to get access to powerful new tools that'll weaponize our own biology against us. If GPT-4 is already so good at tailoring its approach to you just by knowing your socio-demographic information, imagine how much better it'll be given access to your real-time emotional state.

This is not sci-fi – last week, Hume AI announced its Empathic Voice Interface (EVI). It's a language model designed to have spoken conversations with you while tracking your emotional state through the tone of your voice, reading between the lines to pull in a bunch of extra context. You can try it out in a free demo on Hume's site.

Meet the Empathic Voice Interface (EVI) – the first AI with emotional intelligence

Not only does EVI attempt to pinpoint how you're feeling, it also chooses its own tone to match your vibe, defuse arguments, build energy and be a responsive conversation partner.

And Hume has plenty more cooking. Other models are using camera access to watch facial expressions, movement patterns, and your dynamic reactions to what's happening to assemble even more real-time information about how a message is being received. The eyes alone have already been proven to betray a staggering amount of information when analyzed with an AI.

In one sense, this is simply the nature of human conversation. There are definitely plenty of positive ways that emotionally-responsive technology could be used to raise our overall levels of happiness, identify people in need of serious help, and defuse ugly situations before they even arise. It's not the AI's fault if it's more attentive and perceptive than we are.

Every tiny facial movement and vocal tic is a giveaway to an attentive and well-trained AI body language analyst

But realistically, they won't all have your best interests as their top priority. You'll need a pretty incredible poker face to deal with the coming wave of emotionally-responsive, hyper-persuasive personalized ads. Good luck getting a refund over the phone when one of these machines is your point of contact.

Extrapolate out what this tech could do in the hands of law enforcement, human resources departments, oppressive governments, revolutionaries, political parties, social movements, or people aiming to sow discord and distrust – and the dystopic possibilities are endless. This is not a shot at Hume AI's intentions; it's simply an acknowledgement of how persuasive and manipulative the tech could easily become.

Our bodies will give away our feelings and intentions, and AIs will use them to steer us.

Indeed, OpenAI has announced but decided not to release its Voice Engine model, which can replicate a human voice after listening to it for only 15 seconds, to give the world time to "bolster societal resilience against the challenges brought by ever more convincing generative models."

Watching how our parents and grandparents have struggled to react to technological change, we can only hope that the coming generations have enough street smarts to adapt, and to realize that any time they're talking to a machine, it's probably trying to achieve a goal. Some world we're leaving these kids.

Sources: arXiv, Hume AI

9 comments
windykites
How come no-one mentions the Turing test? Was that just swept away? Can we use Chat GPT to give us life advice? I guess so.
JG
And how does this NOT violate the "Fundamental Principles" ? Hmmmm...
Daishi
I have had a few debates with language models and they did more to sway my views on some topics than most people are capable of. Most humans are terrible at debate in that they make bad arguments, torture analogies, and resort to name calling and ad-hominin attacks. Evolution didn't really build humans to be right, it designed us to align with societal and majority beliefs that have more social benefit than being right but alone. If we were software this trait would be considered a critical unpatched vulnerability. Sometimes intelligence doesn't mean humans are right, it means they can twist arguments to continue believing in something likely wrong because there are other benefits in belonging to a group who believes that thing. On this topic I recommend "After Skool - Why Smart People Believe Stupid Things" on YouTube.

@windykites modern language models mostly pass the Turing test and that benchmark kind of came and went without much fanfare. Sam Altman commented in an interview once that we passed that milestone and everyone kind of collectively shrugged and went on with the rest of their day. The Turing test is important from the perspective of deceiving people to make them believe they are interacting with a person but in terms of ability AI isn't going to stop at just human ability and will keep going. We are bugs.
christopher
I love the irony. The between-the-lines issue here is that most humans are too stupid to think for themselves:-
"81.7% more effective" than "worse"
does NOT translate into "the greatest manipulator of opinion that the world has ever seen" - it translates into p-hacking idiocy and writers who can't understand the "statistics" that go into these kinds of papers...
Tommo
Wise advice indeed: "any time they're talking to a machine, it's probably trying to achieve a goal"
Daishi
@christopher I don't really follow your point that being better doesn't matter because it's not the greatest ever. It's possible to manipulate hundreds of thousands of people without achieving "greatest ever" status but it's not likely to get worse. Even if you can only convince the gullible you are still achieving a majority.

There is an additional available method only available to bots that hasn't been touched on by the paper as well. I see fake stuff on Facebook and it is promoted and commented on by other bots both pushing it into my feed and giving the impression that many people (who aren't real) support the things. It's a bot-created illusion to give the impression of support by other people. As I alluded to earlier people are much more willing to follow/believe something when they believe most others do and bots have a unique advantage at controlling hundreds of thousands of fake followers to promote and provide credibility to competing ideas.

Bots can simultaneously personally interact with hundreds of thousands of people at once and a convincing person can only personally engage with a couple people at a time. Additionally, when it comes to collecting an audience, it helps to be charismatic, attractive, or both and bots are better at faking that than people are.

They have an advantage even in fair competition but the competition will likely not be fair.
History guy
In human vs human situations, these debates tended to backfire, calcifying and strengthening people's positions, and making them less likely to change their mind. GPT had more success, doing a slight but statistically insignificant 21% better:

Generally speaking anything greater than 5% is statistically significant. 21% better is a game changer.
dcris
If our children were taught to 'Question Authority' in school (which is not radical, it just means to keep and open mind and question things that don't add up), then their brains would be much more adept at reasonable dissonance. Following the crowd or standing up on principle would at least have an equal place at the table.
LordInsidious
Individuals need an AI dedicated to them, to help navigate our daily life including labeling other AI performers.
@windykities - Not just advise but giving you options and then helping you achieve your goals.