
What happened when Google's new AI assistant sounded too human

The revelation of Google Duplex, which can mimic natural human speech, has sparked an ethical debate

A few days ago, at Google's annual I/O developer conference, the search giant revealed a new AI system called Duplex. The system works with Google Assistant and can carry out simple conversational tasks over the phone, such as scheduling a hair salon appointment or making a restaurant reservation. Not everyone was happy with the groundbreaking presentation, though. A subsequent outcry over the ethical implications of an AI voice effectively tricking people into believing it was human has prompted Google to suggest the product will be programmed to disclose its computer identity in all future uses.

The big hallmark of Google Duplex is its ability to conduct natural-sounding conversations. The system is programmed to respond quickly and to incorporate what Google refers to as "speech disfluencies": subtly calibrated "hmm"s and "uh"s that make it sound like a real person, not the rigid mechanical computer voices we are generally used to.
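Google has not published how Duplex generates these fillers, so the following is purely an illustrative sketch of the idea: occasionally prepending a disfluency to a scripted reply before it is handed to a speech synthesizer. The function name, filler list and probability are all invented for this example.

```python
import random

# Toy illustration only: this is NOT Duplex's actual implementation.
# It mimics the idea of "speech disfluencies" by occasionally inserting
# a filler word ("hmm", "uh"...) before a scripted reply.
FILLERS = ["hmm", "uh", "um"]

def add_disfluency(reply: str, rng: random.Random, probability: float = 0.3) -> str:
    """Prepend a filler word with the given probability; otherwise return the reply unchanged."""
    if rng.random() < probability:
        return f"{rng.choice(FILLERS)}, {reply}"
    return reply

rng = random.Random(42)  # seeded so runs are reproducible
replies = [add_disfluency("the appointment is at 12 pm", rng) for _ in range(5)]
```

Run several times over the same scripted line, some replies come out plain and some gain a leading "hmm," or "uh," which is the effect (if not the machinery) the demo showed off.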

The demonstration of the technology at the conference was both impressive and startling. The first example showed Duplex calling a hair salon and scheduling an appointment (about one minute into the video above). The second example involved an even more complex conversation, with the system calling a restaurant to make a reservation. In the course of the conversation the system is told it wouldn't need a reservation for that many people on that particular day. Understanding this, Duplex thanks the person on the other end and hangs up.

These examples of a human-sounding AI interacting with a real person are undeniably impressive, but the technology's ability to so blatantly fool another human being into thinking it is real has left many unnerved. From suggestions Google had failed at ethical and creative AI design to more explicit accusations that the company was ethically lost, rudderless and outright deceptive, it seems something had gone drastically wrong.

Had Google entirely miscalculated how its new technology would be perceived? Was this a case of a Silicon Valley company developing a technology in a complete vacuum without realizing the real-world implications of its product?

In a blog post published concurrently with the presentation, Yaniv Leviathan, Principal Engineer, and Yossi Matias, Vice President of Engineering, seem simply excited about the potential of their technology, presenting it as a response to the frustrations inherent in "having to talk to stilted computerized voices that don't understand natural language."

Yaniv Leviathan, Google Duplex lead, and Matan Kalman, engineering manager on the project, enjoying a meal booked through a call from Duplex.

Little consideration seems to have been given to the real-world repercussions of the technology: neither the blog post nor the conference presentation mentions any requirement for the system to disclose its artificial identity. The only reference to transparency in the substantial blog post is: "It's important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We'll be experimenting with the right approach over the coming months."

Debate has raged since the revealing conference presentation over whether it should even be a goal to create an AI system that can accurately mimic humans. Erik Brynjolfsson, an MIT professor, told the Washington Post, "Instead, AI researchers should make it as easy as possible for humans to tell whether they are interacting with another human or with a machine."

Arvind Narayanan from Princeton University suggested, "We need ground rules. One proposal: 'Turing Red Flag law' — bots should be designed so that it's clear they're bots; they should also identify themselves as bots up front."

Some of the conversation suggesting how AI voices should not mimic human qualities

It seems as though, on this issue, Google has catapulted the mainstream discourse into a place many were not ready to go. Apart from the obvious concerns over the technology being misused for telemarketing and robocalls, the ethical issue became paramount: does an AI need to identify itself when communicating with a human?

In response to the burgeoning controversy, Google released a statement claiming full disclosure will be built into the software.

"We understand and value the discussion around Google Duplex – as we've said from the beginning, transparency in the technology is important," a Google spokeswoman reported to CNET. "We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product."

While the technology itself is fascinating, perhaps the more interesting revelation of the past few days has been the vociferous public conversation. Until now, natural voice interactions with computers were a far-off sci-fi concept, experienced only in movies where people converse with AI systems such as HAL in 2001: A Space Odyssey. This week, for the first time, we have had to grapple with the actual reality of this development. Computers won't sound robotic and talk in a stilted manner like Siri forever. Google Duplex has offered us a glimpse into the near future: it's exciting, confronting and a little creepy.

21 comments
Daishi
It's an interesting demo but it's still far easier to build a bot to handle a specific scripted scenario than just general conversation because the number of possible branches quickly goes off the rails. Anyone remember SoundHound's impressive Hound demo from a couple years back? https://www.youtube.com/watch?v=M1ONXea0mXg Despite a lot of recent advancements in voice assistants, my dialogues with them are still nothing close to as fluid as that demo from 2015. The Google voice AI in the demo essentially passes the Turing test, but we'll see how it does in more real-world unscripted scenarios. I might not be ready to give it a go as my personal assistant, but I'd probably be ready to let it take my order at a drive-through.
Martin Winlow
I'm a bit baffled by all the outcry. Humans are hardly the most infallibly trustworthy of creatures!
piperTom
Okay, Luddites, here is your "get used to it" moment. AI development isn't going to stop; soon you will prefer to talk to one because it will understand you more easily than a human would. Also, we put odorants in natural gas so we could escape death! Lack of "umms" and "ahs" in speech is NOT going to be a parallel.
ScottEzell
When making an appointment or reservation, there's a limited amount of data that needs to be conveyed (name, date, time...). The only thing that would make it difficult for this AI would be the human on the other end. Maybe those humans will soon be replaced with an AI too.
guzmanchinky
What a bunch of hooey. People also thought cars going over 30mph would be the end of the world.
Brian Smith
I expect if it is forced to first declare that it is a computer-generated call, many people would hang up on it before booking the appointment or reservation. Maybe declaring the intention first to book John a haircut, then saying "I am John's virtual assistant," would convince them not to immediately hang up and lose the business. Maybe if Google can solve the robocalling problem for us, we could all not have such a negative expectation of computer-generated calls in the future.
CzechsterMarek
So when will Google provide credited college courses so we can eliminate human professors? Can you imagine an AI professor that understands any language and gives you unbiased and correct answers every time?
Rusty Harris
Is it time for "the three laws of Robotics" yet? ;)
Ctp
I have received a number of calls from companies that use humans to do the voices, but the AI behind the responses is all computer. This has been going on for a couple of years now. They are subtle and it takes a couple of interactions to discover that they are not real, just AI. So how is this different from what Google Assistant demonstrated in the keynote? Google just had better AI and respond-ability than their competition. So why are people not speaking out about that tech? Those are usually sales calls or requests for donations. Google is at least not selling, just taking my place. I'd rather have Google Assistant making my appointment than ask someone else to make the appointment for me. It'll probably get it more right, because it KNOWS my schedule. I think that the outcry is probably mostly from those who are afraid that Google will be ahead of them in the marketplace, or it will take the jobs of the "personal assistants". Google can't yet deliver a cup of coffee in the morning to the boss's desk, so I think the jobs are still secure. I bet Amazon is looking at that delivery, though. Where is the outcry against Amazon?
GhostDoggSamurai
Oh please. The only thing I find disconcerting about this is the boring and inane way years of AI research is being used - to make appointments at hairdressers and restaurants. You really only need to be afraid if you are a PA or work in telemarketing!