
Interview: Hyperreal is working on digital celebrities you can talk to

Hyperreal is developing exquisitely detailed "digital identities" that can be used in a range of projects – up to and including fully conversational AIs

Motion capture guru Remington Scott is now building hyper-realistic "digital twins" that actors and performers can hire out for movie, music, VR or video game gigs – as well as AI-powered conversation partners, trained on a star's own thoughts and words, that fans can talk with individually.

Remington Scott's CV in Hollywood movies is extraordinary, placing him at the bleeding edge of motion capture in some of the most transformative projects in movie history. He supervised and directed motion capture for Gollum/Smeagol in Peter Jackson's The Lord of the Rings: The Two Towers, Dr. Manhattan in Watchmen, several of the leads in Spider-Man 2 and 3 and Superman Returns, and many others.

He led the motion capture team for Final Fantasy: The Spirits Within – the first theatrical release ever to use a fully digital, motion-captured cast – and supervised and directed scenes with Anthony Hopkins, Angelina Jolie, John Malkovich and others for Beowulf, using novel eye motion detection systems he developed with colleagues.

In the video game world, he was the animation lead and co-creator of WWF: SuperStars of Wrestling for the Atari ST all the way back in 1986 – the first home video game to use digitized graphics rather than hand-drawn sprites. To give you an idea of how early that was, Double Dragon didn't launch until 1987. Since then, he's directed Kevin Spacey in Call of Duty: Advanced Warfare, and worked on Just Cause 4, The Order: 1886, Killzone: Shadow Fall, The Amazing Spider-Man, and many others.

Official Reveal Trailer | Call of Duty: Advanced Warfare

"When I was starting out in the early 90s," he tells me over a video chat, "I was one of a handful of people working with this motion capture technology. It was being used in medical labs at the time for gait analysis, subjects would walk on a treadmill with those big ol' ping pong balls on their hips and legs, to capture the way the ball of the femur was rotating in the hip. It had such accuracy it could see beneath the skin and the muscle, and look at the biomechanics of the skeletal system.

"But we were in the entertainment industry," he continues. "We were digitizing people in 2D for games like Mortal Kombat, but we were beginning to see a whole new bunch of 3D games coming out, so we knew we had to start creating these animations in 3D. The tools were very crude and difficult. So, building the first motion capture studio in the world dedicated to entertainment, that was where I got my start. I was a director there. We started to have visual effects people coming to us to shoot a couple of shots for a film or two, and I realized this technology could cross over into creating digital humans in film."

Once Jurassic Park dropped, it was clear to Scott that Hollywood was ready to expand its use of high-quality 3D modeling. "Hollywood knows how to render them," he says. "We can move them. I was fortunate to be a director on the first feature film that used motion capture to create all the animation, Final Fantasy: The Spirits Within. Then I went on to be the supervisor at Weta Digital, overseeing the creation of Gollum and Smeagol.

Lord of the Rings: The Two Towers (2002) - Gollum Attacks Frodo & Sam Scene | Movieclips

"That was the culmination of what I wanted to achieve bringing motion capture into film. Andy Serkis is an incredible actor, he brought life into these digital characters in a way that nobody was doing before him, he really embraced the technology. That was a live action production – we were doing it in real time, looking at a monitor and talking to Gollum, and Gollum was talking back to us. It was a fundamental, quantum leap from the way that animation had been done for the previous century at studios like Disney."

All of which is a long-form way of pointing out that this fella rather knows what he's on about, and has already been instrumental in several game-changing technology plays in the entertainment business. So the fact that he's tossed it all in to work on "the future of digital identity" should raise eyebrows.

In 2020, Scott founded Hyperreal in Los Angeles, a company dedicated to the creation of "Hypermodels" – ultra-realistic digital recreations of a person's entire body, face, voice and motion-captured performance mannerisms.

"I was working with A-list talent on these games and films, and we'd create these high-level avatars, these digital humans," says Scott. "But at the end of the project, those digital humans couldn't work anymore. The actor couldn't take their own digital twin and use it in another project, they didn't have the rights. And the producer didn't have the rights to use it in other projects either. It fundamentally needed to be changed."

So Hyperreal works for the talent, creating Hypermodels that are compatible with a range of 3D rendering engines, and that an actor or performer can do more or less what they like with.

"This way, the talent own their own digital identities," he says. "There's three components, your image, your signature motion style, and your voice. We record that data and put it in our vault. Those digital copies can go out and work on licensed projects, but that doesn't stop them from being used on other projects. It's one asset that can have many lifetimes."

Madison Beer's Hypermodel performs in a virtual Sony Music Hall

Hypermodels can be used just about anywhere. You can dress, style and modify them to suit a given project. They're captured at a resolution that can handle close-ups in an IMAX movie, but they're also volumetric 3D assets, so once the movie's come out, the same asset can become a controllable character in the video game. Or the VR experience, for that matter. Or a TikTok.
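
Hyperreal hasn't published its pipeline, but the underlying idea, one master capture decimated down to whatever a given platform can handle, is a standard level-of-detail problem. Here's a minimal sketch of that step using the open-source Open3D library; the file name and triangle budgets are hypothetical, chosen only to illustrate the range:

```python
# Minimal sketch of "one asset, many quality levels" using Open3D.
# The file name and triangle budgets are hypothetical; Hyperreal's
# actual pipeline is proprietary.
import open3d as o3d

master = o3d.io.read_triangle_mesh("hypermodel_scan.obj")  # cinema-grade capture

# Hypothetical triangle budgets for different delivery targets.
targets = {
    "film_closeup": 2_000_000,  # offline rendering, near-master quality
    "console_game": 150_000,    # real-time on a high-end GPU
    "mobile_xr": 30_000,        # phone-class hardware
}

for name, budget in targets.items():
    lod = master.simplify_quadric_decimation(target_number_of_triangles=budget)
    lod.compute_vertex_normals()  # restore smooth shading after decimation
    o3d.io.write_triangle_mesh(f"hypermodel_{name}.obj", lod)
    print(f"{name}: {len(lod.triangles)} triangles")
```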

The idea took a while to sink in.

"That's part of the education of the industry, that they don't need to build new assets on all these different properties," says Scott. "If you have a digital double in the film, you can use the same asset in the video game. There really wasn't a lot of business out there for these early on. Talent and management were saying well, they're beautiful, but what are we gonna do with them? You say there's going to be so many opportunities – well, what are they? So, we had to make 'em! We couldn't just focus on the tech platform, we had to create opportunities."

Probably the most impressive recent example is a virtual concert Hyperreal shot for popstar Madison Beer. Presented in a virtually recreated version of New York's Sony Hall, and rendered using Unreal Engine, it's a 10-minute concert in which Beer's Hypermodel runs through a medley of her tunes with 3D digital effects around her. You can see it in probably its least interesting form, as a 2D YouTube video, below.

Madison Beer - Life Support (Immersive Reality Concert Experience)

As you can see, the model is absolutely remarkable. Without looking closely, you'd assume it was a live performance. There's still room for improvement, particularly in the way the mouth moves, which still has a touch of the video game NPC about it. And however well the model performs, Beer herself is definitely (somehow) better-looking and more expressive in live performance. But the progress demonstrated here is extraordinary, and relentless.

"That Hypermodel has gone on to be in a multitude of new opportunities across a spectrum of technology," says Scott. "She performed a virtual concert in real time. It was a morning for her. That concert was initially ray-traced and rendered on the cloud, and pixel-streamed in real time over 5G to your mobile device as an XR experience. You can be anywhere you want inside the Sony Music Call, and you can move around. You're up on stage next to her, or you're in the audience looking at her on your phone. Then it goes to Tik Tok, where it's distributed as a live concert. Then it goes to Sony's development team, they're speccing out the PSVR2, and using this to showcase that, so we'll be seeing her in VR now. And now Sony's got a new technology called spatial reality, it's like a 3D holographic screen you don't need glasses for. This concert's been used to showcase that as well."

Then there's the company's Fountain service. On another recent project, Hyperreal created a model of Paul McCartney, then digitally de-aged it, and used the model in a music video for a song McCartney put together with Beck.

"Fountain's where we de-age the performer, and that's what we did for Paul," says Scott. "We're finding a time and a look the artist considers timeless, and rebuilding the model to that time. It takes more work!"

Paul McCartney, Beck - Find My Way

And then there's Phoenix, which is where the company will work with a deceased performer's estate to build the best model they can with the materials available.

"Phoenix is about re-igniting the power of a superstar," says Scott. "This is where we go back through images and videos, old movies, VHS videos, whatever source material we can use. We layer them on with the Phoenix technology and we rebuild them digitally. We did that for the Notorious BIG, and his Hypermodel starred in an hour-long mixed-reality concert event for Meta on their VR system, performing with people that have been part of his life, as well as new artists. It was a narrative event, with a story happening. And Meta's now got something in the works called The Brook, it's a Metaverse world set in Brooklyn from the 90s, that's where he grew up. So you'll be able to go into that world in VR and watch him make music."

And of course, once you've got a 3D model and a voice model, well, opportunities start getting a lot more interesting when you start mixing them in with AI language models like ChatGPT. We're talking video game characters that you can have conversations with inside the game world, each with their own voice, backstory, levels of knowledge and individual ways of speaking. Or opportunities to spend time with celebrities, teachers or whoever else, using digital characters trained up with that person's own words.
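
A rough sketch of how such a character can be wired together with today's hosted language models: the persona lives in the system prompt, and the game feeds player speech through it. Everything here, the persona, the model name, the dialogue, is an illustrative placeholder rather than anything Hyperreal has announced:

```python
# Sketch: a conversational game character driven by a hosted LLM.
# Persona, model name and dialogue are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are Mara, a weary dockmaster in a fantasy port town. "
    "You speak tersely, know only local gossip and shipping schedules, "
    "and refuse to discuss anything outside the game world."
)

history = [{"role": "system", "content": PERSONA}]

def npc_reply(player_line: str) -> str:
    """Send the player's line through the persona and return the NPC's answer."""
    history.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-completion model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(npc_reply("Any ships leaving for the capital tonight?"))
```

The text that comes back would then be handed to a voice model and the Hypermodel's facial rig, which is where the still-imperfect lip-sync work noted above comes in.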

"We're working with a best-selling author," says Scott. "He's had more than 20 books on the New York Times list. Talented guy. And he wants to create a conversational AI – you'll talk to him, he'll talk to you. So the training model we've used for his conversational brain is all his work. It's not cross-pollinated with other things like ChatGPT is, this is his own words. So you're speaking directly to the words that he says, and the thoughts and feelings he's collected. We do tests, asking him and the AI the same questions. And he can't see the AI's answers, but they'll both answer almost exactly the same. It's as authentic as it's gonna get."

"And these things are gonna be everywhere," he says. "There'll be an AI barista taking your oder, there'll be an AI agent at the train station helping you get the right ticket or find your train. So why not a conversational AI of your favorite author? You could talk to them about the books, or have them act something like a mentor. There's a huge opportunity for people to scale themselves to the point where they can have these personal interactions with individual fans. And that's what we're effectively talking about – a new way to scale yourself."

Remington Scott has had a remarkable career in the movie and game industries. Now he's hoping to spearhead a new digital identity movement

While Hyperreal's digital twin services are currently by invitation only and available mainly to A-listers who'll be able to recoup an investment on their Hypermodels, the company definitely sees a path to opening things up to the unwashed masses should VR and the metaverse concept take off.

"We've started with A-list talent, because frankly, we're building a brand, and top-tier talent can monetize these assets," says Scott. "We can work with talent and help them scale themselves, but as a growing business, that's somewhat limited. The next part of the roadmap is where we create digital identities for corporations, synthetic humans that don't exist in the real world. Brand ambassadors, virtual influencers, whatever you want to call them. We're building that now.

"And the next part of the roadmap is where we start doing customizable Hypermodels for everybody. There's a lot of really cool tech demos out there where you can take your phone and move it around your face and build an avatar view in a few moments. But the key for us is quality. We're not going to release some janky-looking dead-eyed avatar of you that looks weird and doesn't seem like it's alive. I applaud the tech demos, but we're not there yet. We'll get there when the time is right."

By the time it reaches the consumer stage, Scott's ambition is that your personal Hypermodel will be quickly and easily exportable in formats compatible with, well, anything you might need an avatar for. Your PlayStation account. Your Meta VR presence. Your Netflix account.

"Our digital identities are cross-platform and interoperable," says Scott. "One identity across everything. You own it, you control it. You don't have corporations coming in and taking your identity, owning it, controlling it and gathering the data. You have that, you're licensing it accordingly. That way, you can monetize it for data use. It really is the future that needs to be."

Sony Music's 'digital Madison Beer' sets the virtual concert world on fire | Unreal Engine

Source: Hyperreal

2 comments
anthony88
He's right about the eyes, but there's no need to change anything. I think humans are already on their way to imitating these AI characters.
Bob Flint
Not just the eyes, nothing like the real thing: seeing the sweat build up on the pores, and the mist of particles as a singer's breath is reflected off the lighting as they perform. The motions are too smooth, no surface muscular changes as dancers flex and pivot; facial hair, wrinkles, freckles, crow's feet, so much more is needed to enhance the flesh and blood pulsing through the veins. I see why it's called "Artificial Intelligence"