May 18, 2024
Can AI Replace Actors? Here’s How Digital Double Tech Works

In the heart of a technological sphere, the world dissolves into a play of blinding white light and fleeting flashes. Surrounding this sphere, darkness looms, creating a stark contrast. Imagine being securely strapped into a chair within this contraption. From the depths of the obscurity, a voice guides you – it suggests expressions, instructs on how to shape your mouth and eyebrows, outlines scenarios to react to, provides phrases to articulate, and prescribes emotions to embody. At unpredictable intervals, the voice reassures you not to be anxious and warns of more impending flashes.

“I don’t think I was freaked out, but it was a very overwhelming space,” says an actor who, for privacy reasons, prefers not to disclose his name. He is describing his experience inside what he fondly calls “the orb” – a photogrammetry booth used to capture his likeness during the production of a major video game in 2022. “It felt like being in an MRI machine,” he reflects. “It was really very sci-fi.” His session was part of the scanning process that lets media production studios capture images of cast members in a range of poses and expressions, ultimately creating versatile digital avatars that can perform virtually any action or motion in a realistic video sequence.

Advances in artificial intelligence are now streamlining the creation of digital doubles, and it’s becoming possible even without subjecting actors to the intensity of “the orb.” However, these advancements raise concerns among some actors who fear they may be pressured to relinquish their likeness rights, potentially leading to the replacement of human actors with digital counterparts. This is one of the factors that led members of the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) to go on strike. The union’s statement emphasized the need for performers to protect their images and performances from being supplanted by artificial intelligence technology.

While the idea of AI replacing actors is unsettling, it’s important to understand that digital doubles in today’s media productions still rely on human performers and special effects artists. Let’s delve into how this technology works and how AI is altering the established process.


The Mechanics of Digital Double Technology

Over the past 25 years, it has become increasingly common for big-budget media productions to create digital doubles of at least some performers’ faces and bodies. This technology is now an industry standard, especially in movies, TV shows, and video games involving extensive digital effects, elaborate action scenes, or portrayals of characters at different ages.

The photogrammetry booth, a critical component of this process, is lined with hundreds of cameras, sometimes arranged in an orb shape and sometimes around a square room. These cameras capture thousands of intentionally overlapping two-dimensional images of a person’s face at high resolution. Lead performers who have speaking roles or need to convey a range of emotions require more extensive scans than secondary or background cast members, and larger setups are used to scan entire bodies.
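
To make the role of all that overlap concrete, here is a minimal sketch of the first computational step such a rig enables: finding the feature points that two overlapping photos have in common. It uses the OpenCV library; the camera file names are placeholders, not part of any studio’s actual pipeline.

```python
# A minimal sketch of the first step a photogrammetry pipeline performs:
# matching feature points between two overlapping photos.
# The file names below are placeholders for two neighboring camera views.
import cv2

# Load two overlapping views of the same face (grayscale is enough for matching).
img_a = cv2.imread("camera_012.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("camera_013.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors between the two views; good matches correspond to the
# same physical point on the face seen from two different angles.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

print(f"Found {len(matches)} candidate correspondences between the two views")
```

A real production repeats this matching across hundreds of synchronized cameras before any 3D geometry is computed.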

With this data in hand, visual effects (VFX) artists transform the stack of 2D images into a 3D digital double. The key lies in the overlap of the photographs: by using the cameras’ known coordinates and aligning the overlapping sections, the images are mapped and folded into a 3D model, akin to digital origami. Artists then rig the resulting digital double to a virtual “skeleton” and animate it, either by directly replicating an actor’s real-world, motion-captured performance or by blending it with computer-generated movements. The animated figure can be placed seamlessly in a digital environment and given dialogue. In fact, it’s possible to use a person’s scans to create photorealistic video footage of them doing and saying things the actor never did or said.
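
As a rough illustration of the “camera coordinates” idea, the toy sketch below triangulates a couple of matched 2D points from two calibrated cameras into 3D positions, again using OpenCV. The projection matrices and pixel coordinates are invented for illustration; a real booth solves this for millions of points across hundreds of precisely calibrated cameras.

```python
# A toy sketch of how matched 2D points from two calibrated cameras are
# "lifted" into 3D. All numbers below are made-up placeholders.
import numpy as np
import cv2

# 3x4 projection matrices for two cameras (intrinsics * [R | t]).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])    # camera offset along x

# The same two facial feature points as seen by each camera,
# stored as a (2, N) array: row 0 = x coordinates, row 1 = y coordinates.
pts_cam1 = np.array([[0.10, 0.25],
                     [0.20, 0.30]])
pts_cam2 = np.array([[0.15, 0.31],
                     [0.20, 0.30]])

# Triangulate: each output column is one 3D point in homogeneous coordinates.
points_h = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)
points_3d = (points_h[:3] / points_h[3]).T  # convert to ordinary XYZ

print(points_3d)  # a tiny point cloud; VFX artists work with millions of these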

Moreover, special effects artists can apply an actor’s digital performance to a virtual avatar that looks entirely different from the human actor. For example, an actor in a video game may make faces in the orb and record lines in a booth. They also physically act out scenes for motion capture. However, when players engage with the final product, they see the modified digital double, designed to look like a unique character within the game.
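
One way to picture this retargeting is with blendshapes: the actor’s facial capture is reduced to a set of expression weights per frame, and those weights drive a character mesh that looks nothing like the actor. The sketch below is a toy version with invented shape names and a four-vertex “mesh”; production rigs use thousands of shapes and far more sophisticated solvers.

```python
# A simplified sketch of performance retargeting: expression weights captured
# from the actor drive a character mesh that is not the actor's own face.
# The blendshape names and vertex counts are invented for illustration.
import numpy as np

N_VERTS = 4  # a real face mesh has tens of thousands of vertices

# Neutral pose of the *game character's* mesh (not the actor's).
neutral = np.zeros((N_VERTS, 3))

# Per-expression vertex offsets ("blendshapes") sculpted for the character.
blendshapes = {
    "jaw_open":   np.random.default_rng(0).normal(scale=0.01, size=(N_VERTS, 3)),
    "brow_raise": np.random.default_rng(1).normal(scale=0.01, size=(N_VERTS, 3)),
}

# Expression weights solved from the actor's facial capture for one frame.
actor_weights = {"jaw_open": 0.8, "brow_raise": 0.3}

# The character's face for this frame = neutral pose + weighted sum of shapes.
frame_mesh = neutral + sum(w * blendshapes[name] for name, w in actor_weights.items())
print(frame_mesh)
```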

This process has been utilized in film and television for decades, albeit with a considerable labor and cost commitment. Despite the challenges, digital doubles are commonly used for minor adjustments or major edits, such as transforming a small group of background actors into a massive digital crowd. However, these edits are most successful when the original footage aligns closely with the desired final output. For instance, it’s difficult to edit a background actor scanned in 19th-century attire into a futuristic setting featuring space suits.

Nonetheless, generative artificial intelligence, similar to the technology behind ChatGPT, is simplifying aspects of the digital double process, making it faster and more efficient.

AI Steps In

Several VFX companies are already using generative AI to expedite the modification of a digital double’s appearance. This advancement is particularly useful for “de-aging” actors, allowing them to appear younger, as seen in movies like Indiana Jones and the Dial of Destiny, which includes a flashback featuring a youthful Harrison Ford. AI also proves valuable for face replacement, superimposing an actor’s likeness over a stunt double, essentially creating a sanctioned deepfake.
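
One common architecture behind this kind of sanctioned face replacement, an approach popularized by open-source face-swapping tools, is a shared encoder paired with one decoder per identity. The PyTorch sketch below is a drastically simplified stand-in, not any studio’s actual model: the layer sizes, the 64x64 crops, and the FaceSwapNet name are all invented for illustration.

```python
# A highly simplified sketch of a face-replacement model: a shared encoder
# plus one decoder per identity. Training on footage of two performers lets
# the model re-render one performer's expression with the other's face.
# Layer sizes are arbitrary; production systems are far larger and pair this
# with careful tracking and compositing.
import torch
import torch.nn as nn

class FaceSwapNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: compresses any face crop into an identity-agnostic code.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        # One decoder per performer: each reconstructs that performer's face.
        self.decoder_actor = self._decoder()
        self.decoder_double = self._decoder()

    def _decoder(self):
        return nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, face, target="actor"):
        code = self.encoder(face)
        decoder = self.decoder_actor if target == "actor" else self.decoder_double
        return decoder(code)

# Swapping: encode the stunt double's frame, decode with the lead actor's decoder.
model = FaceSwapNet()
stunt_frame = torch.rand(1, 3, 64, 64)        # placeholder image tensor
swapped = model(stunt_frame, target="actor")  # same expression, lead actor's face
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```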

Furthermore, advances in AI have made some photogrammetry scans unnecessary. Generative models can be trained using existing photographs and footage, even of individuals no longer living. This technology enables the creation of fake digital performances by historical figures, opening up new possibilities for storytelling. While AI plays a significant role in this process, living actors are still essential to infuse the nuances and authenticity that captivate the audience.
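
At a very high level, “training on existing footage” means repeatedly asking a network to reproduce frames of the person and nudging its weights to reduce the error. The sketch below shows that loop with a toy model and random stand-in data; real systems use far larger models, curated archival datasets, and additional losses.

```python
# A bare-bones sketch of fitting a generative face model to archival imagery:
# reconstruct the frames, measure the error, and update the weights.
# Everything here is a placeholder, including the random "archival" frames.
import torch
import torch.nn as nn

# Stand-in for a folder of digitized archival frames (e.g., scanned film stills).
archival_frames = torch.rand(16, 3, 64, 64)

model = nn.Sequential(  # trivially small stand-in for a real generative model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    reconstruction = model(archival_frames)
    loss = nn.functional.mse_loss(reconstruction, archival_frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: reconstruction error {loss.item():.4f}")
```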

Fears of Actor Replacement

It’s crucial to distinguish between adjusting a digital double and completely replacing an actor’s performance with AI. The uncanny valley – the eerie feeling triggered when something looks almost, but not quite, human – is still a significant challenge. Currently, there is no generative AI model capable of creating a complete, photorealistic, moving scene from scratch; achieving that milestone would require a substantial leap in AI capability.

However, concerns regarding the potential misuse of actors’ likeness persist. The ease with which one person’s face can be swapped for another’s raises questions about the overuse of famous actors or the mass replacement of background actors with AI-generated crowds. The legal landscape surrounding likeness rights is complex and varies by location. While individuals often have legal ownership over their likeness, there are exceptions for artistic and expressive use, which filmmaking could fall under. The terms of contracts can also play a crucial role in determining who owns the rights to an actor’s digital likeness.

Conclusion

While AI is pushing the boundaries of what’s possible in the world of digital doubles, there is still a long way to go before it can entirely replace human actors. The technology remains a valuable tool for enhancing performances and creating new storytelling opportunities. Nevertheless, it’s essential to navigate the legal and ethical implications carefully to protect the rights of actors and preserve the authenticity of human performances.

FAQs

Can AI completely replace human actors in the future?

AI has made significant advancements in creating digital doubles, but there are still limitations to replicating the depth and authenticity of human performances.

What legal protections do actors have for their digital likenesses?

The legal landscape regarding likeness rights is complex and varies, but individuals often have ownership over their likeness, with exceptions for artistic and expressive use.

How is AI being used to enhance digital doubles in film and television?

AI is used to expedite the modification of digital doubles’ appearances, such as “de-aging” actors or superimposing likenesses over stunt doubles.

Are digital doubles replacing background actors in the industry?

While digital doubles are common in the industry, their use depends on the specific needs of each production.

What’s the role of SAG-AFTRA in protecting actors in the age of AI?

SAG-AFTRA is actively involved in protecting the rights of actors and ensuring their images and performances are safeguarded against misuse of AI technology.
