Bryan Cranston, SAG-AFTRA say OpenAI is taking Sora 2 deepfake issues seriously

The AI video generation landscape just got more complicated. Actor Bryan Cranston and the performers' union SAG-AFTRA are stepping into the deepfake debate, putting pressure on OpenAI to address potential misuse of its Sora 2 technology.

Artificial intelligence's ability to generate hyper-realistic video has Hollywood on edge. What happens when an actor's likeness can be replicated without consent or compensation?

Cranston, best known for his role in "Breaking Bad," isn't sitting on the sidelines. He and SAG-AFTRA are taking a proactive approach, engaging directly with OpenAI to understand and mitigate the risks of AI-generated video content.

The stakes are high. With Sora 2's recent release, the entertainment industry is grappling with difficult questions about digital representation, intellectual property, and the boundaries of AI-powered creativity. Cranston's involvement signals that a serious industry-wide conversation is about to unfold.

Actors, studios, agents, and the actors union SAG-AFTRA have all expressed their concerns about appearing in Sora 2’s AI-generated videos ever since the deepfake machine was released last month. Now a joint statement from actor Bryan Cranston, OpenAI, the union, and others says that after videos of him appeared on Sora — one even showed him taking a selfie with Michael Jackson — the company has “strengthened guardrails” around its opt-in policy for likeness and voice.

The Breaking Bad actor never opted in to appear on OpenAI’s video sharing app Sora, yet videos of him definitely showed up. The joint statement said that OpenAI “expressed regret for these unintentional generations.” It also carried cosigns from talent agencies United Talent Agency, the Association of Talent Agents, and the Creative Artists Agency, which had criticized the company’s lack of protections for artists in the past.

The Sora 2 deepfake controversy reveals the complex intersection of AI innovation and performer rights. Actors like Bryan Cranston aren't just passive observers but active participants in shaping technological boundaries.

OpenAI's response suggests the company recognizes the serious ethical implications of its technology. By "strengthening guardrails" around likeness and voice usage, they're attempting to address legitimate concerns from performers and their representatives.

The collaborative approach between Cranston, SAG-AFTRA, and OpenAI signals a potential framework for responsible AI development. Performers now have a seat at the table when major technologies emerge.

Still, questions remain about long-term protections for actors' digital identities. The current opt-in policy represents an initial step, but the technology's rapid evolution demands ongoing dialogue.

Ultimately, this incident highlights the need for proactive conversations between creative professionals and AI developers. As deepfake technologies advance, establishing clear ethical guidelines will be essential to protecting individual rights and artistic integrity.

Common Questions Answered

How did Bryan Cranston become involved in the Sora 2 deepfake controversy?

Bryan Cranston discovered AI-generated videos of himself on OpenAI's Sora app, including a deepfake showing him taking a selfie with Michael Jackson. He then worked with SAG-AFTRA to press OpenAI to strengthen its guardrails around performer likeness and consent.

What specific actions has OpenAI taken to address performer concerns about Sora 2?

OpenAI has reportedly strengthened its opt-in policy for performer likeness and voice usage in response to concerns raised by Cranston and SAG-AFTRA. The company is working to create more robust ethical guidelines to protect actors' rights in AI-generated video content.

Why are actors and Hollywood studios concerned about AI video generation technologies like Sora 2?

Actors are worried about the potential unauthorized replication of their likeness without consent or compensation in AI-generated videos. The technology raises significant ethical questions about performer rights and the potential misuse of an individual's image and performance characteristics.