
Trevor Noah is worried about where things are headed with controversial AI video generators.
The comedian and former Daily Show host said AI video apps like OpenAI's Sora could be "disastrous" if they continue to use people's likenesses without permission.
"I have to see what they're doing and how they're doing it," he told GeekWire. "But I don't think it's going to end well if they're not dealing with permissions."
We caught up with Noah, Microsoft's "chief questions officer," on Thursday after his appearance at the company's headquarters in Redmond, where he helped launch a new AI education initiative in Washington state.
OpenAI last week rolled out Sora 2, a new version of its AI video-generation system that creates hyper-realistic clips from text prompts or existing footage. The new version adds a "Cameo" feature that lets users generate videos featuring human likenesses by uploading or referencing existing images.
The upgrade has made Sora, available on an invite-only basis, one of the most viral consumer tech products of 2025: it's the top free app on Apple's App Store.
It's also drawn intense pushback from major Hollywood talent agencies, which have criticized the software for enabling the use of a person's image or likeness without explicit consent or compensation.
Meanwhile, AI-generated videos depicting deceased celebrities such as Robin Williams and George Carlin have sparked public outrage from their families.
Noah told GeekWire that "this could end up being the most disastrous thing for anyone and everyone involved."
He referenced Denmark, which recently introduced legislation that would give people ownership of their digital likeness.
"I think the U.S. needs to catch up on that ASAP," Noah said.
Legal experts say the next wave of AI video tools, including those from Google and Meta, will test existing publicity and likeness laws. Kraig Baker, a Seattle-based media attorney with Davis Wright Tremaine, said the problem isn't likely to be deliberate misuse by advertisers but rather the flood of casual or careless content that includes people's likenesses, now made possible by AI.
He added that the issue could be especially thorny for deceased public figures whose estates no longer actively manage image rights.
There are broader potential impacts, as New York Times columnist Brian Chen noted: "The tech could represent the end of visual fact, the idea that video could serve as an objective record of reality, as we know it. Society as a whole will have to treat videos with as much skepticism as people already do words."
OpenAI published a Sora 2 safety document outlining its consent-based approach to likeness. "Only you decide who can use your cameo, and you can revoke access at any time," the company says. "We also take measures to block depictions of public figures (except those using the cameos feature, of course)."
Sora initially launched with an opt-out policy for copyrighted characters. But in an update, OpenAI CEO Sam Altman said the company now plans to give "rightsholders more granular control over generation of characters" and establish a revenue model for copyright holders.
The surge of attention on AI video generators is creating opportunity for startups such as Loti, a Seattle company that helps celebrities, politicians, and other high-profile individuals protect their digital likeness.
"Everyone is concerned about how AI will use their likeness and they're looking for trusted tools and partners to help guide them," said Loti CEO Luke Arrigoni.
He said Loti's business is "booming right now," with roughly 30X month-over-month growth in signups. The startup raised $16.2 million earlier this year.