YouTube will make reducing low-quality AI-generated content and strengthening deepfake detection a major priority in 2026, according to CEO Neal Mohan.
In his annual letter published on Wednesday, Mohan said the rapid spread of artificial intelligence is blurring the line between authentic and synthetic content, creating new risks for creators, viewers, and the platform itself.
“As AI becomes more prevalent, it’s becoming harder to tell what’s real and what’s AI-generated,” Mohan wrote. “That challenge becomes especially serious when it comes to deepfakes.”
AI-generated content floods video platforms
YouTube, one of the world’s largest video platforms, has seen a sharp rise in AI-created videos. Much of this content falls into what creators and viewers now call “AI slop”: videos that are repetitive, low-effort, and designed to game algorithms rather than inform or entertain.
The problem is not unique to YouTube. Similar content has spread across TikTok and Meta-owned platforms. However, YouTube’s scale makes moderation more complex.
In response, Mohan said the company is expanding systems originally built to combat spam and clickbait. These tools now play a larger role in identifying and limiting the reach of low-quality AI content.
Labels, disclosures, and removals
YouTube already requires creators to disclose when they use AI to alter or generate content. The platform also labels AI-created videos and removes synthetic media that violates its policies.
At the same time, YouTube has stepped up its efforts to detect impersonation. In December, the company expanded its likeness detection technology, which flags deepfakes that use a creator’s face or identity without permission. The feature is now rolling out to millions of creators in the YouTube Partner Program.
AI remains a tool, not a replacement
Despite growing concerns, Mohan stressed that YouTube does not see AI as a substitute for human creators.
Instead, the company continues to promote AI as a support tool. In December alone, more than one million channels used YouTube’s AI creation features every day, he said.
Looking ahead, YouTube plans to expand AI-powered tools across its ecosystem, particularly on Shorts. New features will allow creators to generate videos using their own likeness, experiment with AI music, and even build interactive games through text prompts.
Creators remain central to the platform
Mohan described creators as “the new stars and studios,” noting that many now invest heavily in production quality and long-term audiences.
To support that shift, YouTube is rolling out additional monetisation options, including shopping tools, brand partnerships, and fan-funding features such as Jewels and gifts.
Safety also remains a priority. Mohan said YouTube will simplify how parents create and manage accounts for children and teens, aiming to improve protection without limiting access.
Big money, bigger responsibility
Since 2021, YouTube has paid more than $100 billion to creators, artists, and media companies. Analysts estimate that if YouTube operated as a standalone business, it could command a valuation of up to $550 billion.
That scale raises the stakes. As AI-generated content accelerates, YouTube now faces a balancing act. It must support creativity while preventing its platform from being overwhelmed by synthetic noise.
For 2026, Mohan’s message is clear. AI may power the future of video, but unchecked AI content will not define it.
Photo: Neal Mohan, the CEO of YouTube, speaks during a panel for the Summit for Democracy on March 30, 2023 in Washington, DC.
Source: Anna Moneymaker | Getty Images