A few days ago, a video claiming to show a lion approaching a man asleep on the streets of Gujarat, sniffing him, and walking away took social media by storm. It looked like CCTV footage. The clip was dramatic and surreal, but completely fake. It was made using artificial intelligence (AI), but that didn’t stop it from going viral. The video was even picked up by some news outlets and reported as a real incident, without any verification. It originated from a YouTube channel, The World of Beasts, which inconspicuously mentioned ‘AI-assisted designs’ in its bio.

In another viral clip, a kangaroo – allegedly an emotional support animal – was seen attempting to board a flight with its human. Again, viewers were fascinated, with many believing the clip to be real. The video first appeared on the Instagram account ‘Infinite Unreality,’ which openly brands itself as ‘your daily dose of unreality.’

The line between fiction and reality, now more than ever, isn’t always obvious to casual users.


From giant anacondas swimming freely through rivers to a cheetah saving a woman from danger, AI-generated videos are flooding platforms, often blurring the boundary between the real and the unbelievable. With AI tools becoming more advanced and accessible, these creations are only growing in number and sophistication.

To understand just how widespread the problem of AI-generated videos is, and why it matters, The Indian Express spoke to experts at the intersection of technology, media, and misinformation.

Harder to spot, easier to make

“Not just in the last year, not just in the last month, even in the last couple of weeks, I have seen the volume of such videos increase,” said Ben Colman, CEO of deepfake detection firm Reality Defender. He cited a recent example: a 30-second commercial by betting platform Kalshi that aired a few weeks ago, during Game 3 of the 2025 NBA Finals. The video was made using Google’s new AI video tool, Veo 3.

Sam Gregory, executive director of Witness, a non-profit that trains activists in using tech for human rights, said, “The quantity and quality of synthetic audio and video have rapidly increased over the past year,” adding that detection is only now catching up.


Why AI videos dominate your feeds

The reason platforms like Instagram, Facebook, TikTok, and YouTube push AI-generated videos, beyond technical novelty, is not very complex: such videos grab user attention, something all platforms are desperate for.

Colman said, “These videos make the user do a double-take. Even negative reactions on social media beget more engagement.”

“Improvements in fidelity, motion, and audio have made it easy to create realistic synthetic content,” he added.

According to Ami Kumar, founder of Social & Media Matters, “The amplification is extremely high. Unfortunately, platform algorithms prioritise quantity over quality, promoting videos regardless of accuracy or authenticity.”


Gregory, however, said that demand also plays a role. “Once you start watching AI content, your algorithm feeds you more. ‘AI slop’ is heavily monetised,” he said.

Detection can’t keep pace

“Our own PhDs have failed to distinguish real photos or videos from deepfakes in internal tests,” Colman admitted.

Are the big platforms prepared to put labels and checks on AI-generated content? Not yet. Colman said most services rely on “less-than-bare-minimum provenance watermark checks,” which many generators ignore or can spoof.

Gregory warned that “research increasingly shows the average person cannot distinguish between synthetic and real audio, and now the same is becoming true for video.”


When it comes to detection, Gregory pointed to an emerging open standard, C2PA (Coalition for Content Provenance and Authenticity), that could track the origins of images, audio, and video, but it is not yet adopted across all platforms. Meta, he noted, has already shifted from policing the use of AI to policing only content deemed “deceptive and harmful.”
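C2PA works by attaching a cryptographically signed “manifest” to a media file, recording which tool created it and what actions were taken. A minimal sketch of how a platform might flag AI-generated content from such a manifest, assuming it has already been parsed into a dictionary (the real standard embeds signed binary structures and certificates; the layout and sample values below are illustrative, though `trainedAlgorithmicMedia` is the IPTC digital-source-type C2PA uses for AI-generated media):

```python
def is_ai_generated(manifest: dict) -> bool:
    """Check a parsed, C2PA-style manifest for AI-generation markers.

    Looks through 'c2pa.actions' assertions for any action whose
    digitalSourceType indicates trained algorithmic (AI) media.
    This simplified dict layout is illustrative, not the wire format.
    """
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            source = action.get("digitalSourceType", "")
            if source.endswith("trainedAlgorithmicMedia"):
                return True
    return False


# Hypothetical manifest resembling what an AI video tool might emit.
sample_manifest = {
    "claim_generator": "ExampleVideoGenerator/1.0",
    "assertions": [{
        "label": "c2pa.actions",
        "data": {"actions": [{
            "action": "c2pa.created",
            "digitalSourceType": (
                "http://cv.iptc.org/newscodes/digitalsourcetype/"
                "trainedAlgorithmicMedia"
            ),
        }]},
    }],
}

print(is_ai_generated(sample_manifest))  # True
print(is_ai_generated({"assertions": []}))  # False
```

The catch, as the experts note, is that this only works when creators attach provenance data in the first place and platforms bother to read it.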

Talking about AI-generated video detection, Kumar said, “The gap is widening. Low-quality fakes are still detectable, but the high-end ones are nearly impossible to catch without advanced AI systems like the ones being built at Contrails.” However, he is cautiously optimistic that the regulatory tide, especially in Europe and the US, will force platforms to label AI output. “I see the scenario improving in the next couple of years, but sadly loads of damage will be done by then,” he said.

Everyone is a creator now, because of monetisation

A good question to ask is: who is making all these clips? The answer is, everyone.

“My kids know how to create AI-generated videos, and the same tools are used by hobbyists, agencies, and state actors,” Colman said.


Gregory agreed. “We are all creators now,” he said. “AI influencers, too, are a thing. Every new model spawns fresh synthetic personalities,” he added, pointing to a growing trend of commercial actors churning out ‘AI slop’ – cheap, fantastical content designed purely to capture attention.

Kumar estimated that while around 90 per cent of such content is made for entertainment, the rest can do real damage. A case in point is the falsified migrant-landing footage that went viral via United Kingdom-based activist Tommy Robinson.

Creativity versus manipulation

According to Colman, AI is a creative aid – not a replacement – and intention should be clearly separated from artistic expression. “It becomes manipulation when people’s emotions or beliefs are deliberately exploited,” he said.

Gregory pointed out one of the challenges: satire and parody can easily be misinterpreted when stripped of context.


Kumar had a pragmatic stance: “Intent and impact matter most. If either is negative, malicious, or criminal, it is manipulation.”

The stakes leap when synthetic videos enter conflict zones and elections. Gregory recounted how AI clips have misrepresented confrontations between protesters and US troops in Los Angeles. “One fake National Guard video racked up hundreds of thousands of views,” he said. Kumar said deepfakes have become routine in wars from Ukraine to Gaza and in election cycles from India to the US.

What can be done?

Colman called for forward-looking laws: “We need proactive legislation mandating detection or prevention of AI content at the point of upload.”

Gregory advocated for tools that reveal a clip’s full “recipe” across platforms, while warning of a “detection-equity problem”: current tools often fail to catch AI content in non-English languages or compressed formats.


Kumar demanded “strict laws and heavy penalties for platforms and individuals distributing AI-generated misinformation.”

What’s at stake for truth?

“If we lose confidence in the evidence of our eyes and ears, we will distrust everything,” Gregory warned. “Real, critical content will be just another drop in a flood of AI slop.”

Synthetic content is, clearly, here to stay. Whether it becomes a tool for creativity or a weapon of mass deception will depend on the speed at which platforms, lawmakers, and technologists can build, and adopt, defences that keep the signal from being drowned out by deepfake noise.