Technology experts have urged people to stay vigilant when watching videos on social media, sharing several ways to easily identify an AI-generated video.
We’re entering a new era of online video content – one that experts say comes with a serious risk. AI-generated videos are spreading rapidly across social media, pushing digital deception to an all-time high. Here’s what you need to know to tell real content from fake.
A tech expert has warned that ultra-realistic AI video tools such as Sora 2 are being used by fraudsters to fuel a wave of deepfake scams online.
Deepfake videos have surged from half a million in 2023 to over eight million this year, with scam attempts increasing by 3,000 per cent in just two years, according to cybersecurity firm DeepStrike. Experts warn it is becoming increasingly difficult to tell real from fake, as humans can now detect high-quality deepfakes only one in four times, leaving millions vulnerable to being scammed.
Deepfakes and AI-generated videos began as a lighthearted internet trend but have quickly grown into a multi-billion-pound criminal enterprise. Scammers are now using AI to replicate voices, faces and emotions, deceiving victims into handing over money or spreading false information.
Real-time deepfake video calls are expected to become more common, with scammers impersonating loved ones or bosses live on screen, using cloned voices and similar mannerisms. This is made possible by tools like Sora, a text-to-video app that can generate realistic videos in under a minute, the Mirror reports.
The team at tech and software company Outplayed has outlined several key ways to easily identify an AI-generated video. These include:
- Blinking and Eyes: Subjects blink less or move their eyes in robotic patterns.
- Hands and Fingers: Look for warped or merged fingers – AI still struggles with hands.
- Lighting Errors: Shadows fall in odd directions or faces look too bright.
- Water and Reflections: Liquid looks “too perfect” or unnaturally still.
- Lip Sync Issues: Lips don’t quite match the words – especially on “p” or “b” sounds.
- Smooth Skin: AI faces appear plastic-like, without pores or texture.
- Glitchy Backgrounds: Warped objects, flickering edges or visible watermarks give the game away.
- Fabric Movement: Clothes move stiffly or unnaturally in wind.
- Scene Transitions: Awkward jumps or “blips” mid-shot suggest synthetic editing.
- Strange Context: People appearing out of character or in odd places.
- Gut Feeling: If it feels off – it probably is.
An expert at Outplayed has urged people to recognise just how rapidly this technology is advancing. “What once took Hollywood studios weeks to produce can now be made by anyone with a laptop. The videos from the latest AI tools like Sora 2 are almost indistinguishable from reality – and that’s terrifying when you consider how it could be misused,” they said.
People are falling victim to scams on social media – in business dealings and even in romantic relationships – as criminals exploit the longstanding instinct to believe what you see. The expert continued: “People trust video evidence. That’s why we’re entering a dangerous new era with deepfakes on a rapid rise. They bypass logic and go straight for emotion.”
Sora now has over one million downloads, and users are creating short clips that are causing controversy. Many are generating videos of celebrities, including Jake Paul and Michael Jackson, blurring the line between fiction and reality.
Following the appearance of several disrespectful deepfakes, the platform’s creators were forced to ban videos featuring Martin Luther King Jr, a move that highlights the growing ethical challenges posed by this technology.
