
While millions are asking “is it real or AI?” we built the answer


By Olga Scryaba

Head of Product at isFake.ai


When you scroll through your feed and see a political scandal, celebrity gossip, or a cute baby playing with a puppy, do you ask yourself, “wait… is it AI?” Until recently, that question belonged to fact-checkers and cybersecurity pros. Now millions of people ask it under thousands of posts on social media, and most can’t find a reliable answer.

I'm Olga Scryaba, Head of Product at isFake.ai, and I've been watching this shift happen in real time from the vantage point of cybersecurity, machine learning, and AI detection.


These days, AI content itself is no longer the problem; most people readily accept its use. What society doesn’t accept is deception. When creators hide that content is AI-generated, and when audiences can't distinguish authentic from synthetic, the fabric of online trust deteriorates. And unlike technical problems, trust problems don't have technical solutions alone.


Between July 2023 and July 2024, 82 deepfakes targeting politicians surfaced across 38 countries during election seasons. In September 2024, a fake video scam occurred somewhere in the world every five minutes. And in Q1 2025 alone, scammers using convincing deepfakes stole over $200 million globally.


But the statistics don't capture what really terrifies me: the normalization of the lie. We've gone from excusing AI use between peers as "just a prank" to monetizing fake natural-disaster videos on TikTok. And now we're at the point where people can no longer trust their eyes and ears and, ironically, ask Grok for a verdict on whether a post is real or AI…


A new approach to AI detection

At isFake.ai, we built something different from the typical AI detector. We didn't want to create yet another black box that hands you a yes-or-no verdict with no room for conversation. We're a team of researchers and cybersecurity experts, so we know that mystery breeds distrust. That's why we designed our platform to show people exactly why it flags something as AI-generated.


Our platform analyzes text, images, videos, and audio to identify synthetic content, but we present the results differently. For text, we highlight the specific passages that triggered detection. For images, we generate heatmaps showing the regions that appear AI-generated. For video, we mark the precise frames and timestamps where deepfake indicators are strongest. And for audio, we provide waveform analysis revealing the signatures of cloned voices. Every analysis comes with visual evidence and detailed explanations.
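
To make that idea concrete, here is a minimal sketch of what an evidence-backed detection result could look like as a data structure. This is illustrative only, not the actual isFake.ai API; every type and field name below is an assumption.

```typescript
// Hypothetical shape of an evidence-backed detection result.
// All names are illustrative assumptions, not the real isFake.ai API.

interface TextEvidence {
  kind: "text";
  passage: string;      // the exact span that triggered detection
  startOffset: number;  // character offsets into the submitted text
  endOffset: number;
}

interface ImageEvidence {
  kind: "image";
  heatmapUrl: string;   // heatmap overlay highlighting AI-looking regions
}

interface VideoEvidence {
  kind: "video";
  frame: number;        // frame index with the strongest indicator
  timestampMs: number;  // position of that frame in the clip
}

interface AudioEvidence {
  kind: "audio";
  waveformUrl: string;  // waveform view marking cloned-voice signatures
  startMs: number;
  endMs: number;
}

type Evidence = TextEvidence | ImageEvidence | VideoEvidence | AudioEvidence;

interface DetectionResult {
  verdict: "likely-ai" | "likely-authentic" | "inconclusive";
  confidence: number;   // 0..1, never reduced to a bare yes/no
  evidence: Evidence[]; // every verdict ships with its supporting evidence
  explanation: string;  // human-readable reasoning behind the flag
}
```

The design choice the article describes, pairing every verdict with the reasons behind it, shows up here as the mandatory `evidence` and `explanation` fields: in a structure like this, a consumer cannot receive a verdict without also receiving the evidence.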


This approach matters because we're not positioning isFake as a judge. We prefer to think of it as a Google of truth. When educators use our platform to check students' work and find AI content, the result becomes a starting point for a discussion of research methods and academic integrity, not a witch hunt. When journalists verify content before publication, we see it as their professional defense against misinformation. And when businesses protect their brands from deepfakes of their CEO, CFO, and other executives, they're managing risk and protecting their customers, partners, and employees.


So this transparency is not just a technical challenge; it's also an education challenge and, really, a societal one. We're working on it through continuous research and open collaboration. We're also building searchable content verification that will let anyone paste a link to a post and get a detailed analysis of how much of it is AI.
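
From a caller's perspective, such link-based verification might look like the sketch below. The endpoint, request shape, and `aiShare` field are assumptions made for illustration; this is not a published API.

```typescript
// Hypothetical client call for link-based verification.
// Endpoint and response fields are illustrative assumptions.
async function verifyPost(postUrl: string): Promise<void> {
  const response = await fetch("https://api.example.com/v1/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: postUrl }),
  });
  const result = await response.json();
  // e.g. { aiShare: 0.72, evidence: [...] } — estimated AI share of the post
  console.log(`Estimated AI share: ${Math.round(result.aiShare * 100)}%`);
}
```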


Our goal is to make transparency the default and let people see exactly what they trust and share online. Because as synthetic content spreads everywhere, clarity becomes the most valuable currency.

