
How to Spot a Deepfake Video
Videos have always been considered powerful pieces of evidence; "seeing is believing," as the saying goes. But that old truth is now under attack. In recent years, a new kind of fabricated media called a deepfake has emerged: video or audio generated by artificial intelligence that convincingly depicts events that never happened. These fakes can show public figures making false statements, place one person's face onto another person's body, or replicate someone's voice delivering messages they never spoke.
While some deepfakes are created for harmless entertainment, others are far more dangerous. They have been used to spread political lies, manipulate financial markets, and even give fake medical advice that could put people’s lives at risk. For example, AI-generated “doctors” have appeared on TikTok dispensing dangerous health guidance, complete with a fabricated backstory and digitally generated faces.
And these fakes aren't limited to obscure corners of the internet. Deepfakes are already appearing in social media feeds and can even be broadcast in television news segments. Experts warn that they could make their way into commercial advertising, blurring the line between marketing and manipulation. The troubling reality is that deepfake technology keeps getting better, and harder to spot. But even the most sophisticated deepfakes usually contain subtle errors that can reveal their true nature.
With the help of AI insights and research assistance, let's explore how to tell whether a video is a deepfake.
1. Pay Attention to Eye Movement and Blinking Patterns
Human blinking follows natural, unpredictable rhythms. Deepfake algorithms often have difficulty replicating these patterns convincingly. In early deepfakes, subjects barely blinked at all. Modern versions have improved, but blinking can still look unnatural: too slow, too frequent, or oddly timed.
Watch closely for eyes that seem overly fixed or glassy, or blinks that happen at the wrong moment in a conversation. These subtle discrepancies can be a clue that the footage has been digitally manipulated.
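For readers curious how researchers quantify this, blink detection is commonly done with the "eye aspect ratio" (EAR): the eye's vertical opening divided by its width, which dips sharply during a blink. The sketch below assumes you already have per-frame eye landmark coordinates from some face-landmark detector (the six-point ordering and the 0.21 threshold follow common practice, but both are assumptions, not a standard):

```python
import math

def ear(eye):
    # Eye aspect ratio: (vertical openings) / (2 x horizontal width).
    # eye is six (x, y) points: left corner, two top points,
    # right corner, then two bottom points.
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_intervals(ear_series, fps, threshold=0.21):
    # Given one EAR value per frame, find frames where a blink starts
    # (EAR dips below the threshold) and return the gaps between
    # blinks in seconds. Unnaturally regular or sparse gaps are a clue.
    blink_frames = []
    below = False
    for i, value in enumerate(ear_series):
        if value < threshold and not below:
            blink_frames.append(i)
            below = True
        elif value >= threshold:
            below = False
    return [(b - a) / fps for a, b in zip(blink_frames, blink_frames[1:])]
```

A healthy adult blinks roughly every 2 to 10 seconds; a series whose intervals are metronome-regular, or nearly empty, is worth a closer look.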
2. Examine Lighting and Shadows
One of the biggest giveaways in many deepfakes is mismatched lighting. If a person’s face is brightly lit but the background suggests they should be in shadow—or if shadows fall in different directions—it could be a fake.
Also, pay attention to skin tone consistency. Artificially generated faces may have patchy lighting or strange color shifts, especially around the edges of the face. This is particularly noticeable if the person moves their head and the lighting doesn’t change naturally with their movement.
3. Look for Facial Artifacts and Glitches
Even the most advanced AI has trouble blending a face seamlessly into a moving scene. If you spot blurriness around the jawline, warping near the ears, or distortion when the subject turns quickly, that’s a red flag.
Small details like glasses, earrings, and facial hair are also tricky for deepfake creators. These can appear fuzzy, change shape slightly from frame to frame, or fail to cast realistic shadows.
4. Check Lip Sync Accuracy
Lip movement in a deepfake may appear almost perfect, but it often isn’t. Look for moments where the lips finish moving slightly before or after the audio. Pay extra attention to hard consonants like “p,” “b,” and “m,” which require the lips to fully close—if they don’t, that’s suspicious.
5. Watch the Head and Body Movement
Deepfakes can struggle with natural head motion. A fake video might show a head that turns unnaturally, moves stiffly, or doesn’t quite align with the body’s posture. In some cases, the neck may appear blurred or too thin when the person turns, or the head might look as if it’s “floating” slightly apart from the shoulders.
6. Listen for Voice Clues
Deepfakes aren’t just visual—they can include audio deepfakes that perfectly mimic a person’s voice. These AI-generated voices often lack the natural imperfections of human speech, such as slight changes in pitch, natural breathing, or emotional emphasis.
This isn’t just theoretical. In one real-world case, a UK energy firm’s CEO was tricked into transferring about €220,000 (roughly $243,000) after receiving a call from what sounded like his boss. In reality, the voice was an AI deepfake designed to mimic the German executive’s accent and tone.
More recently, cybersecurity firm Wiz reported that scammers used a deepfake of its CEO’s voice to leave voicemails for employees, requesting sensitive credentials. Fortunately, the attempt failed.
7. Spot Unnatural Emotional Reactions
Real human emotions show up in micro-expressions—tiny, involuntary muscle movements that AI often overlooks. If someone smiles but their eyes stay cold, or if they suddenly change from laughing to serious in an instant, it could be a digital creation.
Deepfakes often overemphasize big facial expressions but fail to capture these subtle emotional details.
8. Scan the Background
Many deepfake creators focus almost entirely on the person’s face, leaving the background neglected. Look for flickering objects, warped lines, or blurry areas around the subject. If the background shifts oddly when the person moves, or if textures like bricks or patterns distort, that’s suspicious.
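Purely as an illustration of the idea (not a production detector), the background check can be framed as frame differencing: in a static shot, pixels away from the subject should barely change between consecutive frames, so heavy churn in a background region is suspicious. The sketch below assumes frames are plain 2D lists of grayscale values and that you supply the region to inspect yourself:

```python
def region_change(frame_a, frame_b, box):
    # Mean absolute per-pixel change inside box = (top, left, bottom, right),
    # for two consecutive frames given as 2D lists of 0-255 grayscale values.
    top, left, bottom, right = box
    total, count = 0, 0
    for y in range(top, bottom):
        for x in range(left, right):
            total += abs(frame_a[y][x] - frame_b[y][x])
            count += 1
    return total / count
```

A near-zero score in a background region is what a still camera should produce; a score rivaling the face region, with no camera movement to explain it, matches the flicker and warping described above.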
9. Question the Context
Sometimes the most important clue isn’t in the pixels—it’s in the situation. Ask yourself: Does this make sense? Would this person realistically be saying or doing this, in this location, at this time?
During the 2024 French legislative elections, deepfake videos emerged showing fabricated news broadcasts, including a fake France 24 segment claiming an assassination plot. In Argentina’s 2023 primary elections, altered images and videos falsely portraying politicians spread widely on social media.
And in South Africa’s 2024 elections, AI-generated videos depicted international celebrities like Joe Biden and Eminem endorsing local political parties—content designed to blur the line between satire and manipulation.
10. Verify with Reverse Search Tools
If you suspect a video might be fake, use reverse image search tools like Google Images, TinEye, or InVID to look for the original source. You may find an earlier, genuine version that has been altered, or discover that the footage comes from an entirely unrelated event.
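Under the hood, reverse-search tools rely on perceptual fingerprints that survive resizing and re-encoding. A toy version is the "average hash": shrink the image to an 8x8 grid and record which cells are brighter than the mean; near-identical images then get near-identical bit patterns. The sketch below is a simplified illustration on a grayscale pixel grid, not what any particular search engine actually runs:

```python
def average_hash(gray, size=8):
    # Downsample a 2D grayscale grid to size x size by block averaging,
    # then emit one bit per block: 1 if brighter than the overall mean.
    h, w = len(gray), len(gray[0])
    blocks = []
    for by in range(size):
        for bx in range(size):
            vals = [gray[y][x]
                    for y in range(by * h // size, (by + 1) * h // size)
                    for x in range(bx * w // size, (bx + 1) * w // size)]
            blocks.append(sum(vals) / len(vals))
    mean = sum(blocks) / len(blocks)
    return [1 if v > mean else 0 for v in blocks]

def hamming(h1, h2):
    # Number of differing bits; a small distance suggests the same image.
    return sum(a != b for a, b in zip(h1, h2))
```

Because the hash compares each block to the image's own mean, a uniformly brightened copy produces the same fingerprint, while a genuinely different image lands far away in Hamming distance. That robustness is what lets a search tool match an altered clip back to its original.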
11. Use AI to Check a Video
Consider enlisting an AI tool such as ChatGPT or Claude to help evaluate a video. Note that most general-purpose chatbots cannot actually watch a video from a link, so upload screenshots or key frames, or describe what you saw, and ask something like: "Here are frames from a video. Do you see signs that it could be a deepfake? Explain how you arrived at your conclusion." The response can be surprisingly helpful, but treat it as one input among several, not a final verdict.
Conclusion on Deepfake Videos
Deepfakes are no longer a fringe curiosity—they’re a growing part of our media environment, and they can appear almost anywhere: in a casual social media post, embedded in a television segment, or even hidden inside what looks like a legitimate commercial. They have the potential to mislead millions and cause real harm, from personal scams to national-level disinformation.
The most powerful defense against deepfakes is awareness—and a refusal to take things at face value. Before you share, stop and verify. Ask yourself: Has this been confirmed by multiple credible sources? Is this being reported by trustworthy outlets? Is there an original source that can be traced and authenticated?
This is especially important when a video assigns blame, shame, or attribution. A convincing deepfake can ruin a reputation, destroy a career, or provoke outrage in minutes. Once misinformation spreads, the correction rarely reaches as many people as the original fake—and the damage is often permanent. That’s why it’s crucial to check facts before repeating claims, especially those that portray someone in a damaging light.
If you’re not certain a video is authentic, don’t share it “just in case” or “for discussion.” Every unverified repost gives a fake more reach and legitimacy. By pausing, verifying, and questioning suspicious content, you protect not only yourself but also the people who could be unfairly targeted. In a world where seeing is no longer believing, responsible sharing and questioning validity are two of the most powerful tools against deception.
Related Articles:
- What to Know About AI - A Guide to the Basics
- What is a Deepfake?
- A Guide to ChatGPT for Beginners
- 15 Success Tips for First-Time Entrepreneurs
About the Authors:
Dominique A. Harroch is the Chief of Staff at AllBusiness.com. She has been the Chief of Staff or Operations Leader for multiple companies, where she leveraged her extensive experience in operations management, strategic planning, and team leadership to drive organizational success. Her background spans more than two decades of operations leadership, event planning at her own start-up, and marketing at various financial and retail companies. Dominique is known for her ability to optimize processes, manage complex projects, and lead high-performing teams. She holds a BA in English and Psychology from U.C. Berkeley and an MBA from the University of San Francisco. She can be reached via LinkedIn.
Richard D. Harroch is a Senior Advisor to CEOs, management teams, and Boards of Directors. He is an expert on M&A, venture capital, startups, and business contracts. He was the Managing Director and Global Head of M&A at VantagePoint Capital Partners, a venture capital fund in the San Francisco area. His focus is on internet, digital media, AI and technology companies. He was the founder of several Internet companies. His articles have appeared online in Forbes, Fortune, TIME, MSN, Yahoo, Fox Business and AllBusiness.com. Richard is the author of several books on startups and entrepreneurship as well as the co-author of Poker for Dummies and a Wall Street Journal-bestselling book on small business. He is the co-author of a 1,500-page book published by Bloomberg on mergers and acquisitions of privately held companies. He was also a corporate and M&A partner at the international law firm of Orrick, Herrington & Sutcliffe. He has been involved in over 200 M&A transactions and 250 startup financings. He can be reached through LinkedIn.
Copyright © by Richard D. Harroch. All rights reserved.



