
Seven Ways to Train Employees to Detect Deepfakes
Deepfakes are manipulated or wholly synthetic videos, audio, and text produced by generative AI. The modus operandi behind deepfake attacks is to circumvent conventional defenses by exploiting victims’ trust. The best can look so realistic that even the most diligent struggle to distinguish real from fake. No one is immune.
Cybercriminals leverage automation to conduct powerful deepfake attacks at scale, impersonating business leaders to commit financial fraud, damage brand reputations, or spread misinformation that appears to originate from trusted sources. Since the tools to produce deepfakes are readily available, there is little barrier to entry.
The most critical component of defending against deepfakes is enabling humans. Organizations must create a culture of constant vigilance. They must provide continuous defense drills to fight AI-facilitated deception techniques, such as sophisticated phishing and deepfakes.
The following are a few effective strategies that organizations can embrace:
1. Immersive deepfake awareness training
Organizations must prepare employees with the skills and knowledge to detect and counter deepfake attacks. A cybersecurity training initiative integrating simulated tools can train users in real-world settings by exposing them to AI-generated videos, voice clones, and images. Interactive workshops using real deepfake examples, such as fake audio and cloned CFO video calls, can help participants recognize the threats and stay vigilant.
Users can learn to spot red flags that indicate deepfake attacks: suspicious requests to share passwords or transfer funds immediately, and videos with unnatural facial movements, a lack of throat movement, blurry side profiles, or speech that sounds off. Since deepfake techniques are always evolving, conduct refresher training at least quarterly to keep users informed about new threats.
2. Deepfake phishing and social engineering training drills
Defense against deepfake phishing and socially engineered attacks involves understanding how AI is used to tailor and impersonate trusted sources and identifying emotional triggers in social engineering attacks.
Integrate AI simulations with deepfake elements into phishing training exercises to gauge employee preparedness. Providing immediate feedback on identifying deceptive tactics and rewarding employees who can spot anomalies helps enhance learning.
3. Verify to confirm
Set mandates that require users to verify sensitive requests (e.g., wire transfers) through secondary channels, such as a phone call or an in-person meeting. Security questions or “safe” words can be used to authenticate high-risk transactions. Tools such as Google Reverse Image Search (Google Lens), Microsoft Video Authenticator, or Intel’s FakeCatcher can help identify fake media. But beware: many detection tools can provide a false sense of security and accuracy. Test with multiple tools and investigate further before drawing a conclusion.
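The “verify to confirm” mandate above can be expressed as a simple policy gate: a high-risk request is blocked unless at least one out-of-band confirmation has succeeded. The sketch below is a minimal, hypothetical illustration; the request fields, action names, and channel names are invented for the example, not drawn from any particular product.

```python
from dataclasses import dataclass, field

# Hypothetical policy: these actions require out-of-band verification,
# and only these channels count as verification. The originating
# channel (email, video call) never does, since it may be a deepfake.
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_change"}
REQUIRED_CHANNELS = {"phone_callback", "in_person"}

@dataclass
class SensitiveRequest:
    requester: str
    action: str                       # e.g., "wire_transfer"
    amount: float
    verified_channels: set = field(default_factory=set)

def approve(request: SensitiveRequest) -> bool:
    """Approve only if a secondary, out-of-band check succeeded."""
    if request.action not in HIGH_RISK_ACTIONS:
        return True
    return bool(request.verified_channels & REQUIRED_CHANNELS)

req = SensitiveRequest("cfo@example.com", "wire_transfer", 50_000.0)
assert approve(req) is False          # unverified request is blocked
req.verified_channels.add("phone_callback")
assert approve(req) is True           # cleared after a live callback
```

The key design point is that verification status is recorded per channel, so a convincing video call alone can never satisfy the policy.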
4. Mitigate cognitive biases
Threat actors attempt to persuade victims to believe deepfake content by exploiting cognitive weaknesses, including perception, trust, and human biases.
For example, users with confirmation bias tend to accept information that supports their existing beliefs; users with authority bias often comply blindly with demands from someone in a position of power; and users gripped by urgency (anxiety) are more likely to fall for false emergency requests that subvert their rationality.
Firms must train employees to “verify, then trust” to break the habit of acting on assumptions. They must also cultivate intellectual humility: openness to the possibility of being mistaken, a willingness not to cling rigidly to one’s views, and the habit of asking, “What might I be missing here?”
5. Deploy technology and policies
Provide users with access to deepfake detection tools like Adobe’s Content Credentials and OpenAI’s GPT detectors, although these tools have limitations.
To help validate content authenticity, consult c2pa.org, a coalition working to develop technical standards for verifying the source and history (provenance and authenticity) of digital content to combat misinformation.
Employ dynamic authentication with Zero Trust policies to ascertain the trustworthiness of users, devices, and actions for financial transactions, data access, and sensitive communications.
6. Collaborative resilience
Simulate coordinated deepfake attacks across multiple functions, such as HR, finance, and IT, to help facilitate a collaborative defense.
For instance, a well-coordinated simulated HR email paired with a fake CFO video call makes for an effective interdepartmental drill. It’s essential to have cybersecurity experts lead training and awareness initiatives focused on emerging AI threats and effective strategies to counter them.
Organizations can remain one step ahead of threat actors by exchanging threat intelligence with industry organizations and peers. Take advantage of free resources available from OWASP.
7. Evaluate and refine training modules
Organizations can use anonymous surveys to collect feedback on employee knowledge and confidence levels.
This data-driven method enables training programs to be relevant and focused, thereby enhancing the organization’s resilience against deepfake-related threats. Monitoring phishing simulation success rates and employee feedback can also help improve training programs.
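Monitoring simulation success rates, as suggested above, can be as simple as tracking per-round click and report rates and checking their direction over time. The sketch below uses entirely hypothetical sample data and field names; it only illustrates the kind of trend analysis a training team might run.

```python
# Hypothetical phishing-simulation results: per-round counts of
# employees who clicked a simulated deepfake lure vs. reported it.
rounds = [
    {"quarter": "Q1", "sent": 200, "clicked": 46, "reported": 30},
    {"quarter": "Q2", "sent": 200, "clicked": 31, "reported": 58},
    {"quarter": "Q3", "sent": 200, "clicked": 18, "reported": 92},
]

def metrics(r: dict) -> dict:
    """Normalize raw counts into comparable rates."""
    return {
        "quarter": r["quarter"],
        "click_rate": r["clicked"] / r["sent"],
        "report_rate": r["reported"] / r["sent"],
    }

trend = [metrics(r) for r in rounds]

# A falling click rate alongside a rising report rate suggests the
# training is sticking; flat or worsening numbers flag modules to refine.
improving = all(a["click_rate"] > b["click_rate"]
                for a, b in zip(trend, trend[1:]))
print(improving)  # True for this sample data
```

Pairing these rates with the survey feedback mentioned earlier gives both a behavioral and a self-reported view of how well the modules are working.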
Deepfakes exploit the increasing sophistication of AI technologies to create manipulative content designed to evade human detection and evoke a response that benefits the attacker. Yet people accustomed to repeated exposure to synthetic content via training exercises can serve as a critical line of defense against AI-driven deception. That’s because human judgment blends emotional intelligence, context awareness, and skepticism—attributes that machines cannot (at least not yet) convincingly replicate.
Providing employees with immersive security training and awareness helps them develop the experience and cognitive resilience necessary to thwart deceitful and manipulative attacks.
About the Author
Perry Carpenter is Chief Human Risk Management Strategist at KnowBe4, the world-renowned cybersecurity platform that comprehensively addresses human risk management. His latest book, “FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions,” explores AI’s role in deception. With over two decades in cybersecurity focusing on how cybercriminals exploit human behavior, Perry hosts the award-winning podcasts 8th Layer Insights and Digital Folklore.
Catch up with him on social:
X: @PerryCarpenter