Protecting Crisis Communications From AI Deepfake Attacks


Deepfakes pose a significant challenge for crisis communication teams. In the time it takes to issue a response, a fake video, image or sound clip can spread disinformation, damage reputations and erode trust. For those in security and communications, the question is not if this will happen, but how ready the team is to handle it.

Why Deepfakes Threaten Crisis Communications

AI audio and video technology continues to improve, convincingly duplicating voices, faces and gestures. Someone skilled at crafting prompts can make false content appear realistic; even experts cannot always tell the difference. Deepfake fraud attempts grew by 3,000% in 2023, and humans detected manipulated media only 24.5% of the time. That level of realism makes it easy for attackers to impersonate leaders or issue fake company statements about sensitive events.

The biggest threat is speed: a deepfake can go viral and reach thousands of people before it is debunked. Previously, leaders could follow a customary crisis playbook of verification chains and press briefings. Today, a viral video outpaces any one leader's control.

How to Detect and Mitigate a Deepfake Attack

Catching attacks early and executing a prepared strategy can reduce the damage. Here are the steps organizations can take to protect themselves.

1. Monitor Continuously

Organizations can implement active digital surveillance with media listening tools to detect suspicious uploads, publicly available footage and AI-generated content on social media channels. Companies have started using AI to flag inconsistencies in voice recordings or facial movements before information is shared further. Ideally, security and communications staff should share a dashboard and alert feed so the first person to spot a problem can begin verification.
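The shared-alert idea can be sketched in a few lines. This is a minimal illustration, not a real media-listening integration: the `Post` record and the flagging rule (unverified source mentioning an executive) are hypothetical stand-ins for whatever fields an actual monitoring tool exposes.

```python
from dataclasses import dataclass

# Hypothetical post record from a media-listening feed (fields are illustrative).
@dataclass
class Post:
    source: str               # account or channel the clip came from
    mentions_executive: bool  # does it reference company leadership?
    source_verified: bool     # is the source a known, trusted account?

def needs_review(post: Post) -> bool:
    """Flag clips that reference leadership but come from unverified sources."""
    return post.mentions_executive and not post.source_verified

def route_alerts(posts):
    """Return the posts both security and communications should see first."""
    return [p for p in posts if needs_review(p)]

feed = [
    Post("official-brand-account", mentions_executive=True, source_verified=True),
    Post("unknown-upload", mentions_executive=True, source_verified=False),
]
flagged = route_alerts(feed)
print(len(flagged))  # one suspicious clip routed to the shared dashboard
```

The point of the sketch is the routing: one rule, one shared queue, so security and communications act on the same alert rather than discovering the clip independently.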

2. Verify Sources Before Responding

Vet any video or audio message attributed to an executive or brand account before responding. Stay in contact with forensic investigators and media verification companies that can help determine authenticity. Some organizations cryptographically watermark or digitally sign official correspondence to prove it is genuine. Internal libraries of trusted voice samples, press templates and metadata are also useful for comparison against suspicious clips.
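The signing idea can be sketched with a keyed hash from Python's standard library. Treat this as illustrative only: real deployments typically use asymmetric signatures (for example, Ed25519 key pairs) so that verifiers never hold the signing key, and the key shown here is a placeholder.

```python
import hmac
import hashlib

# Illustrative shared secret; a production setup would use an asymmetric
# key pair managed in a secrets vault, not a hard-coded value.
SIGNING_KEY = b"replace-with-managed-secret"

def sign_statement(text: str) -> str:
    """Attach an HMAC-SHA256 tag to an official statement."""
    return hmac.new(SIGNING_KEY, text.encode(), hashlib.sha256).hexdigest()

def verify_statement(text: str, tag: str) -> bool:
    """Constant-time check that a circulating statement is unaltered."""
    return hmac.compare_digest(sign_statement(text), tag)

statement = "Official company statement: no breach has occurred."
tag = sign_statement(statement)
print(verify_statement(statement, tag))                 # True
print(verify_statement(statement + " (edited)", tag))   # False
```

Even this toy version shows the workflow benefit: anyone holding the verification capability can confirm in seconds whether a circulating "statement" matches what the company actually released.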

3. Prepare a Rapid Response Framework

Since security teams can’t pause a deepfake attack, a response plan should include:

  • A list of those who can verify potential misinformation
  • A list of who communicates with the public
  • Preapproved responses when something must be released immediately

PR Daily recommends preparing templates for responding to a false announcement or fake apology so audiences are not left confused. Speed matters, but accuracy is everything: one trusted source delivering calm facts works better than many hasty denials.

4. Collaborate Across Departments

Combating disinformation requires cross-departmental collaboration. Security, IT, legal and PR should work together to manage defamation or compliance issues, maintain digital signatures that prove content's authenticity, and keep messages consistent with the company's brand identity.

Organizations should combine AI, cybersecurity and digital ethics expertise to build defense frameworks equipped to handle attacks.

5. Train Staff to Recognize Red Flags and Report

As early as 2022, 66% of security professionals had to respond to incidents involving deepfakes. Employees should be trained to look for inconsistent lighting, unnatural blinking or audio alteration to identify manipulated media. Simulated deepfake drills test how quickly staff detect and report misinformation. Instruct customer service, PR and executive staff to adopt a verify-first mindset, as employees and leadership alike may be impersonated in phishing attempts.

6. Be Transparent

If someone posts a deepfake, transparency is the best course of action. A company should do the following:

  • Fact-check what occurred.
  • Speak openly about the situation.
  • Update people frequently. 
  • Conduct forensics to prove it is a forgery.
  • Post confirmation through all owned channels.

Follow up by asking what allowed the fake to spread in the first place. The security response team should identify whether any verification steps were skipped. Updating the playbook after every incident will help in the future.

Protecting Reputations From Lying Pixels

The organizations best positioned to counter AI-generated deception will be those with the strongest collaboration and communication. Trained teams can spot deepfakes, share information and stop the damage quickly. An agile security team can blunt even a sophisticated attacker's attempts and protect its company's reputation.


Devin Partida is an industrial tech writer and the Editor-in-Chief of ReHack.com, a digital magazine for all things technology, big data, cryptocurrency, and more. To read more from Devin, please check out the site.


Follow Brilliance Security Magazine on Twitter and LinkedIn to ensure you receive alerts for the most up-to-date security and cybersecurity news and information. BSM is cited as one of Feedspot’s top 10 cybersecurity magazines.