Is Deepfake Technology Outpacing Security Countermeasures?
Every good CISO is closely watching an emerging cybersecurity threat with the potential to significantly impact every organization: deepfake technology. In what’s considered one of the first successful uses of deepfake technology by cyber criminals, it was reported in early 2024 that a finance employee at a multinational company was scammed out of $25 million by a deepfake impersonation of a corporate executive during a video conference. It’s a harrowing story, and the perfect opportunity to discuss what deepfakes are, how they work, how they’re created, and whether current IT security countermeasures are up to the task of combating this growing threat.
What is a Deepfake?
A deepfake is synthetic media in which a person in an existing image or video is replaced with someone else’s likeness using advanced artificial intelligence (AI) and machine learning technologies. These convincing fake videos and audio recordings have become increasingly sophisticated, making it difficult to distinguish between what’s real and what’s fabricated.
How Does a Deepfake Work?
Deepfake technology leverages AI algorithms, specifically deep learning methods, to analyze and learn the characteristics of a person’s face and voice. It then superimposes these learned characteristics onto another individual in videos or audio recordings, creating a realistic, but entirely fabricated, representation.
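To make the idea concrete, here is a deliberately simplified, illustrative sketch (in PyTorch) of the classic face-swap design: a single shared encoder learns general facial structure, while a separate decoder is trained for each identity. Every layer size, name, and shape below is an assumption chosen for readability, not a real deepfake tool.

```python
# Minimal, illustrative sketch of the classic face-swap idea: one shared encoder
# learns general facial structure, and a separate decoder per identity learns to
# reconstruct that person's face. "Swapping" is encoding person A's frame and
# decoding it with person B's decoder. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),          # assumes 64x64 face crops
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()      # one decoder per identity

frame_of_person_a = torch.rand(1, 3, 64, 64)     # placeholder face crop
swapped = decoder_b(encoder(frame_of_person_a))  # A's expression, B's face
print(swapped.shape)  # torch.Size([1, 3, 64, 64]); untrained here, so just noise
```

Training is what turns that noise into a photorealistic fake; the architecture itself is surprisingly small.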
How to Create a Deepfake
Creating a deepfake involves feeding vast amounts of video and audio data into these AI models, allowing them to accurately understand and mimic the target’s facial movements, voice, and expressions. This process requires significant computational power and sophisticated software, but as technology advances, it’s becoming more accessible to a broader audience, including cyber criminals.
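As a rough illustration of what “feeding vast amounts of video data” looks like in practice, the sketch below pulls face crops out of a single video file using OpenCV’s bundled Haar face detector. The file names, crop size, and sampling rate are placeholders, and real tools use far more footage and far better face alignment.

```python
# Rough sketch of the data-collection step: extract face crops from a video so
# they can be fed to a model like the one sketched above. Uses OpenCV's bundled
# Haar face detector; file names, crop size, and sampling rate are assumptions.
import os
import cv2

os.makedirs("faces", exist_ok=True)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture("target_person.mp4")      # hypothetical source footage

saved, frame_index = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % 10 == 0:                    # sample every 10th frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
            cv2.imwrite(f"faces/face_{saved:05d}.png", crop)
            saved += 1
    frame_index += 1
cap.release()
print(f"Collected {saved} face crops")
```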
What are Some Deepfake Security Concerns?
Deepfakes pose significant challenges for businesses, especially as deepfake technology becomes more accessible to cyber criminals. A recent report from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) outlined the threat deepfakes pose, noting that it will only grow as criminals become more sophisticated in how they deploy the technology.
Imagine you’re an attorney and you receive a Zoom video call (it could just as easily be Microsoft Teams, Cisco Webex, or any other audio/video app, most of which are little more than traditional telecom in a fancy collaboration wrapper) from one of the firm’s partners. You see that she’s in a hallway at the courthouse; she says she only has a few minutes before meeting with the judge and wants additional data about the case. She tells you your audio is stuttering because her internet connection is bad, turns off her video to save bandwidth, and you continue the conversation. At the end of the call she restarts her video to give you a thumbs up and says you did a great job. A week later you’re fired, because that entire conversation was between you and a deepfake set up by a group allied with opposing counsel. This scenario is entirely possible with today’s deepfake technology, and it can be devastating for the victims.
At a macro level, deepfakes can also be used to manipulate stock prices or commit financial fraud by impersonating financial leaders. An attacker could create false-but-convincing news stories and share them with an employee at a brokerage firm, then impersonate trusted individuals and convince the employee to divulge sensitive information or credentials that set a larger operation in motion. Again, while this scenario might seem like the plot of a Netflix thriller, it is entirely possible using current deepfake technology.
How to Spot a Deepfake
Recognizing a deepfake can be challenging, but with a little bit of education most people can spot their common flaws. Given the current state of deepfake technology, look for these telltale signs:
- Unnatural Eye Movement – Deepfakes often struggle to replicate natural eye movement. A famous real-world example came from a recent Star Wars streaming series: original actor Mark Hamill had aged out of his youthful Luke Skywalker role, so deepfake-style technology was used to superimpose his 1983 face onto a 2021 performance, resulting in a noticeable “dead stare.” If you suspect you’re talking to a deepfake, pay close attention to the person’s eyes (a rough numeric sketch of one eye-based check follows this list).
- Unnatural Facial Expressions – AI can have difficulty capturing the subtleties of human expressions, leading to anomalies in facial movements. Sometimes it’s subtle, but often it’s obvious. Look for inhuman facial tics or abnormal frowns, smiles, and blinking.
- Facial Morphing – Look for the person’s face momentarily warping on screen. The effect is strange and unsettling, quite different from the slight glitches you’d see during a temporary problem with someone’s network connection.
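For the eye-movement point above, here is a toy numeric sketch of one widely discussed heuristic: tracking the eye aspect ratio (EAR) over time and flagging clips whose blink rate is implausibly low. Extracting the eye landmarks (for example, with a face-landmark library) is out of scope here; the landmark points, thresholds, and sample data are illustrative assumptions, not calibrated values.

```python
# Toy sketch of a blink-rate heuristic: compute an eye aspect ratio (EAR) per
# frame, count open-to-closed transitions, and flag clips that blink far less
# often than a real person would. Thresholds and data are illustrative only.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6 (x, y) landmarks around one eye, ordered corner-to-corner."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blinks_per_minute(ear_series, fps: float, closed_threshold: float = 0.2) -> float:
    """Count frames where the eye transitions from open to closed."""
    closed = np.asarray(ear_series) < closed_threshold
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ear_series) / (fps * 60.0)
    return blinks / minutes

# One open eye, as six synthetic landmark points, to show the EAR calculation.
open_eye = np.array([[0, 0], [2, 1.2], [4, 1.2], [6, 0], [4, -1.2], [2, -1.2]])
print(f"Open-eye EAR: {eye_aspect_ratio(open_eye):.2f}")   # ~0.40

# Fake 60-second EAR trace at 30 fps: eyes mostly open (~0.30) with only 2 blinks.
# In practice this trace would come from applying eye_aspect_ratio to each frame.
fps, frames = 30, 30 * 60
ear_trace = np.full(frames, 0.30)
for blink_start in (400, 1200):
    ear_trace[blink_start:blink_start + 4] = 0.05

rate = blinks_per_minute(ear_trace, fps)
print(f"{rate:.1f} blinks/minute")                         # ~2.0
if rate < 8:                       # real people typically blink far more often
    print("Suspiciously low blink rate - worth a closer look")
```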
Better Skeptical Than Sorry
While humans can be trained to identify common signs of a deepfake today, the technology will continue to improve, and spotting these signs will become increasingly difficult. In the future, CISOs and other IT leaders will very likely need advanced detection tools and strategies that themselves use AI to find deepfakes. Deepfake security is in its infancy, but it will undoubtedly mature along with the threat.
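What might such an AI-assisted detection tool look like at its simplest? The sketch below scores individual frames with a small classifier (untrained and purely illustrative) and averages the scores into a clip-level verdict; production tools rely on much larger models and many more signals, such as audio artifacts, compression traces, and provenance metadata.

```python
# Conceptual sketch of AI-assisted detection: score each video frame as real vs.
# synthetic with a small classifier, then aggregate into a clip-level verdict.
# The model is untrained and illustrative; real detectors are far more complex.
import torch
import torch.nn as nn

frame_scorer = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),              # probability the frame is synthetic
)

clip = torch.rand(30, 3, 64, 64)                 # placeholder: 30 frames of face crops
with torch.no_grad():
    frame_scores = frame_scorer(clip).squeeze(1)

clip_score = frame_scores.mean().item()
print(f"Estimated probability the clip is a deepfake: {clip_score:.2f}")
if clip_score > 0.5:
    print("Flag for human review")
```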
Proactively Address Deepfake Security Concerns with Blue Mantis
The question is not if deepfake technology will be used against your organization, but when. As these threats continue to evolve, staying one step ahead will be crucial for your business. There are a few best practices that CISOs can implement today that can help mitigate potential deepfake threats:
- Partner with your executive team – Deepfakes are not just a cybersecurity issue; data, finance, and business strategy are all intertwined. By working with the CFO, CEO, and other executives, you can raise awareness and gain executive buy-in to combat the threat.
- Segregate controls for financial transactions – Once the CFO understands the potential harm deepfakes pose to the company, you can work with the Finance team to put sensible controls around money transfers that also meet the demands of compliance reporting rules such as Sarbanes-Oxley in the United States.
- Examine your corporate communications – In our always-connected workplace, the ability of a VP to ask a director to “jump on a quick Teams call” and get things done fast is exactly what criminals armed with deepfakes are counting on. So consider making identity verification part of your company’s culture in a way that doesn’t impact your business agility.
Blue Mantis has a dedicated cybersecurity risk assessment team to address the multifaceted challenges of the digital age, including the threat of deepfakes. Our experts are ready to conduct a comprehensive cybersecurity risk assessment, ensuring your defenses are equipped to meet the challenges of tomorrow.
Don’t wait for a breach to expose the gaps in your security posture. Connect with the Blue Mantis Cybersecurity Risk Management team today to learn more about how we can help protect your organization from the insidious threat of deepfakes and other cyber risks.