Deepfakes and Their Impact on the Security Industry
[Editor’s Note: This is the first in a series of articles that will examine each of the trends identified in SIA’s annual Megatrends Report for 2023. Each year the report is a highly anticipated look at the most significant trends driving operations in the physical and connected security space. The articles will appear approximately monthly throughout 2023, exploring each of the megatrends detailed in the report and tapping the expertise of SIA members in applicable areas. First up: cybersecurity.]
Cybersecurity is, and will remain, a classic arms race. Just when you think you have a handle on it – patches are being implemented, devices are secured, cloud and servers have the protections they need and your firm’s employees are better at recognizing the latest phishing attack – along comes something new that threatens to disrupt the world. One significant concern, according to the SIA Cybersecurity Advisory Board, is deepfakes. The advisory board helps track and provide guidance on emerging cybersecurity-related issues and shares its guidance freely with the global security industry.
As you dive into the world of deepfakes, you can easily find examples like ThisPersonDoesNotExist.com, which has a deep library of automatically generated faces of people who, as the website’s name implies, simply don’t exist. It’s part of a broader collection from ThisXDoesNotExist.com that showcases fake resumes, fake cats, fake song lyrics and even fake cities and food blogs. You’ll find early humorous examples like BuzzFeed’s Obama deepfake video with Jordan Peele, but more concerning were early examples of deepfakes being used to replicate a CEO’s voice so that a scammer could convince a company to wire nearly a quarter-million dollars to their bank account.
Patrick Simon, president and manager of Beehive Technology Solutions, a SIA member company that participates in the SIA Cybersecurity Advisory Board, explained the overall situation this way in an article for SIA’s Center of Excellence:
“Deepfakes have become an infamous tool used to create fake digital material that is being used in evil ways throughout our society and has also become a ‘weapon of choice’ for cybercriminals and nation-state adversaries worldwide,” wrote Simon. “Rapid advancements in computer vision technologies, coupled with manipulated sounds and signals, combined and converged with other technologies create a digital challenge to our military, law enforcement, legal and health care systems, to name a few.”
“Deepfakes have the potential to really disrupt facial recognition systems in physical access control tech,” added John Deskurakis, chair of SIA’s Cybersecurity Advisory Board and chief product security officer of Carrier Corporation. “A lot of physical security systems are leveraging facial recognition to grant access to premises.”
And the global disruption potential is enormous: “As technology advances, and machines are studying and learning these movements, deepfakes can be generated easily to cause political unrest and chaos in the world,” added Min Kyriannis, CEO of Amyna Systems and a past chair of the SIA Cybersecurity Advisory Board.
Deepfakes: Impact on the Security Industry
First, let’s talk about what deepfakes are and how they could impact the security industry.
Pauline Norstrom, CEO of Anekanta Consulting and a member of the SIA Cybersecurity Advisory Board, explained it this way (and she notes that her answer was 100 percent human generated): “Deepfakes are software-generated replicas of human characteristics which can take the form of video, images, voice or text. They can be so convincing, it may be impossible for a human to differentiate between the fake and a real person. Deepfakes are created by generative AIs that synthesize human responses. Video and images may be altered through a form of AI neural network, a generative adversarial network (GAN), which seamlessly alters, adds or subtracts data to or from the image which was not in the original. ChatGPT is also a generative AI which creates text that can be indistinguishable from a human's writings.”
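The adversarial training Norstrom describes can be illustrated with a deliberately tiny sketch. This is not how production deepfake systems are built – those use deep convolutional networks on images – but the same two-player dynamic appears even in one dimension: a “generator” (here, just a line) learns to mimic real data while a “discriminator” (here, a logistic classifier) learns to tell real from fake, until the fakes become statistically indistinguishable. All names and parameters below are illustrative.

```python
# Toy 1-D GAN: a linear generator learns to mimic samples from N(4, 1.25)
# while a logistic discriminator learns to separate real from fake.
import numpy as np

rng = np.random.default_rng(0)
REAL_MU, REAL_SIGMA = 4.0, 1.25

a, b = 1.0, 0.0          # generator g(z) = a*z + b (starts far from the data)
w, c = 0.0, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Gradient of binary cross-entropy w.r.t. the logit is (D - label).
    w -= lr * (np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    dg = -(1.0 - d_fake) * w          # dLoss/dfake for loss = -log D(fake)
    a -= lr * np.mean(dg * z)
    b -= lr * np.mean(dg)

fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10_000) + b))
print(f"generated mean is near the real mean of {REAL_MU}: {fake_mean:.2f}")
```

The generator never sees the real data directly; it improves only by chasing the discriminator’s feedback, which is exactly why GAN-produced faces and voices can end up fooling the same kinds of classifiers that try to detect them.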
“A deepfake video or image may have the face automatically substituted for another, effectively stealing a person's identity, or a voice may be replicated to generate misinformation,” Norstrom said. “Widely known to generate social media disinformation or manipulation, deepfakes also impact the security industry, but in different ways. A realistic deepfake may bypass a facial biometric access control system and, in doing so, allow an imposter into restricted areas. Video evidence may be altered and replaced with deepfakes to cover a trail of crime or to frame an innocent person. A deepfake voice may appear authentic in providing instructions which distract or trick the recipient into revealing critical information.”
“Deepfakes should remain high on the risk register, and mitigations should include robust cybersecurity practice to keep systems secure at source and to implement human-centric processes which authenticate users of critical security systems,” Norstrom added. “Equally, software developers must continue to ensure that their systems are effective in detecting a fake – for example, through liveness detection in biometrics systems. Maintaining the chain of evidence in an audit log from image sensor capture through to export through software or physical methods will help to ensure deepfakes are detected before they disrupt and cause untold harm and cost to business operations and reputation.”
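The “chain of evidence in an audit log” Norstrom recommends can be sketched with a hash chain: each log entry stores the hash of the previous entry, so altering or substituting any historical record (say, swapping a deepfake into exported video metadata) breaks every hash downstream and is immediately detectable. The class and method names below are illustrative, not a real product API.

```python
# Minimal hash-chained audit log: tampering with any past entry
# invalidates the chain on the next verification pass.
import hashlib
import json

class AuditLog:
    def __init__(self):
        self.entries = []   # each entry: {"data": ..., "prev": ..., "hash": ...}

    @staticmethod
    def _digest(data, prev_hash):
        payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, data):
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        self.entries.append({"data": data, "prev": prev,
                             "hash": self._digest(data, prev)})

    def verify(self):
        prev = "GENESIS"
        for e in self.entries:
            if e["prev"] != prev or e["hash"] != self._digest(e["data"], prev):
                return False   # broken link: entry altered after the fact
            prev = e["hash"]
        return True

log = AuditLog()
log.append({"event": "frame_captured", "camera": "lobby-01"})
log.append({"event": "clip_exported", "operator": "jdoe"})
print(log.verify())                            # True: chain intact

log.entries[0]["data"]["camera"] = "dock-07"   # tamper with history
print(log.verify())                            # False: alteration detected
```

In practice the chain would be anchored at the image sensor (signed in hardware) and re-verified at every export step, but the detection principle is the same.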
How Deepfakes Can Cause Business Problems
Rachelle Loyear, vice president of integrated security solutions at Allied Universal and a member of the SIA Cybersecurity Advisory Board, points to the problems deepfakes can pose for biometric solutions, particularly as biometric technology becomes even more mainstream as a solution for multifactor authentication or as the primary authentication method.
“The main thing for the end user out in the wild is that they need to make sure that their choices in biometrics are future-proofed against potential deepfake attacks,” Loyear said. “At this moment, deepfakes are not quite sophisticated enough to fool the digital eye or ear, even though they can fool a human. However, the rapid advancement of this technology indicates that there will come a time that security through facial recognition or voice confirmation alone is unlikely to be able to guarantee that the entity on the other end of the check is who they say they are. We are back to the old concepts of multifactor authentication, but in the new era of deepfakes, we will need dynamic authentication methods, with a changing set of requirements that will be harder to generate ‘on the fly.’”
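One concrete shape the “dynamic authentication methods” Loyear describes could take is challenge-response: the verifier issues a fresh, unpredictable challenge for every attempt and burns it after use, so a pre-recorded deepfake response – a replay – can never satisfy a check it has not seen. The sketch below is an assumed design, not any vendor’s product; the HMAC response stands in for a liveness-checked factor bound to the challenge.

```python
# Challenge-response sketch: every authentication attempt requires a
# fresh single-use nonce, so replayed (pre-generated) responses fail.
import hmac
import hashlib
import secrets

class ChallengeVerifier:
    def __init__(self, shared_secret: bytes):
        self._secret = shared_secret
        self._pending = set()            # challenges issued but not yet used

    def issue_challenge(self) -> str:
        nonce = secrets.token_hex(16)    # unpredictable, single-use
        self._pending.add(nonce)
        return nonce

    def check(self, nonce: str, response: str) -> bool:
        if nonce not in self._pending:
            return False                 # unknown or already-used challenge
        self._pending.discard(nonce)     # burn it: any replay now fails
        expected = hmac.new(self._secret, nonce.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, response)

def live_response(secret: bytes, nonce: str) -> str:
    # Stand-in for the live factor (e.g. a liveness-checked biometric
    # bound to this specific challenge); here, just an HMAC over the nonce.
    return hmac.new(secret, nonce.encode(), hashlib.sha256).hexdigest()

secret = secrets.token_bytes(32)
verifier = ChallengeVerifier(secret)

nonce = verifier.issue_challenge()
response = live_response(secret, nonce)
print(verifier.check(nonce, response))   # True: fresh challenge, live answer
print(verifier.check(nonce, response))   # False: same answer replayed
```

The point is not the HMAC itself but the freshness requirement: a deepfake generated “on the fly” would have to produce a convincing, challenge-specific response in real time, which is a much higher bar than replaying a canned clip.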
“The AI ‘arms race’ is slightly different for manufacturers of systems that require authentication,” Loyear continued. “These firms will need to think about the kinds of authentication mechanisms they deploy, and, if not building the tools themselves, will need to develop robust and continual penetration testing schemes to ensure they are able to provide the level of security that they claim to provide with their authentication choices.”
Chuck Davis, a SIA Cybersecurity Advisory Board member who runs his own cybersecurity consultancy and virtual CISO services firm, Caveat Labs, explained the business problem this way:
“Deepfakes are becoming less detectable at a rapid scale, and the threat of deepfakes should be assessed by each company or organization to determine their risks.”
Davis listed five ways he believes that deepfakes can and will cause business problems:
- Reputation damage: “Deepfakes can be used to create videos or audio recordings that imitate executives or other high-profile individuals within a company, which can damage their reputation and that of the company.”
- Misinformation: “Deepfakes can be used to spread false information about a company or its products or services, which can lead to lost sales or legal issues.”
- Cybersecurity threats: “Deepfakes can be used in phishing scams or other cyberattacks, which can lead to data breaches or other security incidents. One example is using deepfake video and audio to imitate a colleague, customer or vendor on a teleconference like Zoom.”
- Disruption of financial markets: “Deepfakes can be used to create ‘fake news’ or other false information that can affect stock prices or other financial markets.”
- Legal disputes: “Companies may face legal disputes if they become the target of deepfakes created by threat actors.”
Beyond the business world, the ability to create deepfakes is rapidly changing consumer media, and not just for humorous TikTok and YouTube videos.
“The entertainment industry has been looking at creating these images to cut costs in the media, and it’s easier and easier to develop deepfakes using technology versus using people to create these videos,” Kyriannis said. “On top of it, as technology advances, it becomes easier for the general consumer to utilize these apps to create realistic videos. Here’s an interesting [deepfake] video of a Taiwanese singer who passed away at least 40 years ago. The video literally captures her movements and also her expressions as she’s singing.”
For now, the industry and the world are not only in awe of deepfakes, but also on high alert for how to respond to their power. According to the SIA Cybersecurity Advisory Board, the response is already underway in many companies across the industry.
“Companies are researching potential impacts – and they are real – and testing ways to combat and mitigate [this new threat],” said Deskurakis.
One thing is clear: the work to control the impact of deepfakes is far from done.