Black Hat 2019: Deepfakes Require a Rethink of Incident Response
The growth of AI-based fake videos has security managers assessing how they might impact their organizations -- deepfakes can very plausibly be used in criminal activity against businesses.
August 7, 2019
Las Vegas -- Would you be worried if a video of you, or your corporate CEO, saying embarrassing or negative things started making the rounds on the web? It’s a very real possibility thanks to deepfakes, which are now a mainstream concern for security professionals.
Deepfakes are AI-generated video and audio messages that make it appear as if someone did or said something they never actually did. They can be created with easily accessible tools, and a person's voice can be impersonated from just a few samples of spoken audio.
The name deepfakes was coined in 2017, after a Reddit user with the handle "deepfakes" published a series of fake celebrity porn videos and became the subject of an article in Vice. In recent months, several incidents have made headlines involving deepfakes that misrepresent well-known politicians. Most headline-making deepfakes thus far have been either tongue in cheek or created to demonstrate the technology's potential. For example, director Jordan Peele and BuzzFeed CEO Jonah Peretti created a deepfake of Barack Obama as a public service announcement about the dangers of deepfakes.
But deepfakes can also be turned against businesses in criminal schemes. Symantec recently blogged about a finance executive who received an urgent voicemail from his boss ordering a wire transfer -- the audio message was fake. Others hypothesize that criminals could use deepfake videos to manipulate banking customers into allowing access to their accounts.
Two sessions at this year’s Black Hat event here in Las Vegas dive into the issue, offering insight into how deepfakes are created and highlighting advances in technology that might be used to detect the videos. Titled "Detecting Deepfakes with Mice" and "Playing Offense and Defense with Deepfakes," the sessions’ place on the agenda confirms that this is an issue the security department needs to pay attention to as more criminals use deepfakes in social engineering attacks.
Brian Wrozek, director of information security at Optiv Security and previously CISO of Texas Instruments for 14 years, said the deepfake problem has prompted his team to rethink its incident response plan.
“Historically, IR was mainly technically focused,” said Wrozek. “How do you clean up malware, or do forensics on hard drives? When you get into a scenario of a deepfake, now you’re dealing much more with PR and communications and marketing. If one of these videos is posted on social media, that is not technology you can control in-house.”
Wrozek believes deepfakes are poised to become a widespread problem in the next year.
“Deepfakes seem to be following a similar trajectory as malware and ransomware. It started off as a sophisticated attack targeting a specific, high-level audience, but it has started to become mainstream, and soon everyone will be a potential victim.”
For his part, in thinking about defense and response, he said his team will be monitoring the dark web for any possible instances and revamping details of existing policy to prepare for the possibility that a deepfake could be used against the company.
“You are also going to be judged by how well you respond to an incident. If a video were to be put out, how are you going to respond? Will you be able to respond quickly? Will you produce evidence to dispute it? You need to have that figured out beforehand.”