Artificial Intelligence Risk: Get Ready for AI-Powered Malware

IBM’s DeepLocker PoC gives the industry a look at artificial intelligence risk--and what an attack produced with the help of deep neural networks will look like.

Jeffrey Burt

September 12, 2018


Cybersecurity vendors are aggressively applying artificial intelligence--and subsets such as machine learning and deep learning--to better defend against the growing numbers and increasing sophistication of threats from ransomware to cryptomining to banking Trojans. Not surprisingly, cybercriminals also are likely experimenting with AI for many of the same reasons--from greater automation and improved scalability to better flexibility and response times. In short, the artificial intelligence risk factor is rising.

The likelihood of attacks that leverage AI and machine learning was outlined in a report earlier this year. The report, which was pulled together by more than two dozen experts in the United States and the United Kingdom, warned that the “use of AI to automate tasks involved in carrying out cyber attacks will alleviate the existing trade-off between the scale and efficacy of attacks.”

Spear-phishing is an example of a labor-intensive attack that can benefit from greater automation and improved targeting of victims, enabling bad actors to scale the size and number of their attacks and drive a greater return on their efforts.

“Cybersecurity is an arms race, where attackers and defenders play a constantly evolving cat-and-mouse game,” Marc Stoecklin, principal research staff member and manager of cognitive cybersecurity intelligence at IBM Research, wrote in a blog post. “Every new era of computing has served attackers with new capabilities and vulnerabilities to execute their nefarious actions. … We are on the cusp of a new era: the artificial intelligence (AI) era. The shift to machine learning and AI is the next major progression in IT. However, cybercriminals are also studying AI to use it to their advantage--and weaponize it.”

Stoecklin and other IBM researchers were at the recent Black Hat 2018 conference, where they presented DeepLocker, an effort to create new malware that leverages AI to circumvent current security solutions. The idea behind DeepLocker is to enable cybersecurity vendors and companies to be more proactive in addressing malware that has been combined with AI techniques.

DeepLocker was created “to better understand how several existing AI models can be combined with current malware techniques to create a particularly challenging new breed of malware,” he wrote. “This class of AI-powered evasive malware conceals its intent until it reaches a specific victim. It unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.”

DeepLocker hides its malicious payload in benign applications, such as video conferencing software, to avoid detection, according to Stoecklin. The IBM researchers used a deep neural network (DNN) AI model so the malware unlocks its payload only if the intended target is reached. This approach makes it almost impossible to reverse-engineer the malware, a common practice among cybersecurity researchers.
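To make that locking mechanism concrete, here is a minimal, deliberately harmless sketch of the concept as described; it is not IBM's implementation, and the `recognize` function is a hypothetical stand-in hash rather than a real DNN. What it illustrates is the key point: the decryption key is computed from the model's output, so it never appears anywhere in the binary, and the payload (here an innocuous string) can be decrypted only when the model sees the intended target.

```python
import hashlib

PAYLOAD = b"harmless demo payload"  # stands in for malicious code

def recognize(sensor_input: bytes) -> bytes:
    """Hypothetical stand-in for a DNN (e.g., facial recognition)
    that maps an input to a stable embedding."""
    return hashlib.sha256(sensor_input).digest()

def derive_key(embedding: bytes) -> bytes:
    # The key is a pure function of the model's output and is never
    # stored in the binary, which is what defeats static analysis.
    return hashlib.sha256(b"kdf-salt" + embedding).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration; a real design would use AES plus a MAC.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Attacker-side preparation: lock the payload to the target's embedding,
# keeping only a hash of the plaintext to verify a successful unlock.
target_key = derive_key(recognize(b"photo-of-intended-target"))
locked = xor_cipher(PAYLOAD, target_key)
check = hashlib.sha256(PAYLOAD).digest()

# At run time, the payload stays opaque until the model sees the target.
for candidate in (b"random-bystander", b"photo-of-intended-target"):
    attempt = xor_cipher(locked, derive_key(recognize(candidate)))
    if hashlib.sha256(attempt).digest() == check:
        print("target identified; payload unlocked:", attempt)
    else:
        print("no match; payload remains encrypted noise")
```

A real DNN's embeddings are not bit-exact from one photo of a person to the next, so an actual design would need an error-tolerant key derivation; the sketch sidesteps that to keep the core idea visible.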

He said IBM researchers wanted to present what they were doing with DeepLocker to raise awareness about AI-powered malware, demonstrate how attackers can build malware to get around common defenses, and show how organizations can reduce risks and deploy countermeasures.

“While a class of malware like DeepLocker has not been seen in the wild to date, these AI tools are publicly available, as are the malware techniques being employed--so it’s only a matter of time before we start seeing these tools combined by adversarial actors and cybercriminals,” Stoecklin wrote. “The security community needs to prepare to face a new level of AI-powered attacks. We can’t, as an industry, simply wait until the attacks are found in the wild to start preparing our defenses.”

According to Chris Gonsalves, director of research at The 2112 Group, IBM’s DeepLocker project is worth talking about, but doesn’t have a lot of practical impact right now. A cybercriminal building something like DeepLocker would have to put considerable effort into wrapping conventional malware in stealth, all to target a single victim whose attributes are well enough known to inform a neural network.

“That’s nation-state level tradecraft,” Gonsalves told ITPro Today. “If I ran a nuclear enrichment facility in Iran or North Korea, I’d be pretty breathless and sweaty about this. For every other CISO out there battling conventional attacks with tight budgets and short staffs, this is going to be pretty far down the priority list.”

In addition, “far from silently targeting simple individuals with complex weaponry and obfuscation, today’s criminals are mostly about hitting as many victims as possible and separating them from their Bitcoin as quickly as possible before their C2 [command-and-control server] gets shut down,” he said.

The type of attack seen with DeepLocker is just one of the ways AI can be leveraged by threat actors, particularly against targets such as CFOs or other privileged users, about whom a lot of information--user names, biometrics, device profiles and system configurations--is known. That information can be used to train the DNN. The malware then needs to be hidden inside an innocuous app, and Gonsalves said such attacks will likely show up in the supply chain, where the code base of a trusted application or service is compromised.

“Much different than DeepLocker, the other kind of AI security issue arises from the malicious feeding of bad inputs into neural nets in order to get AI systems to misinterpret and mishandle them, something known as adversarial machine learning,” the analyst said. “Think about bombarding the neural network at the heart of a self-driving car with a bunch of inputs that indicate that a red light is actually green or that a highway exit sign is a stop sign. It’s not something that’s top of mind for IT folks right now, but as the number of predictive algorithms used in everything from retail to healthcare grows, this flavor of AI threat bears watching, as well.”
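As a concrete illustration of the adversarial-ML pattern the analyst describes, the sketch below uses the fast gradient sign method (FGSM), a canonical textbook technique that the article itself does not name; the toy linear model is a stand-in for a real classifier such as a traffic-light recognizer. Each pixel is nudged by a small, fixed amount in the direction that most increases the model's loss, which is typically enough to flip the prediction while leaving the image looking unchanged to a human.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier; imagine class 0 = "red light", 1 = "green light".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
model.eval()

x = torch.rand(1, 3, 32, 32)   # benign input "image"
label = model(x).argmax(1)     # the model's own clean answer

# FGSM: shift every pixel by epsilon in the direction (the sign of the
# loss gradient) that most confuses the model.
x_adv = x.clone().requires_grad_(True)
nn.functional.cross_entropy(model(x_adv), label).backward()
x_adv = (x_adv + 0.1 * x_adv.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())
```

Hardened, well-trained models require a larger perturbation budget, but the mechanics are this simple, which is why adversarial inputs are treated as a systemic risk for deployed neural networks.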

Another area that bears watching as AI-powered malware becomes more prevalent is the deception technology space, where the honeypot technique has evolved into sophisticated defense platforms. In this arena, vendors like Cymmetria, Attivo Networks and Illusive will be key in figuring out how to detect the new kinds of behavior demonstrated by DeepLocker, Gonsalves added.

The work IBM has done around DeepLocker is an example of researchers imagining what upcoming attacks will look like, an important step in improving defenses, Gonsalves said. However, he cautioned that such future-looking analysis shouldn’t stop the work organizations are doing to protect systems against current threats. For example, application whitelisting--including creating comprehensive asset inventories and gleaning basic insights into application behavior--could help in stopping such threats as DeepLocker.
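As a rough sketch of that whitelisting idea (the file name and paths here are hypothetical, and commercial products do far more), the core check can be as simple as comparing an executable's SHA-256 hash against a known-good inventory:

```python
import hashlib
import sys

# Hypothetical allowlist built from the asset inventory: one SHA-256
# hex digest per approved executable, one per line.
ALLOWLIST_FILE = "approved_hashes.txt"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_approved(path: str) -> bool:
    with open(ALLOWLIST_FILE) as f:
        approved = {line.strip() for line in f if line.strip()}
    return sha256_of(path) in approved

if __name__ == "__main__":
    target = sys.argv[1]  # e.g., the path of a binary about to execute
    if is_approved(target):
        print(f"{target}: on the allowlist; OK to run")
    else:
        print(f"{target}: not approved; block and alert")
```

Because any tampering changes a binary's hash, this kind of check would flag a DeepLocker-style payload the moment it was grafted onto a trusted application.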

“Unless and until a CISO has implemented things like judicious network segmentation, privileged account management, configuration and change controls, and data classification, then any conversation about the future of AI-powered cyberweapons is pretty much a distraction and a fool’s errand,” he said.
