To Spot Attacks Through AI Models, Companies Need Visibility

Rushing to onboard AI, companies and their developers are downloading a variety of pre-trained machine-learning models, but verifying security and integrity remains a challenge.
March 11, 2024
As companies rush to develop and test artificial-intelligence and machine-learning (AI/ML) models in their products and daily operations, the security of those models is often an afterthought, putting firms at risk of falling prey to backdoored and hijacked models.
Companies with their own machine-learning teams have large numbers of models in production (more than 1,600, by one estimate), and 61% of companies acknowledge that they do not have good visibility into all of their machine-learning assets, according to survey data published by HiddenLayer, an AI/ML security firm. The result: Attackers have identified models as a potential vector for compromising companies. A recent investigation by software security firm JFrog into models posted to the Hugging Face repository, for example, found malicious files that create a backdoor on the victim's machine.
Companies need to pay attention to the security of their AI/ML models and their MLOps pipeline as they rush to develop AI-enabled capabilities, says Eoin Wickens, technical research director at HiddenLayer.
"With the democratization of AI and the ease with which pre-trained models can be downloaded from model repositories these days, you can get a model, fine tune it for purpose, and put it into production easier now than ever," he says, adding: "It remains an open question as to how we can ensure the safety and security of these models, once they've been deployed."
The pace of AI adoption has security experts concerned. In a talk at Black Hat Asia in April, two security researchers with Dropbox will present their investigation into how malicious models can attack the environments in which they are executed. The research identified ways of hijacking models, where running the model allows embedded malware to compromise the host environment, and of backdooring them, where the model has been modified to influence its behavior and produce specific outcomes.
Without efforts to check the security and integrity of models, attackers could easily find ways to run code or bias the resulting output, says Adrian Wood, a security engineer with the Red Team at Dropbox and a co-presenter at Black Hat Asia.
Data scientists and AI developers are "using models from repositories that are made by all kinds of people and all kinds of organizations, and they are grabbing and running those models," he says. "The problem is they are just programs, and any program can contain anything, and so when they run it, it can cause all sorts of problems."
While the estimate of more than 1,600 AI models in production may sound high, companies with teams focused on data science, machine learning, or data-focused AI do tend to run a lot of models, says Tom Bonner, vice president of research at HiddenLayer. Over a year ago, when the company's red team conducted a pre-engagement assessment of a financial services organization, they expected only a handful of machine-learning and AI models to be in production. The real number: more than 500, he says.
"We're starting to see that, with a lot of places, they're training up perhaps small models for very specific tasks, but obviously that counts to the sort of overall AI ecosystem the end of the day," Bonner says. "So whether it's finance, cybersecurity, or payment processes [that they are applying AI to], we're starting to see a huge uptick in the number of models people are training themselves in-house."
Companies' lack of visibility into what models have been downloaded by data scientists and machine-learning application developers means that they no longer have control over their AI attack surface.
Models are frequently created using machine-learning frameworks, most of which save model data in file formats that are able to execute code on an unwary data scientist's machine. Popular frameworks include TensorFlow, PyTorch, Scikit-learn, and, to a lesser degree, Keras, which is built on top of TensorFlow. In their rush to adopt generative AI, many companies are also downloading pre-trained models from sites such as Hugging Face, TensorFlow Hub, PyTorch Hub, and Model Zoo.
Typically, models from these repositories are saved as Pickle files by Scikit-learn (.pkl) and PyTorch (.pt), or as Hierarchical Data Format version 5 (HDF5) files, the format often used by Keras and TensorFlow. Unfortunately, these file formats can contain executable code and often rely on insecure serialization functions that are prone to vulnerabilities. In either case, an attacker could compromise the machine on which the model is loaded, says Diana Kelley, chief information security officer at Protect AI, an AI application security firm.
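The risk from these formats is easy to demonstrate. The short sketch below (the class name, command, and file name are purely illustrative, not drawn from the JFrog or Dropbox research) shows how a pickle-based model file can execute arbitrary code the moment it is loaded:

```python
import os
import pickle


# Illustrative only: a "model" whose pickle payload runs a command when loaded.
# Real attacks hide similar payloads inside otherwise legitimate model files.
class MaliciousModel:
    def __reduce__(self):
        # Whatever is returned here is called at unpickling time; an attacker
        # would typically fetch and execute a second-stage payload instead.
        return (os.system, ("echo 'code executed during model load'",))


# "Saving" the model produces a file that looks like any other .pkl artifact.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousModel(), f)

# The victim only has to load the file; no inference call is required.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # os.system(...) runs here
```

Mitigations exist, such as PyTorch's weights_only=True option for torch.load, which restricts unpickling to an allowlist of safe types, and Hugging Face's safetensors format, which stores tensors without code-executing serialization, but they help only if security teams know which files their developers are actually loading.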
"Because of the way that models work, they tend to run with very high privilege within an organization, so they have a lot of access to things because they have to touch or get input from data sources," she says. "So if you can put something malicious into a model, then that would be a very viable attack."
Hugging Face, for example, now boasts more than 540,000 models, up from fewer than 100,000 at the end of 2022. Protect AI scanned Hugging Face and found 3,354 unsafe models, about 1,350 of which were missed by Hugging Face's own scanner, the company stated in January.
To secure their development and deployment of AI models, organizations should integrate security throughout the machine-learning pipeline, a concept often referred to as MLSecOps, experts say.
That visibility should start with the training data used to create models. Making sure that models are trained on high-quality, secured data that cannot be altered by a malicious source, for example, is critical to trusting the final AI/ML system. In a paper published last year, a team of researchers including Google DeepMind engineer Nicholas Carlini found that attackers could easily poison the training of AI models by buying up domains that were known to be included in the training datasets.
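One practical control is to pin every training-data source to a checksum recorded when the data was reviewed, so that a source that later changes (for example, a re-registered domain now serving poisoned files) is caught before training. The sketch below assumes such a reviewed manifest exists; the URL and hash are placeholders:

```python
import hashlib
import urllib.request

# Hypothetical manifest pinning each reviewed training-data source to a SHA-256
# hash recorded when the dataset was first curated. URL and hash are placeholders.
MANIFEST = {
    "https://example.com/datasets/train_images.tar.gz":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def source_unchanged(url: str, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Download a data source and confirm it still matches its pinned hash."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as response:
        while chunk := response.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


for url, expected in MANIFEST.items():
    if not source_unchanged(url, expected):
        # A changed source could be a benign update or a domain that now serves
        # poisoned data; either way, it needs review before training resumes.
        raise RuntimeError(f"Training data source changed since review: {url}")
```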
The team responsible for the security of the machine-learning pipeline should know every source of data used to create a specific model, says HiddenLayer's Wickens.
"You need to understand your ML operations lifecycle, from your data-gathering and data-curating process to feature engineering — all the way through to model creation and deployment," he says. "The data you use may be fallible."
Companies can start by looking at metrics that hint at the underlying security of a model. As in the open-source software world, where companies increasingly use tools that turn project attributes into a security report card, the information available about a model and its maintainers can point to how much it can be trusted.
Trusting downloaded models can be difficult, as many are made by machine-learning researchers who may have little in the way of a track record. HiddenLayer's ModelScanner, for example, analyzes models from public repositories and scans them for malicious code. Automated tools, such as Radar from Protect AI, produce a bill of materials for the components used in an AI pipeline and then determine whether any of those components pose a risk.
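The details of such tools differ, but the core idea behind scanning pickle-based models can be illustrated with Python's standard pickletools module: walk the opcodes in the file and flag imports that a plain weights file has no reason to make. The denylist below is a simplified illustration; real scanners apply far broader rules and also handle container formats such as PyTorch's zip-based .pt files:

```python
import pickletools

# Modules a plain weights-only pickle has no reason to import. This denylist is
# illustrative; production scanners apply much broader rules.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins", "socket", "sys"}


def scan_pickle(path: str) -> list[str]:
    """Flag suspicious imports in a raw pickle file (e.g., a Scikit-learn .pkl)."""
    findings = []
    recent_strings = []  # STACK_GLOBAL takes its module/name from earlier string opcodes
    with open(path, "rb") as f:
        data = f.read()
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in {"SHORT_BINUNICODE", "BINUNICODE", "UNICODE"}:
            recent_strings = (recent_strings + [arg])[-2:]
        elif opcode.name in {"GLOBAL", "INST"} and str(arg).split()[0] in SUSPICIOUS_MODULES:
            findings.append(f"{opcode.name} imports {arg!r}")
        elif opcode.name == "STACK_GLOBAL" and recent_strings and recent_strings[0] in SUSPICIOUS_MODULES:
            findings.append(f"STACK_GLOBAL imports {'.'.join(recent_strings)}")
    return findings


# Example: the malicious model file from the earlier sketch would be flagged here.
print(scan_pickle("model.pkl"))
```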
Companies need to quickly implement an ecosystem of security tools around machine-learning components, in much the same way as open-source projects have created security capabilities for their ecosystem, says Protect AI's Kelley.
"All those lessons we learned about securing open source and using open source responsibly and safely are going to be very valuable as the entire technical planet continues the journey of adopting AI and ML," she says. 
Overall, companies should start by gaining more visibility into their pipeline. Without that knowledge, it's hard to prevent model-based attacks, Kelley says.
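Gaining that visibility can begin with something as simple as knowing where model artifacts live. The sketch below, which assumes a shared project directory (the path and the extension list are placeholders), walks a file tree and records a hash of every serialized model it finds so that new or modified artifacts stand out in an asset inventory:

```python
import hashlib
from pathlib import Path

# Extensions commonly used for serialized models (an illustrative, not exhaustive, list).
MODEL_EXTENSIONS = {".pkl", ".pt", ".pth", ".h5", ".keras", ".onnx", ".safetensors"}


def inventory_models(root: str) -> list[dict]:
    """Walk a directory tree and record every model artifact found under it."""
    records = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in MODEL_EXTENSIONS:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            records.append({
                "path": str(path),
                "sha256": digest,
                "size_bytes": path.stat().st_size,
            })
    return records


# Feeding the results into an asset database makes new or changed models stand out.
for record in inventory_models("/srv/ml-projects"):  # placeholder path
    print(record)
```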
Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.