The Challenges of AI Security Begin With Defining It

We Keep you Connected

The Challenges of AI Security Begin With Defining It

News, news analysis, and commentary on the latest trends in cybersecurity technology.
Security for AI is the Next Big Thing! Too bad no one knows what any of that really means.
March 5, 2024
As artificial intelligence (AI) continues to grab everyone's attention, security for AI has become a popular topic in the marketplace of ideas. Security for AI is capturing the media cycle, AI security startups are coming out of stealth left and right, and incumbents are scrambling to release AI-relevant security features. It is clear security teams are concerned about AI.
But what does "AI security" mean, exactly?
Frankly, we don't really know what security for AI means yet because we still don't know what AI development means. "Security for X" typically arrives after X has matured — think cloud, network, Web apps — but AI remains a moving target.
Still, there are a few distinct problem categories emerging as a part of AI security. These line up with the concerns of different roles within an organization, so it is unclear whether they easily merge, though of course they do have some overlap.
These problems are:
Data leak prevention
AI model control
Building secure AI applications
Let's tackle them one at a time.
Security always starts with visibility, and securing AI applications is no different. Chances are many teams in your organization are using and building AI applications right now. Some might have the knowledge, resources, and security savviness to do it right, but others probably don't. Each team could be using a different technology to build their applications and applying different standards to ensure they work correctly. To standardize practices, some organizations create specialized teams to inventory and review all AI applications. While that is not an easy task in the enterprise, visibility is important enough to begin this process.
When ChatGPT was first launched, many enterprises went down the same route of desperately trying to block it. Every week new headlines emerged about companies losing their intellectual property to AI because an employee copy-pasted highly confidential data to the chat so they could ask for a summary or a funny poem about it. This was really all anybody could talk about for a few weeks.
Since you cannot control ChatGPT or any of the other AIs that appear on the consumer market, this has become a sprawling challenge. Enterprises issue acceptable use policies with approved enterprise AI services, but those are not easy to enforce. This problem got so much attention that OpenAI, which caused the scare in the first place, changed its policies to allow users to opt out of being included in the training set and for organizations to pay to opt out on behalf of all their users.
This issue — users pasting the wrong information into an app it does not belong to — seems similar to what data loss prevention (DLP) and cloud access security broker (CASB) solutions were created to solve. Whether enterprises can use these tools created for conventional data to protect data within AI remains to be discovered.
Think about SQL injection, which boosted the application security testing industry. It arises when data is translated as instructions, resulting in allowing people who manipulate application data (i.e., users) to manipulate application instruction (i.e., its behavior). With years of severe issues wreaking havoc on Web applications, application development frameworks have risen to the challenge and now safely handle user input. If you're using a modern framework and going through its paved road, SQL injection is for all practical purposes a solved problem.
One of the weird things about AI from an engineer's perspective is that it mixes instructions and data. You tell the AI what you want it to do with text, and then you let your users add some more text into essentially the same input. As you would expect, this results in users being able to change the instructions. Using clever prompts lets you do that even if the application builder really tried to prevent it, a problem we all know today as prompt injection.
For AI application developers, trying to control these uncontrollable models is a real challenge. This is a security concern, but it is also a predictability and usability concern.
Once you allow AI to act on the user's behalf and chain those actions one after the other, you’ve reached uncharted territory. Can you really tell whether the AI is doing things it should be doing to meet its goal? If you could think of and list everything the AI might need to do, then you arguably wouldn't need AI in the first place.
Importantly, this problem is about how AI interacts with the world, and so it is as much about the world as it is about the AI. Most Copilot apps are proud to inherit existing security controls by impersonating users, but are user security controls really all that strict? Can we really count on user-assigned and managed permissions to protect sensitive data from a curious AI?
Trying to say anything about where AI or by extension AI security will end up is trying to predict the future. As the Danish proverb says, it's difficult to make predictions, especially about the future. As AI development and usage continue to evolve, the security landscape is bound to evolve with them.
Michael Bargury
CTO & Co-Founder, Zenity
Michael Bargury is an industry expert in cybersecurity focused on cloud security, SaaS security, and AppSec. Michael is the CTO and co-founder of, a startup that enables security governance for low-code/no-code enterprise applications without disrupting business. Prior to Zenity, Michael was a senior architect at Microsoft Cloud Security CTO Office, where he founded and headed security product efforts for IoT, APIs, IaC, Dynamics, and confidential computing. Michael holds 15 patents in the field of cybersecurity and a BSc in Mathematics and Computer Science from Tel Aviv University. Michael is leading the OWASP community effort on low-code/no-code security.
You May Also Like
Assessing Your Critical Applications’ Cyber Defenses
Unleash the Power of Gen AI for Application Development, Securely
The Anatomy of a Ransomware Attack, Revealed
How To Optimize and Accelerate Cybersecurity Initiatives for Your Business
Building a Modern Endpoint Strategy for 2024 and Beyond
Cybersecurity’s Hottest New Technologies – Dark Reading March 21 Event
Black Hat Asia – April 16-19 – Learn More
Black Hat Spring Trainings – March 12-15 – Learn More
How to Ensure Open Source Packages Are Not Landmines
The Challenges of AI Security Begin With Defining It
Cloud Apps Make the Case for Pen-Testing-as-a-Service
Apple Beefs Up iMessage With Quantum-Resistant Encryption
Copyright © 2024 Informa PLC Informa UK Limited is a company registered in England and Wales with company number 1072954 whose registered office is 5 Howick Place, London, SW1P 1WG.