AI in Cybersecurity: How It Works


There’s a never-ending battle going on between cyber defenders and attackers, and this plays out with security products too: As soon as a security vendor develops a way to mitigate the latest threat, attackers are busy finding a way around it or a new threat to take its place.
To try to gain an edge in their efforts to protect businesses and individuals from scammers, malware, and data theft, many cybersecurity companies have turned to artificial intelligence (AI) and machine learning (ML) as a potentially useful weapon in their arsenal.
There are some benefits to employing AI in a cybersecurity context. It can make defensive measures stronger and response times faster, but it’s not a perfect solution. AI is not a replacement for human intelligence—especially when it comes to identifying and mitigating threats—but in the right contexts, with the right team, it can be helpful.
Whether it’s SIEM solutions attempting to enhance their predictive capabilities or threat intelligence software trying to automate the threat detection process, businesses the world over are looking to AI as a critical part of their cybersecurity futures.
AI in general is in vogue right now, but its use in cybersecurity is expected to explode in coming years. A Statista report projects the “AI in Cyber Security” market will grow from $10.5 billion in 2020 to $46.3 billion by 2026, taking an ever-bigger slice of a cybersecurity products market that’s approaching $200 billion.
Companies see a number of ways AI and machine learning could give cybersecurity firms an edge over their cybercriminal adversaries.

A common refrain when talking about AI and automation is that it ultimately can’t replicate the creative and strategic thinking that human intelligence provides. Based on how AI has been implemented and developed thus far, this is accurate.
The tasks AI and machine learning have proven good at are those with simple, predictable patterns and those that require processing large data sets. This is how AI can speed up incident response times: humans simply can’t process network traffic as quickly as automation can.
On the flip side, in use cases where the AI has to deal with a number of unusual or unpredictable behaviors, it struggles. This is why behavioral analysis can be a mixed bag as a solution. A 2018 paper published by IEEE goes into more detail about it, explaining, “Machine learning has limitations dealing with privileged users, developers, and knowledgeable insiders. Those users represent a unique situation because their job functions often require irregular behaviors. This cause[s] difficulties for statistical analysis to create a baseline [for] the algorithms.”
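The baseline problem the IEEE paper describes can be illustrated with a toy sketch (illustrative Python only, not drawn from any real product; all user names and numbers are invented): a simple z-score baseline flags a regular user's spike easily, but a privileged user's legitimately erratic history inflates the standard deviation so much that even an extreme spike slips through.

```python
import statistics

def build_baseline(history):
    """Per-user baseline: mean and sample standard deviation of daily event counts."""
    return {
        user: (statistics.mean(counts), statistics.stdev(counts))
        for user, counts in history.items()
    }

def is_anomalous(baseline, user, todays_count, z_threshold=3.0):
    """Flag activity more than z_threshold standard deviations above the user's mean."""
    mean, stdev = baseline[user]
    if stdev == 0:
        return todays_count != mean
    return (todays_count - mean) / stdev > z_threshold

history = {
    "clerk": [20, 22, 19, 21, 20],   # regular, predictable workload
    "admin": [5, 300, 40, 800, 12],  # privileged user: irregular by design
}
baseline = build_baseline(history)

# A tight baseline makes the clerk's spike to 90 events obvious...
print(is_anomalous(baseline, "clerk", 90))   # True
# ...but the admin's noisy history inflates the stdev (~341), so even
# 900 events in a day stay under a 3-sigma threshold.
print(is_anomalous(baseline, "admin", 900))  # False
```

The math itself is sound; it's the "irregular behavior is my job" premise that breaks it, which is exactly why statistical baselining struggles with privileged users and knowledgeable insiders.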
Additionally, if an AI system is poorly implemented, it can be weaponized against a company in an attack. This could happen at the data level, where malicious actors manipulate the data sets that AI algorithms use to learn their behaviors. Vulnerabilities could also come from biases or gaps in the data. Hackers sometimes use a technique called neural fuzzing to determine where weaknesses lie in software that processes input data.
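Data-set manipulation of this kind can be sketched with a toy nearest-centroid classifier (illustrative Python; the features and numbers are invented for this example): by injecting samples that resemble their own traffic but are labeled "benign" into the training data, an attacker drags the benign centroid toward that traffic until the classifier waves it through.

```python
import math

def centroid(points):
    """Mean of a set of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(sample, centroids):
    """Assign the sample to the class with the nearest centroid."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Made-up features: (connections per minute, % encrypted payloads)
benign = [(2, 10), (3, 12), (1, 8), (2, 11)]
malicious = [(40, 90), (38, 85), (45, 95)]

clean = {"benign": centroid(benign), "malicious": centroid(malicious)}
attacker_traffic = (25, 55)
print(classify(attacker_traffic, clean))  # malicious

# Poisoning: mislabeled samples near the attacker's traffic pull the
# benign centroid toward it.
poisoned_benign = benign + [(24, 54), (26, 56), (25, 55), (23, 53), (27, 57)]
poisoned = {"benign": centroid(poisoned_benign), "malicious": centroid(malicious)}
print(classify(attacker_traffic, poisoned))  # benign
```

Real poisoning attacks target far more complex models, but the principle is the same: if attackers can influence what the model learns from, they can influence what it concludes.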
To prevent your AI from working against you, it’s important to create safeguards. Regularly evaluate the configurations of your devices and applications, and monitor the areas of your cybersecurity infrastructure that aren’t directly related to artificial intelligence tools. This benefits not only your AI, but your security posture overall.
AI’s increased prominence in cybersecurity cuts both ways. As more cybersecurity enterprises leverage AI to boost their security, hackers can do much the same, through methods like AI-generated phishing emails or constantly changing malware signatures.
Thankfully, well-functioning AI is difficult to build, even for companies with the resources and expertise to do so. As such, the average cybercriminal probably isn’t going to be using AI for their next social engineering scheme. However, state-backed hackers from countries like Russia may have access to sophisticated AI hacking capabilities.
AI’s efficacy in cybersecurity mirrors its efficacy in any other field: focused on the things it has proven to do effectively and consistently, it’s useful; pointed anywhere else, it struggles, often mightily.
Knowledge is key when implementing AI into a cybersecurity strategy, both knowledge in the form of the data you feed your AI to train it and knowledge in the form of understanding what AI is good at and how to best leverage that for your business.
Ultimately, AI, like firewalls or IDPS, is a tool, and no one tool is going to be the cure for all your cybersecurity woes. Although artificial intelligence can be a benefit to your organization’s cybersecurity strategy, you still need people working to support it. Otherwise, you’ll be putting your weight on an unstable foundation.

eSecurity Planet is a leading resource for IT professionals at large enterprises who are actively researching cybersecurity vendors and the latest trends. eSecurity Planet focuses on providing instruction on how to approach common security challenges, as well as informational deep-dives about advanced cybersecurity topics.