ISC2 Research: Most Cybersecurity Professionals Expect AI to Impact Their Jobs


Most cybersecurity professionals (88%) believe AI will significantly impact their jobs, according to a new survey by the International Information System Security Certification Consortium (ISC2), though only 35% of respondents have already witnessed AI's effects on their jobs (Figure A). The anticipated impact is not necessarily positive or negative; rather, it indicates that cybersecurity pros expect their jobs to change. Respondents also raised concerns about deepfakes, misinformation and social engineering attacks. The survey additionally covered policies, access and regulation.
Survey respondents generally believe that AI will make cybersecurity jobs more efficient (82%) and will free up their time for higher-value tasks by handling routine work (56%). In particular, AI and machine learning could take over these aspects of cybersecurity jobs (Figure B):
The survey doesn’t necessarily rank a response of “AI will make some parts of my job obsolete” as negative; instead, it’s framed as an improvement in efficiency.
In terms of cybersecurity attacks, the professionals surveyed were most concerned about:
The community surveyed was conflicted on whether AI would be better for cyber attackers or defenders. When asked about the statement “AI and ML benefit cybersecurity professionals more than they do criminals,” 28% agreed, 37% disagreed and 32% were unsure.
Of the surveyed professionals, 13% said they were confident they could definitively link a rise in cyber threats over the last six months to AI; 41% said they couldn’t make a definitive connection between AI and the rise in threats. (Both of these statistics are subsets of the group of 54% who said they’ve seen a substantial increase in cyber threats over the last six months.)
SEE: The UK’s National Cyber Security Centre warned generative AI could increase the volume and impact of cyberattacks over the next two years – although it’s a little more complicated than that. (TechRepublic)
Threat actors could take advantage of generative AI in order to launch attacks at speeds and volumes not possible with even a large team of humans. However, it’s still unclear how generative AI has affected the threat landscape.
Only 27% of ISC2 survey respondents said their organizations have formal policies in place for safe and ethical use of AI; another 15% said their organizations have formal policies on how to secure and deploy AI technology (Figure C). Most organizations are still working on drafting an AI use policy of one kind or another:
The survey found a very wide variety of approaches to allowing employees access to AI tools, including:
The adoption of AI is still in flux and will surely change further as the market grows, falls or stabilizes. Cybersecurity professionals may be at the forefront of awareness about generative AI issues in the workplace, since the technology affects both the threats they respond to and the tools they use for work. A majority of cybersecurity professionals surveyed (60%) said they feel confident they could lead the rollout of AI in their organization.
“Cybersecurity professionals anticipate both the opportunities and challenges AI presents, and are concerned their organizations lack the expertise and awareness to introduce AI into their operations securely,” said ISC2 CEO Clar Rosso in a press release. “This creates a tremendous opportunity for cybersecurity professionals to lead, applying their expertise in secure technology and ensuring its safe and ethical use.”
The ways in which generative AI is regulated will depend a lot on the interplay between government regulation and major tech organizations. Four out of five survey respondents said they “see a clear need for comprehensive and specific regulations” over generative AI. How that regulation may be done is a complicated matter: 72% of respondents agreed with the statement that different types of AI will need different regulations.
The survey was distributed to an international group of 1,123 cybersecurity professionals who are ISC2 members between November and December 2023.
The definition of "AI" remains somewhat ambiguous. While the report uses the general terms "AI" and machine learning throughout, the subject matter is described as "public-facing large language models" like ChatGPT, Google Gemini or Meta's Llama — usually known as generative AI.