Cybersecurity teams are taking a cautious approach to artificial intelligence (AI), despite industry hype and pressure from business leaders to accelerate adoption, according to a new survey from ISC2.
While AI is widely promoted as a game-changer for security operations, only a small proportion of practitioners have integrated these tools into their daily workflows, with many remaining hesitant due to concerns over privacy, oversight, and unintended risks.
Adoption most advanced in industrial sectors
Many CISOs remain cautious about AI adoption, citing concerns around privacy, oversight, and the risks of moving too quickly. A recent survey of over 1,000 cybersecurity professionals found that just 30 percent of cybersecurity teams are currently using AI tools in their daily operations, while 42 percent are still evaluating their options. Only 10 percent said they have no plans to adopt AI at all.
Adoption is most advanced in industrial sectors (38 percent), IT services (36 percent), and professional services (34 percent). Larger organisations with more than 10,000 employees are further ahead on the adoption curve, with 37 percent actively using AI tools.
Potential of AI in cybersecurity
In contrast, smaller businesses, particularly those with fewer than 99 staff or between 500 and 2,499 employees, show the lowest uptake, with only 20 percent using AI. Among the smallest organisations, 23 percent say they have no plans to evaluate AI security tools at all.
Andy Ward, SVP International at Absolute Security, commented: “The ISC2 research echoes what we’re hearing from CISOs globally. There’s real enthusiasm for the potential of AI in cybersecurity, but also a growing recognition that the risks are escalating just as fast. Our research shows that over a third (34%) of CISOs have already banned certain AI tools like DeepSeek entirely, driven by fears of privacy breaches and loss of control."
Robust strategies for cyber resilience
Ward added: "AI offers huge promise to improve detection, speed up response times, and strengthen defences, but without robust strategies for cyber resilience and real-time visibility, organisations risk sleepwalking into deeper vulnerabilities."
"As attackers leverage AI to reduce the gap between vulnerability and exploitation, our defences must evolve with equal urgency. Now is the time for security pioneers to ensure their people, processes, and technologies are aligned, or risk being left dangerously exposed.”
Privacy and control over sensitive data
Arkadiy Ukolov, Co-Founder and CEO at Ulla Technology Ltd, comments: “It’s no surprise to see security professionals taking a measured, cautious approach to AI. While these tools bring undeniable efficiencies, privacy and control over sensitive data must come first. Too many AI solutions today operate in ways that risk exposing confidential information through third-party platforms or unsecured systems."
"For AI to be truly fit for purpose in cybersecurity, it must be built on privacy-first foundations, where data remains under the user’s control and is processed securely within an enclosed environment. Protecting sensitive information demands more than advanced tech alone, it requires ongoing staff awareness, training on AI use, and a robust infrastructure that doesn’t compromise security."
Key areas of improvement
Despite this caution, the benefits are clear where AI has been implemented: 70 percent of those already using AI tools report positive impacts on their cybersecurity team's overall effectiveness. Key areas of improvement include network monitoring and intrusion detection (60 percent), endpoint protection and response (56 percent), vulnerability management (50 percent), threat modelling (45 percent), and security testing (43 percent).
Looking ahead, AI adoption is expected to have a mixed impact on hiring. Over half of cybersecurity professionals believe AI will reduce the need for entry-level roles by automating repetitive tasks.
Skills and roles required to manage AI technologies
However, 31 percent anticipate that AI will create new opportunities for junior talent or demand new skill sets, helping to rebalance some of the projected reductions in headcount.
Encouragingly, 44 percent said their hiring plans have not yet been affected, though the same proportion reports that their organisations are actively reconsidering the skills and roles required to manage AI technologies.