In the ever-evolving struggle between cyber offence and defence, attackers have almost always moved first. In the emerging domain of artificial intelligence, that pattern is repeating itself. Yet global cybersecurity leaders appear disconcertingly disengaged: only just over half agree that AI-driven attacks are set to become dramatically more complex and widespread.
Equally concerning is the widespread apathy regarding AI’s role in expanding an already sprawling corporate attack surface. This is no small oversight. A recent global Trend Micro study showed that 73 percent of organisations have already suffered cybersecurity incidents due to unknown or unmanaged assets. In an era where digital blind spots are both common and consequential, hesitation is a risk few can afford. Security has to shift from reactive protection to proactive risk exposure management.
The opportunity and the risk of AI
The potential for AI to transform enterprise operations is enormous, but so is the risk. The warnings have been loud and clear. As early as the first quarter of 2024, the UK’s National Cyber Security Centre (NCSC) stated that AI would “almost certainly increase the volume and heighten the impact of cyber-attacks over the next two years.”
That prediction is proving accurate. Threat actors are now using jailbroken versions of legitimate generative AI tools such as ChatGPT, freely traded as services on the dark web, as well as malicious models like FraudGPT, built on open-source large language models (LLMs). These tools are no longer just about automating tasks; they are turbocharging the entire attack lifecycle. From more convincing phishing emails and precise target selection to sophisticated malware creation and lateral movement within breached systems, AI is driving a step-change in threat actor capability.
However, this is only one side of the coin. The other, often overlooked, is AI’s impact on the corporate attack surface. Even well-meaning employees can unintentionally expand organisational risk. The widespread use of AI-as-a-service tools like ChatGPT introduces significant shadow IT concerns, especially when sensitive business information is input without proper oversight. Data processing and storage practices for many of these services remain opaque, raising additional compliance concerns under regulations like the UK GDPR and the EU’s AI Act.
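To make the shadow-IT risk concrete, the minimal Python sketch below shows one kind of guardrail a security team might place between staff and an external AI service: a redaction pass that strips obvious sensitive patterns from a prompt before it leaves the network. It is an illustration only; the patterns and function names are hypothetical, and a real deployment would rely on a proper data loss prevention engine with far broader coverage.

```python
import re

# Illustrative patterns only; real DLP coverage is far broader
# (names, contract terms, source code, customer records).
SENSITIVE_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern before the
    prompt is sent to an external AI-as-a-service endpoint."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

print(redact("Summarise: invoice from jane.doe@example.com, card 4111 1111 1111 1111"))
# -> Summarise: invoice from [REDACTED:email], card [REDACTED:card_number]
```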
For those organisations that choose to build or customise their own LLMs, the risks multiply. Integrating open-source models may expose businesses to vulnerabilities, misconfigurations and flawed dependencies. Each new tool and environment adds to the complexity of an attack surface already strained by remote work setups, sprawling cloud deployments, IoT ecosystems, and accelerating digital transformation programmes.
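One narrow but useful control for teams building on open-source models is artefact pinning: refusing to load weights whose cryptographic hash has not been explicitly approved, which guards against tampered re-uploads of public checkpoints. The Python sketch below assumes hypothetical file paths and a placeholder digest; it illustrates the idea rather than offering a complete supply-chain defence.

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list: the exact SHA-256 of every open-source
# artefact the organisation has reviewed and approved.
APPROVED_WEIGHTS = {
    "models/llm-7b.safetensors": "9f2c...placeholder-digest...",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def load_if_approved(path: str) -> bytes:
    """Refuse weights whose hash is unknown or has drifted since review."""
    actual = sha256_of(Path(path))
    if APPROVED_WEIGHTS.get(path) != actual:
        raise RuntimeError(f"Unapproved or modified artefact: {path} ({actual})")
    return Path(path).read_bytes()
```

Pinning exact dependency versions and scanning them for known CVEs applies the same principle at the package level.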
Managing the expanding risk landscape
Many security leaders do understand what is at stake. Nine in ten agree that effective attack surface management is tied directly to business risk. They cite a long list of potential consequences: disruption to operations, reputational damage, declining competitiveness, strained supplier relationships, financial losses and reduced staff productivity. Many have already experienced security incidents where a lack of asset visibility was the root cause.
Despite this recognition, however, the response remains largely inadequate. Fewer than half of global organisations use dedicated tools to monitor their attack surface proactively. On average, only a quarter of cybersecurity budgets are allocated to managing cyber risk exposure. Third-party risk management is similarly neglected: fewer than half of firms actively monitor their vendors for vulnerabilities.
This inertia creates an obvious contradiction. Security leaders understand the business implications of unmanaged risk, yet they are not equipping themselves with the tools or processes to respond. That needs to change, and fast.
How AI can help defenders take the lead
There is good news: AI is not only a weapon for cybercriminals. It can also be a powerful ally for defenders, particularly in the field of Cyber Risk Exposure Management (CREM). The best tools in this category use AI to continuously scan an organisation’s entire digital footprint. They can automatically detect vulnerabilities, spot misconfigurations, identify rogue or shadow assets, and provide prioritised remediation recommendations.
Intelligent algorithms can also analyse network behaviour to identify anomalies that could signal a breach in progress. Unlike traditional tools, which often drown analysts in noise, CREM platforms apply contextual filtering to reduce false positives and elevate the most urgent threats. For overburdened security teams, this enables a far more focused and effective response.
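The contextual-filtering idea can be illustrated with a toy prioritisation function. The fields and weights below are invented for the example and do not describe any particular CREM product; the point is simply that business context, exposure and exploitability reshape a raw severity score.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cvss: float               # raw technical severity, 0-10
    internet_exposed: bool    # reachable from outside the perimeter?
    asset_criticality: float  # business weight, 0 (lab box) to 1 (crown jewels)
    exploit_available: bool   # public exploit in circulation?

def contextual_score(f: Finding) -> float:
    """Weight raw severity by business context, so an exposed, exploitable
    flaw on a critical asset outranks a higher-CVSS bug on an isolated VM."""
    score = f.cvss / 10
    score *= 0.5 + 0.5 * f.asset_criticality
    if f.internet_exposed:
        score *= 1.5
    if f.exploit_available:
        score *= 1.5
    return min(score, 1.0)

findings = [
    Finding("test-vm-07", 9.8, False, 0.1, False),
    Finding("payments-api", 7.5, True, 1.0, True),
]
for f in sorted(findings, key=contextual_score, reverse=True):
    print(f"{contextual_score(f):.2f}  {f.asset}")
# payments-api (1.00) now outranks the noisier test-vm-07 finding (0.54)
```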
However, the key word here is "continuous". Today's IT environments, especially in the cloud, are dynamic and fast-moving: assets appear and disappear within minutes, and static, point-in-time assessments are no longer sufficient. Yet more than half of organisations still lack continuous scanning processes, leaving them exposed to risks that can persist undetected for weeks or months.
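What continuous discovery means in practice can be sketched as a simple inventory diff: each cycle compares the assets seen now with those seen last time, so anything that appears (possible shadow IT) or vanishes is flagged within minutes rather than at the next quarterly assessment. The discovery function below is a stub standing in for real feeds such as cloud provider APIs, DNS enumeration or agent telemetry.

```python
import time

def discover_assets() -> set[str]:
    # Stub standing in for real discovery feeds; returns whatever
    # asset identifiers are visible right now (hypothetical data).
    return {"vm-web-01", "vm-web-02"}

def watch(interval_seconds: int = 300) -> None:
    """Continuously diff successive inventories and flag changes."""
    known: set[str] = set()
    while True:
        current = discover_assets()
        for asset in sorted(current - known):
            print(f"NEW asset discovered: {asset}")   # route to triage
        for asset in sorted(known - current):
            print(f"Asset disappeared: {asset}")      # decommissioned or hidden?
        known = current
        time.sleep(interval_seconds)
```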
Overcoming barriers to adoption
So what is holding organisations back? In many cases, it's not the technology itself but the internal politics of investment. Security leaders interested in CREM tools often prioritise real-time alerting, clear dashboards and seamless integration with their existing environments. All of this is now achievable; the challenge lies in securing board-level support.
Boards are often cautious when it comes to cybersecurity investment, particularly when the immediate ROI is unclear. To win their trust, security leaders must learn to speak the language of business risk, not technical threats, framing cyber exposure in terms of reputational impact, regulatory liability, operational continuity and investor confidence.
There is also a cultural component. Many security teams still work in silos, disconnected from the broader business. This limits their influence and makes it harder to embed security as a strategic enabler. In the AI era, this divide must be bridged. Cybersecurity must become a board-level concern, and risk exposure must be treated as a fundamental operational issue.
Time to act
We are at a critical inflection point. The AI revolution is not on the horizon; it is already here. Threat actors are moving rapidly to exploit it, leveraging tools and techniques that were unthinkable just a few years ago. Meanwhile, organisations remain slow to respond: too few are investing in the tools, processes and people needed to manage their risk exposure effectively.
AI can be used not only to attack but to defend. CREM tools powered by AI offer a powerful way to regain visibility, restore control, and build lasting resilience. They enable proactive rather than reactive security. And they help organisations align their cybersecurity strategy with their broader business objectives.
Security teams have to elevate the conversation. They must advocate not just for new tools but for a new mindset: one that treats cyber risk as enterprise risk and prioritises continuous visibility as a prerequisite for resilience.