Analysis: AI investment surges ahead of safeguards

As companies race to harness the power of artificial intelligence, a widening “governance gap” is putting them at risk. According to new research from BSI, many firms are investing heavily in AI tools to boost productivity and cut costs — but without the oversight, policies, or protective frameworks needed to manage the risks. The result? Businesses may be sleepwalking into a crisis of accountability, security, and compliance.
The global study combines an AI-assisted analysis of over 100 annual reports from multinationals with two global polls of over 850 senior business leaders, conducted six months apart. Together these offer a comprehensive view of how AI is publicly framed in corporate communications, alongside executive-level insights into its implementation.
What is the governance gap?
Some 62% of business leaders expect to increase investment in AI in the next year; when asked why, a majority cited boosting productivity and efficiency (61%), with half (49%) focused on reducing costs. A majority (59%) now consider AI to be crucial to their organisation’s growth, highlighting the integral role executives see AI playing in the future success of their businesses.
Highlighting the striking absence of safeguards, fewer than a quarter (24%) reported that their organisation has an AI governance programme, though this rose modestly to just over a third (34%) in large enterprises [1], a pattern repeated across the research. While nearly half (47%) say AI use is controlled by formal processes (up from 15% in February 2025), only a third (34%) report using voluntary codes of practice (up from 19%). Only a quarter (24%) say employee use of AI tools is monitored, and only 30% have processes to assess the risks AI introduces and the mitigations required. Just one in five businesses (22%) restrict employees from using unauthorised AI tools.
The AI-assisted analysis reinforced this emerging governance gap and also identified a second, geographical one. Keyword analysis showed governance and regulation were more central themes in reports produced by UK-based companies, appearing 80% more frequently than in reports from companies based in India and 73% more frequently than in those from companies based in China.
A key component of the governance and management of AI lies in how data is collected, stored and used to train large language models (LLMs). Yet only 28% of business leaders know what sources of data their business uses to train or deploy its AI tools, down from 35% in February. Just two fifths (40%) said their business has clear processes in place around the use of confidential data for AI training.
Susan Taylor Martin, Chief Executive of BSI, said: “The business community is steadily building up its understanding of the enormous potential of AI, but the governance gap is concerning and must be addressed. While it can be a force for good, AI will not be a panacea for sluggish growth, low productivity and high costs without strategic oversight and clear guardrails – and indeed without this being in place, new risks to businesses could emerge.
“Divergence in approaches between organisations and markets creates real risks of harmful applications. Overconfidence, coupled with fragmented and inconsistent governance approaches, risks leaving many organisations vulnerable to avoidable failures and reputational damage. It’s imperative that businesses move beyond reactive compliance to proactive, comprehensive AI governance.”
What risk and security concerns remain under-addressed?
Nearly a third of executives (32%) felt AI has been a source of risk or weakness for their business, yet just one in three (33%) have a standardised process for employees to follow when introducing new AI tools.
Capability in managing these risks appears to be declining: only 49% say their organisation includes AI-related risks within broader compliance obligations, down from 60% six months earlier. Just 30% reported having a formal risk-assessment process to evaluate where AI may be introducing new vulnerabilities.
In their annual reports, financial services (FS) organisations placed the highest emphasis on AI-related risk and security (25% more focus than the next highest, the built environment).
FS firms particularly highlighted the cybersecurity risks associated with implementing AI, likely reflecting traditional consumer protection responsibilities and the reputational consequences of security breaches. In contrast, technology and transport companies placed significantly less emphasis on this theme, raising questions about sectoral divergence in governance approaches.
How much focus is on errors and value?
There is also limited focus on what happens if AI goes wrong. Just a third (32%) say their organisation has a process for logging issues or flagging concerns or inaccuracies with AI tools so they can be addressed, while just three in ten (29%) cite having a process for managing AI incidents and ensuring a timely response. Around a fifth (18%) felt that if generative AI tools were unavailable for a period of time, their business could not continue operating.
More than two fifths (43%) of business leaders say AI investment has taken resources that could have been used on other projects. Yet only 29% have a process to avoid duplication of AI services across different departments.
How have human oversight and training fallen to the bottom of the list?
Across the annual reports, the term “automation” appears nearly seven times more often than “upskilling”, “training”, or “education”. The relatively low prominence of workforce-related topics suggests businesses may be underemphasising the need to invest in human capital alongside technological advancement.
There is some complacency among business leaders that the workforce is well equipped to navigate the disruption AI brings and to acquire the new skills required to get the best out of it. Over half of leaders globally (56%) say they are confident their entry-level workforce has the skills needed to use AI, and 57% say their entire organisation currently possesses the skills necessary to use AI tools effectively in daily tasks. More than half (55%) say they are confident their organisation can train staff to use generative AI critically, strategically, and analytically.
A third (34%) have a dedicated learning and development programme to support AI training. A higher proportion (64%) say they have received training to use or manage AI safely and securely, suggesting that fear of AI may be driving reactive training rather than proactive capability-building. This report follows earlier BSI research, published in October 2025, into the impact of the rollout of generative AI on roles and work patterns.
BSI published the first AI management standard, BS ISO/IEC 42001:2023, in late 2023, and has since certified businesses to it, including KPMG Australia.