AI 'hesitancy', 'dangerous pathways' and rising expectations of risk officers

By William Maher on Dec 14, 2025 7:17PM
[L-R] Brad Howarth, Author and Researcher; David Siroky, Cisco; Carl Solder, Cisco; Daniel Popovski, Governance Institute of Australia, at Cisco Live Melbourne 2025

AI partners that do not know their customers’ risk officers – if the customer has one – would do well to change that in the new year, if Cisco and the Governance Institute of Australia’s advice is any guide.

Together with Australian ICT advisory IBRS, the two organisations paint a grim picture of AI governance in Australia: boardroom hesitancy layered on top of years of weak information governance and tech debt, which IBRS says is creating “dangerous pathways for unmanaged AI access”.

Cisco and the Governance Institute of Australia recently joined the chorus of commentators pointing out the disconnect between AI hype and business reality with the publication of Turning Hesitation Into Action: How Risk Leaders Can Unlock AI’s Potential – a paper light on fresh data and pitched as a “conversation starter”.

The paper argues that risk officers could play an important role in addressing AI “hesitancy”.

But it doesn’t give the impression they are brimming with confidence. Risk professionals told the team behind the paper that the factors at play included “education, accountability, fear of failure, lack of AI pilots or the use of secure sandboxes, and a cautious ‘watch and learn’ approach”.

A lack of “proven use cases and clear ROI statements has led organisations to adopt a 'wait and see' stance on AI, in the hope of learning from the successes of others before acting themselves,” the paper says.

Fully 93% of the 344 professionals surveyed in February 2025 by the Governance Institute of Australia (for a report sponsored by the National Artificial Intelligence Centre, PKF and Diligent) said they were unable to effectively measure return on investment for AI initiatives.

The benefits of AI were not sufficiently understood to overcome the associated risks, so funds available for AI projects remained limited, according to the recent Cisco paper.

“Many pilot programmes have reported disappointing results, leading to further negativity regarding the value of AI investments.”

“We are observing greater hesitancy with AI, particularly in workplace environments and across boardrooms,” Governance Institute of Australia AI, Tech and Cyber Policy and Advocacy Lead Daniel Popovski writes in the foreword to the Cisco/Institute paper.

“One of the issues is companies are holding back, based on their understanding of where AI may impact their company,” Popovski told techpartner.news at the Cisco Live conference in Melbourne last month.

“So for instance, there might be a safety or privacy breach. I'm not saying there are notifiable, reportable incidents of that, but their awareness of the potential for that to occur has led to the hesitancy.”

Governance debt

IBRS saw the picture painted by Cisco and the Governance Institute of Australia as a symptom of an older problem.

“This reflects a deeper failure in technology governance that began well before AI,” IBRS future of work advisor Dr Joseph Sweeney told techpartner.news.

“The cracks appeared during COVID, when we rushed to deploy collaboration tools such as Teams, throwing information governance to the wind in many cases.”

“This resulted in sprawling information messes which we still have not fully addressed,” he said.

“Our two IBRS end-of-year papers show this debt (it’s both governance debt and legacy tech debt) is now compounding: information assets are growing uncontrollably, creating challenges for managed AI, and worse, dangerous pathways for unmanaged AI access.”

IBRS has captured this in data on its clients' plans and activities in 2025, Sweeney said. Its upcoming paper will show “governance remains reactive and structurally underpowered compared to what leaders claim, and AI needs.”

Tools “unknown”

Tools and capabilities for AI risk assessment are “unknown” to risk professionals or “not viewed as a tool specific to AI risk assessment”, claims the Cisco and Governance Institute of Australia paper.

Core AI concepts and capabilities are “rarely taught outside of technology streams”.

Still, the paper argues risk professionals are among the most critical components of any organisation’s AI adoption strategy.

The paper recommends they embed AI into business strategy, create an interdisciplinary AI governance committee, raise AI awareness across the workforce, and measure AI project results.

It also recommends spending on “appropriate controls to ensure their governance structures can be implemented and monitored at a tactical level.” This includes controls for policies and procedures, tools for permissions and monitoring, and management of data loss and unvetted release of AI apps and agents to the public.

Risk professionals should also forge close ties with their CIO or CTO, or with external service providers, the paper recommends.

Sweeney agreed risk functions can play a key role, but only if their approach changes.

“Risk officers can be important influencers in AI strategies, but only if they pivot from a narrow focus of compliance to exploring value (which they can express as risk and reward) for strategic planning.”

In his view, many risk leaders are not yet equipped for that.

“However, many risk officers lack the understanding to manage the 'probabilistic' (really stochastic) nature of general AI, and few have a strong notion of the deeper business risks associated with AI – growing and often unexpected costs of AI, the expected near future changes to AI vendors and the market as a whole, and the technology itself.”

In Sweeney’s view, one issue should be front of mind for risk teams.

“A big issue for them should be market shift from agentic hype to 'orchestration' – embedded AI workflows. This change requires deep financial and information governance, not just technical checks.”

Cost concerns

Asked if he saw a gap for smaller-tier advisory firms to address this issue, Sweeney said he saw some technology partners, including MSPs, doing some of this work, though “often in narrow domains.”

“At a recent gathering of partners I attended (AWS, Google and Microsoft) many were looking at how they could move 'up the food chain' in terms of offering advisory around AI. There is definitely a need and that need will certainly be filled.”

AI governance was the leading topic of interest for IBRS’s clients throughout much of 2025, though Sweeney said their focus was now shifting to financial issues.

“Financial risks associated with AI are a growing concern, and rightly so. Running AI solutions is about to get more costly.”

William Maher travelled to Cisco Live Melbourne 2025 as a guest of Cisco.
