The past holds lessons for the “wild west” of AI security

By Ben Moore on Sep 4, 2025 1:50PM

In the current “wild west” of artificial intelligence in business IT, as organisations grapple with what good AI security looks like, it’s important to go back to basics.

That's according to Dell EMC data protection and cyber resilience systems engineer David Williams, who shared his thoughts at the 2025 Dell Technologies Forum in Sydney.

Williams said that while large language model (LLM) technology may be new, it still lives on the same kinds of infrastructure organisations have been using for decades.

“Protecting your infrastructure is the first step to protecting your AI landscape and applications,” he said.

Williams made the point that 10 years ago, it was common for people to have near-unrestricted access to public cloud: they could simply create a new account, expense it, and use it how they pleased.

Now, justifications must be made and signed off, there are security policies to be met, and it’s bundled into one main purchase code.

He said AI is currently in that “wild west” place public cloud was in 10 years ago.

“We haven't worked out quite what all the attack vectors are on your large language models, quite how your RAG [retrieval-augmented generation] systems are going to be poisoned, quite what we need to do to enforce the policy and risk wrappers end-to-end across the estate,” he explained.

“But,” he added, “what we can do is learn the lessons of times gone by.”

First, there are the same kinds of restrictions now seen for public cloud use, which Williams said are already in place internally at Dell.

“Within Dell, if you want to use an LLM, it will say ‘website blocked – we have an internal, corporate-approved AI chatbot’,” he said.

"If you try and build an AI agent on it, you have to go through a process to be able to get the ability, the skills and the sign off to be able to do that. We've learned from our history.”

He extended this comparison to the most fundamental aspect of any AI tool: the data it accesses and is fed to function.

He said the three main pillars in a traditional threat funnel remain vital to defending IT systems.

First is reducing the attack surface; Williams said this means building security in at the hardware layer, along with tools like secure component verification.

Second is detecting attacks, an area that he said is being enhanced by machine learning tools that are able to detect anomalies.

Third is recovering from attacks.

“If you can't get back quickly, then is the organisation still going to be there? Are your customers still going to have trust in you? Are you still going to be able to be a business by the time you come back?” he asked.

Additionally, he said more attackers were targeting backups.

“We’re dealing with a new era in cyberattacks," he told the audience.

"94% of the time backups came under attack, either successfully or not,” he said, attributing the statistic to Sophos and CrowdStrike.

He recommended investing in a tool that provides highly secure backups made through pull replication, which he claimed is more secure because it is invisible to the main system.

“If you are pushing data into an isolated environment from a control plane outside of that environment, that is not isolated, that is a DR (disaster recovery) copy of your data," he explained.

"You need to be pulling the data - because this replication is at the storage level, none of your production backup software even knows this exists, not Dell's backup software and not our competitors' backup software.

"So when the highly skilled, highly motivated bad guys are looking for your backups, they find all your backups, they try and poison them – they don't know about this one.”

Copyright © nextmedia Pty Ltd. All rights reserved.