The Digital Transformation Agency (DTA) has published new guidance for Australian Government employees on safe and responsible use of public generative AI tools.
The guidance builds on previous interim guidance and is informed by input from across government.
For Australian Government staff, the guidance outlines key principles, including not entering sensitive or security classified information (OFFICIAL: Sensitive or above) into these tools; never entering personal information; and checking outputs for fairness, accuracy and bias, “noting generative AI can produce convincing but inaccurate content and reproduce biases from its training data”.
For agencies, the guidance recommends providing training and safeguards that build workforce capability; monitoring use, requiring human oversight and recording AI-supported decisions; and prioritising enterprise-grade AI solutions for sensitive or classified material.
The DTA said that a growing number of Australian Government agencies are adopting enterprise generative AI tools, which offer stronger data controls and align with the Australian Government’s security requirements.
Some agencies already allow staff to access certain web-based public generative AI tools, and the guidance is designed to encourage more agencies to provide that access.
Alongside the guidance, the Department of Home Affairs has released a Protective Security Policy Framework (PSPF) Policy Advisory on OFFICIAL Information Use with Generative Artificial Intelligence.
The Policy Advisory provides certainty to Australian Government entities that OFFICIAL information can be used with generative AI technologies. It also establishes central guidance covering foreign ownership, control and influence for 18 Australian and foreign companies under the Hosting Certification Framework, streamlining approval processes within agencies.
DTA deputy CEO Lucy Poole said the government does not want staff from any agency using these tools without proper advice.
“Ensuring staff have clear guidance on what information they can share with these services, and how, is critical to minimise risks and maximise the opportunities that AI presents to the public service,” she said.
“Generative AI is here to stay. This guidance gives our workforce the confidence to use public generative AI tools in their roles while keeping security and public trust at the centre of everything we do.”
Earlier this year, the DTA released the Australian Government’s AI technical standard, a resource to support agencies in delivering AI-enabled services across the public sector.