Sydney-based secure coding platform Secure Code Warrior has expanded the AI capabilities within its Trust Agent product, aiming to provide CISOs with security traceability, visibility and governance over developers’ use of AI coding tools.
The offering, referred to as Trust Agent: AI, combines key signals, including AI coding tool usage, vulnerability data, code commit data and developer secure coding skills, to provide visibility into how AI development tools affect risk across the software development lifecycle (SDLC).
The solution also claims to offer integrated governance at scale through identification of unapproved LLMs, including visibility into the actual vulnerabilities those LLMs introduce; flexible policy controls to log, warn on or block pull requests from developers using unsanctioned tools or lacking sufficient secure coding knowledge; and output analysis that measures how much code is AI-generated and where it is located across repositories.
General availability is expected in 2026, but an early access list for the beta program is now available.
“AI allows developers to generate code at a speed we’ve never seen before. However, with the wrong LLM in the hands of a security-unaware developer, that 10x increase in code velocity will introduce 10x the amount of vulnerabilities and technical debt,” said Pieter Danhieux, Secure Code Warrior co-founder and CEO.
“Trust Agent: AI produces the data needed to plug knowledge gaps, route security-proficient developers to the most sensitive projects, and, importantly, monitor and approve the AI tools they use throughout the day. We’re dedicated to helping organizations prevent the uncontrolled use of AI from undermining software and product security.”