Rubrik has unveiled its Semantic AI Governance Engine (SAGE), which it claims is the industry’s first AI governance engine designed to secure and control autonomous agents in real time.
SAGE aims to solve a governance bottleneck in legacy systems, whose software stacks rely on deterministic rules and cannot interpret natural language or adapt to the dynamic, unforeseen actions of AI agents.
It does this by using the firm’s custom Small Language Model (SLM) to interpret the semantic meaning of policies, thereby providing a real-time command centre for agentic actions.
According to the company, SAGE translates natural-language instructions into machine logic, recognising context that static filters miss, while also proactively identifying ambiguous guardrails and suggesting refinements to administrators before a violation occurs.
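Rubrik has not published SAGE’s internals, but the pattern it describes, a model scoring each proposed agent action against natural-language policy text before execution, can be sketched roughly as follows. Everything here is hypothetical: `gate_action`, `Verdict`, and the keyword heuristic standing in for the SLM are invented for illustration, not Rubrik’s implementation.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str


def violates(policy_text: str, action: str) -> bool:
    """Hypothetical stand-in for the SLM call.

    A real engine would ask a small language model whether the proposed
    action violates the policy; this toy heuristic just checks whether the
    action shares a significant word with the policy's "never ..." clause.
    """
    lowered = policy_text.lower()
    if "never" not in lowered:
        return False
    clause = lowered.split("never", 1)[1]
    return any(word in action.lower() for word in clause.split() if len(word) > 4)


def gate_action(policies: list[str], action: str) -> Verdict:
    """Real-time gate: every agent action is checked before it executes."""
    for policy in policies:
        if violates(policy, action):
            return Verdict(False, f"blocked by policy: {policy}")
    return Verdict(True, "no policy matched")


policies = ["Agents must never delete customer records without approval."]
print(gate_action(policies, "delete rows from customers table").allowed)  # False
print(gate_action(policies, "send weekly summary email").allowed)         # True
```

The key design point the article describes is semantic rather than rule-based matching: a deterministic filter would need an explicit rule per phrasing, whereas a language model can judge paraphrased or novel actions against the same policy text.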
In the event of an agent error, SAGE triggers Rubrik Agent Rewind to instantly undo destructive actions and restore data integrity.
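Rubrik describes Agent Rewind only at a high level. Conceptually, undo-on-error resembles a journal of inverse operations replayed in reverse; the sketch below is purely illustrative (the `ActionJournal` class and the in-memory store are invented here, and a real system would restore from backup snapshots rather than closures).

```python
class ActionJournal:
    """Illustrative undo journal: each destructive action records an
    inverse operation so a rewind can replay them most-recent-first."""

    def __init__(self):
        self._undo_stack = []

    def record(self, description, undo_fn):
        self._undo_stack.append((description, undo_fn))

    def rewind(self):
        """Undo all recorded actions in reverse order; return what was undone."""
        undone = []
        while self._undo_stack:
            description, undo_fn = self._undo_stack.pop()
            undo_fn()
            undone.append(description)
        return undone


# Usage: an agent "deletes" a record; rewind restores it.
store = {"cust-1": {"name": "Ada"}}
journal = ActionJournal()

removed = store.pop("cust-1")
journal.record("delete cust-1", lambda: store.update({"cust-1": removed}))

journal.rewind()
print("cust-1" in store)  # True
```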
Rubrik said it demonstrated the capabilities of its SLM in a head-to-head comparison with OpenAI’s GPT 5.2, using a standardised set of user interactions.
The analysis found Rubrik’s custom SLM processed messages five times faster, achieved higher accuracy in detecting policy violations than generalised LLMs, and "significantly reduced" the compute overhead associated with real-time AI monitoring.
"SAGE marks a pivotal moment in AI security as we shift the focus from if agents can be deployed to how they can be governed at scale," said Devvret Rishi, general manager AI, Rubrik.
"With SAGE, we can move beyond simple monitoring to a future where AI helps us govern AI agents. Now, we give CISOs the guardrails they need to let their AI agents run at full speed without compromising the security and integrity of the enterprise."