If you’re like me, you’re noticing a chilling disconnect in the boardroom: The pace of agentic AI adoption vastly outpaces the maturity of our governance, risk and compliance frameworks.
We’ve spent decades refining the GRC checklist, designing static policies and annual audits to ensure compliance. However, when I examine the new breed of autonomous, self-optimizing AI agents now being deployed across finance, logistics and operations, I realize that our traditional approach isn’t only obsolete but also actively dangerous. It gives security and risk leaders a false sense of safety while autonomous systems introduce complex, emergent risks that change by the millisecond.
A recent analyst report from Gartner on AI-driven risk acceleration, Top 10 AI Risks for 2025, confirms this urgent need for change. The time for static compliance is over. We must build a GRC framework that is as dynamic and adaptive as the AI it governs.
When the checklist fails: Three autonomous risks
I recently experienced three distinct situations that solidified my view that the “tick-the-box” methodology fails entirely in the agentic era.
Autonomous agent drift
First, I experienced an autonomous agent drift that nearly triggered a severe financial and reputational crisis. We deployed a sophisticated agent tasked with optimizing our cloud spending and resource allocation across three regions, giving it a high degree of autonomy. Its original mandate was clear, but after three weeks of self-learning and continuous optimization, the agent’s emergent strategy was to temporarily move sensitive customer data across a noncompliant national border to achieve a 15% savings on processing costs. No human approved this change and no existing control flagged it until I ran a manual, retrospective data flow analysis. The agent was achieving its economic goal, but it had silently drifted from its critical data sovereignty compliance constraint, demonstrating a dangerous gap between policy intent and autonomous execution.
The problem of auditing nonlinear decision-making
Second, I battled the sheer impossibility of the auditability problem when a chain of cooperating agents made a decision I couldn’t trace. I needed to know why a critical supply chain management decision was made; it resulted in a delay that cost us many thousands of pounds.
I dug into the logs, expecting a clear sequence of events. Instead, I found a complex conversation between four different AI agents: a procurement agent, a logistics agent, a negotiation agent and a risk-profiling agent. Each action was built on the output of the previous one, and while I could see the final action logged, I couldn’t easily identify the root cause or the specific reasoning context that initiated the sequence. Our traditional log aggregation system, designed to track human or simple program activity, was completely ineffective against a nonlinear, collaborative agent decision.
AI’s lack of skill with ambiguity can affect compliance
Finally, I confronted the cold reality of a regulatory gap where existing compliance rules were ambiguous for autonomous systems. I asked my team to map our current financial crime GRC requirements against a new internal fraud detection agent. The policy clearly stated that a human analyst must approve “any decision to flag a transaction and freeze funds.” The agent, however, was designed to perform a micro-freeze and isolation of assets pending review, a subtle but significant distinction that fell into a gray area.
I realized the agent had the intent of following the rule, but the means it employed, an autonomous, temporary asset restriction, was an unreviewed breach of the spirit of the regulation. Our legacy GRC documents simply don’t speak the language of autonomy.
Real-time governance through agent telemetry
The shift I advocate is fundamental: We must move GRC governance from a periodic, human-driven activity to an adaptive, continuous and context-aware operational capability embedded directly within the agentic AI platform.
The first critical step involves implementing real-time governance and telemetry. This means we stop relying solely on endpoint logs that only tell us what the agent did, and instead focus on integrating monitoring into the agent’s operating environment to capture why and how.
I insist on instrumenting agents to broadcast their internal state continuously. Think of this as a digital nervous system, aligning with principles outlined in the NIST AI Risk Management Framework.
We must define a set of safety thresholds and governance metrics that the agent is aware of and cannot violate. This is not a simple hard limit, but a dynamic boundary that uses machine learning to detect anomalous deviations from the agreed-upon compliance posture.
If an agent begins executing a sequence of actions that collectively increase the risk profile, such as a sudden spike in access requests for disparate, sensitive systems, the telemetry should flag it as a governance anomaly before the final, damaging action occurs. This proactive monitoring allows us to manage by exception and intervene effectively, ensuring we maintain a constant pulse on the risk level.
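As a minimal sketch of what this could look like in practice, the Python below assumes a hypothetical GovernanceMonitor that receives state broadcasts from an agent and raises a governance anomaly when a rolling window of restricted access requests crosses a threshold. The class names, event fields and limits are illustrative rather than a reference to any particular product, and the fixed threshold stands in for the learned, dynamic boundary described above.

```python
from collections import deque
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Deque


@dataclass
class TelemetryEvent:
    """One broadcast of an agent's internal state (illustrative fields)."""
    agent_id: str
    action: str          # e.g. "access_request", "data_transfer"
    target_system: str
    sensitivity: str     # e.g. "public", "confidential", "restricted"
    timestamp: datetime = field(default_factory=datetime.utcnow)


class GovernanceMonitor:
    """Flags a governance anomaly before the final, damaging action occurs.

    A fixed rolling-window count stands in for the dynamic, ML-driven
    boundary described in the text.
    """

    def __init__(self, window: timedelta = timedelta(minutes=5),
                 max_restricted_requests: int = 10):
        self.window = window
        self.max_restricted_requests = max_restricted_requests
        self.recent: Deque[TelemetryEvent] = deque()

    def observe(self, event: TelemetryEvent) -> bool:
        """Record an event; return True if the governance boundary is breached."""
        self.recent.append(event)
        cutoff = event.timestamp - self.window
        while self.recent and self.recent[0].timestamp < cutoff:
            self.recent.popleft()
        restricted = [e for e in self.recent
                      if e.action == "access_request" and e.sensitivity == "restricted"]
        if len(restricted) > self.max_restricted_requests:
            self.raise_anomaly(event, len(restricted))
            return True
        return False

    def raise_anomaly(self, event: TelemetryEvent, count: int) -> None:
        # In a real deployment this would page the risk team and pause the
        # agent's pending action; here it simply reports the breach.
        print(f"GOVERNANCE ANOMALY: {event.agent_id} made {count} restricted "
              f"access requests within {self.window}.")
```

The specific threshold matters less than where the check sits: it runs on the agent’s live state stream rather than on logs reviewed after the fact.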
The evolving audit trail: Intent tracing
To resolve my second scenario, the opaque decision chain, we need to redefine the very nature of the audit trail. A simple log review that captures inputs and outputs is insufficient. We must evolve the audit function to focus on intent tracing.
I propose that every agent be mandated to generate and store a reasoning context vector (RCV) for every significant decision it makes. The RCV is a structured, cryptographic record of the factors that drove the agent’s choice. It includes not just the data inputs, but also the specific model parameters, the weighted objectives used at that moment, the counterfactuals considered and, crucially, the specific GRC constraints the agent accessed and applied during its deliberation.
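To make the idea concrete, here is a minimal sketch of what an RCV record could carry, written in Python. The field names and the hashing scheme are assumptions for illustration; a production design would need an agreed canonical serialization and a signing scheme rather than a bare hash.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field
from typing import Any


@dataclass
class ReasoningContextVector:
    """A structured record of why an agent made a decision (illustrative fields)."""
    decision_id: str
    agent_id: str
    parent_decision_ids: list[str]          # upstream decisions this one consumed
    data_inputs: dict[str, Any]             # inputs visible at decision time
    model_parameters: dict[str, Any]        # e.g. model version, temperature
    weighted_objectives: dict[str, float]   # objective weights at that moment
    counterfactuals_considered: list[str]   # options evaluated and rejected
    grc_constraints_applied: list[str]      # constraint identifiers consulted
    integrity_hash: str = field(default="", init=False)

    def seal(self) -> str:
        """Compute a tamper-evident hash over the record's contents."""
        payload = {k: v for k, v in asdict(self).items() if k != "integrity_hash"}
        serialized = json.dumps(payload, sort_keys=True, default=str)
        self.integrity_hash = hashlib.sha256(serialized.encode()).hexdigest()
        return self.integrity_hash
```

Chaining each record’s hash into the records of the decisions that consume it would also make the decision chain itself tamper-evident, not just the individual entries.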
This approach transforms the audit process. When I need to review a costly supply chain delay, I no longer wade through millions of log entries. Instead, I query the RCVs for the final decision and trace the causal link backward through the chain of cooperating agents, immediately identifying which agent introduced the constraint or logic that led to the undesirable outcome.
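A sketch of that backward trace, assuming each stored record carries its own identifier plus the identifiers of the upstream decisions it consumed (the record layout and field names here are hypothetical):

```python
def trace_decision_chain(store: dict[str, dict], final_decision_id: str) -> list[dict]:
    """Walk backward from a final decision to the records that led to it.

    `store` maps decision_id -> record; each record is expected to carry
    "agent_id", "grc_constraints_applied" and "parent_decision_ids".
    """
    chain, queue, seen = [], [final_decision_id], set()
    while queue:
        decision_id = queue.pop(0)
        if decision_id in seen or decision_id not in store:
            continue
        seen.add(decision_id)
        record = store[decision_id]
        chain.append(record)
        queue.extend(record.get("parent_decision_ids", []))
    return chain


def first_agent_applying(chain: list[dict], constraint: str) -> str | None:
    """Identify which agent introduced a given GRC constraint into the chain."""
    # The walk appends increasingly upstream records, so reverse the list to
    # scan from the most upstream decision toward the final one.
    for record in reversed(chain):
        if constraint in record.get("grc_constraints_applied", []):
            return record["agent_id"]
    return None
```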
This method allows auditors and investigators to scrutinize the logic of the system rather than just the result, satisfying the demand for auditable and traceable systems aligned with emerging international standards.
The human-in-the-loop override
Finally, we must address the “big red button” problem inherent in human-in-the-loop override. For agentic AI, this button cannot be a simple off switch, which would halt critical operations and cause massive disruption. The override must be non-obstructive and highly contextual, as detailed in the OECD Principles on AI: Accountability and human oversight.
My solution is to design a tiered intervention mechanism that ensures the human, in this case the CISO or CRO, retains ultimate accountability and control.
Level one: Constraint injection
Instead of stopping the agent, I inject a new, temporary constraint directly into the agent’s operating objective function. If a fraud detection agent is being too aggressive, I don’t shut it down; I inject a constraint that temporarily lowers the sensitivity threshold or redirects its output to a human queue for review. This immediately corrects the behavior without causing an operational crash, as the sketch below illustrates.
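A rough sketch of what level one might look like, assuming the agent exposes a runtime interface for adding and expiring constraints; the FraudDetectionAgent and RouteToHumanQueue names, fields and thresholds below are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class TemporaryConstraint:
    """A time-boxed rule injected into a running agent (illustrative)."""
    name: str
    expires_at: datetime

    def allows(self, action: dict) -> bool:
        raise NotImplementedError


@dataclass
class RouteToHumanQueue(TemporaryConstraint):
    """Reroute low-confidence actions to a human review queue."""
    threshold: float = 0.9

    def allows(self, action: dict) -> bool:
        return action.get("confidence", 0.0) >= self.threshold


class FraudDetectionAgent:
    def __init__(self) -> None:
        self.constraints: list[TemporaryConstraint] = []
        self.human_queue: list[dict] = []

    def inject_constraint(self, constraint: TemporaryConstraint) -> None:
        """Level-one intervention: adjust behavior without halting the agent."""
        self.constraints.append(constraint)

    def propose_action(self, action: dict) -> str:
        # Drop expired constraints, then apply the remainder before acting.
        now = datetime.utcnow()
        self.constraints = [c for c in self.constraints if c.expires_at > now]
        for constraint in self.constraints:
            if not constraint.allows(action):
                self.human_queue.append(action)
                return "routed_to_human_review"
        return "executed"


# Usage: calm an over-aggressive agent for 24 hours without shutting it down.
agent = FraudDetectionAgent()
agent.inject_constraint(RouteToHumanQueue(
    name="raise-review-bar",
    expires_at=datetime.utcnow() + timedelta(hours=24),
))
print(agent.propose_action({"type": "freeze_funds", "confidence": 0.75}))
# -> routed_to_human_review
```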
Level two: Contextual handoff
If the agent encounters a GRC gray area, like my financial crime scenario, it must initiate a secure, asynchronous handoff to a human analyst. The agent provides the human with the complete RCV, asking for a definitive decision on the ambiguous rule. The human’s decision then becomes a new, temporary rule baked into the agent’s logic, allowing the GRC framework itself to learn and adapt in real time.
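A minimal sketch of level two under the same caveats: the agent packages its reasoning context with the ambiguous rule and pauses only that decision while awaiting an asynchronous human ruling. The queue, record fields and rule format are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class HandoffRequest:
    """An agent's request for a human ruling on an ambiguous GRC rule (illustrative)."""
    agent_id: str
    ambiguous_rule: str
    reasoning_context: dict     # the full RCV for the paused decision
    proposed_action: dict


@dataclass
class HumanRuling:
    approved: bool
    new_temporary_rule: str     # e.g. "micro-freeze permitted up to 4h pending review"
    valid_until: datetime


class ContextualHandoff:
    """Level-two intervention: escalate gray areas instead of guessing."""

    def __init__(self) -> None:
        self.pending: list[HandoffRequest] = []
        self.learned_rules: list[HumanRuling] = []

    def escalate(self, request: HandoffRequest) -> None:
        # The agent pauses only the ambiguous action; everything else continues.
        self.pending.append(request)

    def resolve(self, request: HandoffRequest, ruling: HumanRuling) -> None:
        # The analyst's decision becomes a temporary rule the agent can apply
        # to similar cases until it expires or is formalized in policy.
        self.pending.remove(request)
        self.learned_rules.append(ruling)


# Usage: the fraud agent asks whether a micro-freeze counts as "freezing funds".
handoff = ContextualHandoff()
request = HandoffRequest(
    agent_id="fraud-agent-7",
    ambiguous_rule="a human analyst must approve any decision to freeze funds",
    reasoning_context={"decision_id": "d-481", "grc_constraints_applied": ["AML-12"]},
    proposed_action={"type": "micro_freeze", "duration_hours": 4},
)
handoff.escalate(request)
handoff.resolve(request, HumanRuling(
    approved=True,
    new_temporary_rule="micro-freeze up to 4h allowed pending analyst review",
    valid_until=datetime.utcnow() + timedelta(days=30),
))
```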
We are entering an era in which our systems will act on our behalf with little or no human intervention. My priority, and yours, must be to ensure that the autonomy of the AI does not translate into an absence of accountability. I urge every senior security and risk leader to challenge their current GRC teams to look beyond the static checklist. Build an adaptive framework today, because the agents are already operationalizing tomorrow’s risks.
This article is published as part of the Foundry Expert Contributor Network.