A finance team deploys an AI agent to handle invoice approvals. At first, it saves hours of manual work. Then one day, it approves a batch of fraudulent invoices. By the time anyone notices, the losses are material. Nobody disputes that something went wrong. The real problem is figuring out who is accountable, and what that means for the business.
Accountability is no longer theoretical
For years, accountability in AI felt abstract. Systems assisted, but humans decided. Agentic AI changes that. When systems can take actions independently, approving payments or interacting with customers, accountability becomes operational. And increasingly, financial. This shift is forcing companies to confront a question they could previously defer:
Who owns the outcome of an AI decision?
Liability exists, but doesn't map cleanly
There's a misconception that AI creates a legal grey zone. It doesn't. Generally, the organisation deploying the system is liable. AI can't be held liable itself, but agentic systems complicate how that responsibility plays out. Decisions emerge from multiple layers: models, prompts, integrations, and context. That makes it difficult to trace why something happened, even when liability is clear. The result is a gap between legal obligation and practical accountability, and that gap is where risk builds.
From tools to actors
The deeper issue is that AI is no longer just a tool. It behaves more like an actor. Traditional systems do exactly what they are programmed to do.
AI agents are given goals and determine how to achieve them. That breaks existing accountability models. It is no longer enough to ask whether the system worked correctly. You have to ask why it made a particular decision, and whether anyone could have predicted or prevented it.
Singapore is operationalising accountability
While many markets are still debating AI accountability, Singapore has focused on implementation. The Model AI Governance Framework, launched by the Infocomm Media Development Authority (IMDA), sets clear expectations around transparency, explainability, and human accountability.
More importantly, Singapore is translating these principles into practice. Through AI Verify, companies can test and demonstrate how their systems meet governance standards. The government has also been explicit about balance. As Deputy Prime Minister Gan Kim Yong put it, the goal is to find rules that are "neither too tight nor so loose that industry players run wild."
That framing positions accountability as an enabler of trust, not a constraint on innovation. It also signals a shift: governance is moving from policy to infrastructure, something embedded into how systems operate.
Trust is becoming a competitive edge
Accountability is no longer just about compliance. It affects whether AI gets adopted at all. In sectors like finance and healthcare, the ability to explain and control AI decisions determines whether systems can be deployed at scale. Even in less regulated industries, customers don't separate AI errors from company responsibility. If an agent fails, the company owns that failure. That makes accountability a competitive variable. Companies that can demonstrate control will be trusted to move faster.
Ownership is the missing piece
Accountability only works if someone owns it.
Who is responsible for this agent's decisions? Who monitors it? Who can intervene?
If those answers aren't clear, accountability doesn't exist in practice. It becomes distributed, and ineffective.
What operators should do now
Agentic AI makes accountability unavoidable.
Three shifts matter.
Assign clear ownership for every system. Build auditability so decisions can be explained. And treat governance as infrastructure, not policy. Singapore's approach offers a signal of where this is heading. Accountability is becoming something you build into systems, not something you handle after failure. When something goes wrong, the real risk isn't just the decision. It's not knowing who was responsible.
—
Editor's note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.
The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.
The post As AI agents gain autonomy, liability is shifting from theory to immediate business risk appeared first on e27.









