For the past eighteen months, I've watched boards across finance, energy, government, and healthcare wrestle with the same question. They know AI agents can compress timelines, reduce costs, and extend human capacity. What they don't know is how to answer their auditor, their regulator, or their board when something goes wrong and the question is: 'Who authorized this decision, what information did the agent use, and can you prove it?'
Autonomy is outpacing accountability
Every serious technology wave of the past two decades — cloud, mobile, SaaS, open banking — eventually reached the same inflection point. The technology worked. Enterprises adopted it. And then the question became: who is accountable? Not in a moral sense, but in a legal, auditable, reconstructible sense. Cloud had SOC 2 and ISO 27001. Mobile had app store review. Payments had PCI DSS. Each of them only scaled enterprise-wide once the accountability framework caught up.
Autonomous AI agents are now at that inflection point — and the accountability framework hasn't caught up. An agent that reads a contract, proposes a financial action, or triages a support case is making decisions. If those decisions are wrong, somebody carries the cost. Today, when we ask 'how did the agent decide?' the honest answer is usually 'we have a log file, probably, if it wasn't rotated.' That is not an answer an auditor will accept. It is not an answer the EU AI Act will accept when high-risk systems fall under its scope. And it is not an answer a board should accept.
What does 'verifiable' actually mean?
A verifiable AI agent is one for which any authorized third party — internal auditor, regulator, client — can answer four questions with cryptographic evidence, not with the vendor's word:
- Identity — Which agent took this action, on whose behalf, with what delegated authority? Not a service name in a log line, but a signed, revocable credential that binds an action to an authorized identity.
- Provenance — What inputs did the agent use? Which model version, which context, which tools, which data sources? The 'bill of materials' for every decision.
- Action trail — What exactly did the agent do, in what order, with what results? Not the happy-path summary, but the full sequence of tool calls, intermediate reasoning, and state transitions.
- Integrity — Has the record been tampered with since the action occurred? This is where traditional databases fail: whoever administers them can rewrite history. A verifiable system makes tampering detectable by design.
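To make the integrity property concrete, here is a minimal sketch of a hash-chained attestation log — illustrative Python, not our production schema, with every field name hypothetical. Each record commits to the hash of its predecessor, so rewriting any earlier record breaks every hash that follows it:

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative only: a tamper-evident log whose fields map to the four
# questions — identity, provenance, action trail, integrity.
class AttestationLog:
    def __init__(self):
        self.records = []

    def append(self, agent_id: str, inputs: dict, action: str, result: str):
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {
            "agent_id": agent_id,                          # identity
            "inputs_digest": hashlib.sha256(               # provenance
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "action": action,                              # action trail
            "result": result,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev_hash,                        # integrity link
        }
        # Hash is computed over the body *before* the hash field is added.
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(body)

    def verify(self) -> bool:
        """Recompute every hash; any rewrite of an earlier record breaks the chain."""
        prev_hash = "0" * 64
        for rec in self.records:
            if rec["prev_hash"] != prev_hash:
                return False
            body = {k: v for k, v in rec.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev_hash = rec["hash"]
        return True
```

Anchor only the latest hash somewhere the database administrator cannot rewrite, and the entire history behind it becomes tamper-evident.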
None of these are new concepts. We've been building audit trails for banking, aviation, and clinical trials for decades. What is new is the combination: agents act faster and at a higher volume than humans; their reasoning is probabilistic; and they invoke tools across dozens of systems in seconds. The audit substrate has to keep up.
Why blockchain is the right substrate — and when it's the wrong one
Let me be direct: I do not believe blockchain is the answer to most problems. Fernando Boiero has written a candid comparison of when a database outperforms a ledger, and I agree with every word. But the verifiable-agent use case is exactly where blockchain's properties align with the problem.
A blockchain — permissioned or public — combines three properties that no centralized system delivers together: a shared source of truth that no single party can unilaterally rewrite, a cryptographic audit trail that third parties can verify without access to the underlying database, and a native mechanism for signed identity and delegation. For agent governance, those three properties are exactly what's missing.
In practice, you don't need to — and shouldn't — put raw agent conversations on-chain. That would be expensive, privacy-destroying, and technically unnecessary. What you put on-chain is a set of attestations: hashes of the inputs, outputs, and state transitions of each significant agent action, signed by the agent's identity key, anchored with a verifiable timestamp. The raw data stays in your secure storage. The proof of what happened, and when, and by whom, lives on a ledger anyone authorized can audit.
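In code, the split looks roughly like this — a sketch assuming the pyca cryptography library for Ed25519 signatures, with anchor_on_chain standing in for whatever ledger client you actually use, and every identifier hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key management inlined for brevity only; in practice the agent's key
# lives in an HSM or a secrets manager.
agent_key = Ed25519PrivateKey.generate()

raw_record = {  # stays in your private storage, never on-chain
    "agent_id": "agent://example/contract-reviewer",   # hypothetical ID
    "model_version": "model-2025-06",                  # hypothetical
    "tool_calls": [{"tool": "clause_extractor", "status": "ok"}],
    "output": "flagged clause 14.2 for indemnity risk",
}

digest = hashlib.sha256(
    json.dumps(raw_record, sort_keys=True).encode()
).hexdigest()

attestation = {  # this small record is all that goes on-chain
    "digest": digest,
    "signature": agent_key.sign(digest.encode()).hex(),
    "signer": agent_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    ).hex(),
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

def anchor_on_chain(att: dict) -> None:
    """Placeholder for your ledger client, permissioned or public."""
    print("anchored:", att["digest"][:16], "...")

anchor_on_chain(attestation)
```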
The regulatory picture is moving fast
From my seat as a lawyer and as the CEO of a company that works across Europe, LATAM, and the United States, I can tell you the regulatory arc is unmistakable. Three frameworks matter most right now.
The EU AI Act, now in force, requires that high-risk AI systems maintain logs enabling traceability throughout their lifecycle, including who used the system and what outputs it produced. For autonomous agents operating on regulated data, the spirit of the law is clear: you must be able to reconstruct decisions under audit. ISO/IEC 42001, the first management system standard for AI, formalizes the governance processes organizations must put in place — including documented identity, accountability, and change management for AI systems. NIST AI RMF, the de facto reference in North America, asks essentially the same questions through a risk-management lens.
In Latin America, we are seeing movement in Brazil with the AI Bill under discussion, in Argentina with sectoral guidance from the Central Bank and securities regulator, and in Mexico and Chile with frameworks that echo the European approach. The regional specifics vary. The core demand — that automated decisions be traceable, explainable, and auditable — does not.
Here is the part most leadership teams miss: regulators are not asking for a new compliance report. They are asking for a system property. A system that is built to be verifiable from day one produces compliance artifacts as a byproduct of operation. A system that is not has to generate those artifacts after the fact, often imperfectly, at recurring cost. The difference compounds.
How we approach this at Xcapit
Our position is simple: any agent we ship into production carries its accountability with it. That means three things by default.
First, every agent has a cryptographic identity issued at deployment. When an agent invokes a tool, calls another agent, or writes to a system of record, that action is signed by the agent's key and bound to the human or organization that authorized it. This is not theoretical — it's how our ArgenTor multi-agent framework handles delegation and why our security review platform AiSec can tell a client not just 'the finding exists' but 'this specific agent, with this specific model version, produced this finding at this timestamp, and here is the evidence chain.'
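Stripped to its essentials, the delegation pattern looks like this — a generic sketch, not ArgenTor's internal code, again assuming Ed25519 keys via the pyca cryptography library, with revocation and expiry checks omitted for brevity:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def raw(pub: Ed25519PublicKey) -> bytes:
    return pub.public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )

# 1. The authorizing organization issues a signed credential binding the
#    agent's public key to a scope and an expiry (labels hypothetical).
org_key = Ed25519PrivateKey.generate()
agent_key = Ed25519PrivateKey.generate()

credential = {
    "agent_pubkey": raw(agent_key.public_key()).hex(),
    "scope": ["contracts:review"],
    "expires": "2026-01-01T00:00:00Z",
}
cred_bytes = json.dumps(credential, sort_keys=True).encode()
cred_sig = org_key.sign(cred_bytes)

# 2. The agent signs each action with its own key.
action = b'{"action": "flag_clause", "target": "contract-42"}'
action_sig = agent_key.sign(action)

# 3. A verifier checks the whole chain: the org authorized this agent,
#    and this agent produced this action.
def verify_chain(org_pub: Ed25519PublicKey) -> bool:
    try:
        org_pub.verify(cred_sig, cred_bytes)       # credential is genuine
        Ed25519PublicKey.from_public_bytes(
            bytes.fromhex(credential["agent_pubkey"])
        ).verify(action_sig, action)               # action signed by that agent
        return True
    except InvalidSignature:
        return False

assert verify_chain(org_key.public_key())
```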
Second, every significant action generates a signed attestation — a small, structured record hashed and anchored on a verifiable timeline. For regulated clients, that timeline is a permissioned blockchain they co-govern. For others, it's a public anchor that preserves integrity without disclosing data. Either way, the audit story is the same: the raw data stays private, the proof is independently verifiable.
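The auditor's side of the same sketch: given the private raw record and the anchored attestation from the example above, verification requires nothing from us beyond public keys.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def audit(raw_record: dict, attestation: dict) -> bool:
    """Third-party check: needs no access to our database, and no trust in our word."""
    digest = hashlib.sha256(
        json.dumps(raw_record, sort_keys=True).encode()
    ).hexdigest()
    if digest != attestation["digest"]:      # record altered since anchoring
        return False
    signer = Ed25519PublicKey.from_public_bytes(
        bytes.fromhex(attestation["signer"])
    )
    try:
        signer.verify(bytes.fromhex(attestation["signature"]), digest.encode())
        return True                          # identity and integrity both hold
    except InvalidSignature:
        return False
```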
Third, and this is where the legal lens matters: we treat verifiability as contractual. If an enterprise is going to accept agent decisions inside a regulated process, the verifiability properties are written into the agreement, not promised in a marketing deck. The same discipline we bring to data processing addenda and security schedules applies to agent governance.
What leadership teams should be doing now
- Stop treating 'AI governance' as a policy document. It is a set of system properties that either exist in your stack or don't. A policy that references capabilities your technology doesn't have is worse than no policy — it is exposure.
- Require your vendors to answer the four questions. Before an agent touches a regulated workflow, ask the vendor to demonstrate identity, provenance, action trail, and integrity with third-party verifiable evidence. If they can't, you are inheriting their risk.
- Invest in verifiability as a platform, not per project. The organizations that win the next decade will have one accountability fabric under all their agents, not forty isolated logging schemes.
- Align your legal, risk, and technology teams early. Verifiable agents are a governance question as much as an engineering one. The companies getting this right have their general counsel, CISO, and CTO in the same conversation — not sequenced across quarters.
- Treat compliance as a product feature. A system built for auditability accelerates enterprise sales, shortens procurement, and turns regulatory change from a threat into a narrowing of the competitive field.
The companies adopting autonomous agents today are setting the accountability precedent for the decade. If your agents can answer the auditor tomorrow, you have a franchise. If they can't, you have a liability masquerading as a productivity story.
Fernando has written, in parallel, about the technical protocols that make multi-agent systems interoperable. Verifiable governance is the other half of that conversation — without it, interoperability scales the problem instead of solving it.
If your team is designing the accountability layer for AI agents and wants a partner that has shipped this in regulated environments, let's talk.
José Trajtenberg
CEO & Co-Founder
Lawyer and international business entrepreneur with over 15 years of experience. Distinguished speaker and strategic leader driving technology companies to global impact.