In 2023, "AI agent" meant a chatbot with memory. In 2026, it means a system that can open a browser, log into your accounts payable portal, verify an invoice, and initiate a transfer, all without a human reviewing the step. Agentic payments are no longer a research demo. They are in production at companies of every size.
The insurance market has not caught up. Most policies underwritten before 2025 treat AI as an input tool: a human makes the decision, AI assists. Once the agent makes the decision, you are in legal territory that existing coverage was not designed to address.
The three liability vectors in agentic payment flows
When an AI agent initiates or approves a financial transaction, liability can arise from at least three directions simultaneously, and they can compound.
1. Erroneous execution
The agent acts on a misread invoice, a hallucinated vendor detail, or a stale data source. A wire goes to the wrong account or the wrong amount. This is the most common scenario underwriters are seeing in early claims data. Standard cyber policies cover unauthorized access by third parties; they do not cover your own AI making an authorized but wrong payment.
2. Prompt injection and adversarial manipulation
An external actor embeds instructions in a document or email that the agent processes. The agent, following what it reads as legitimate instructions, initiates a fraudulent transfer. MITRE ATLAS catalogs this as a primary attack vector for agentic systems. Most social engineering clauses in existing policies require a human to be deceived. When the deceived party is an AI, coverage gaps are routine.
3. Downstream third-party liability
California AB 316 holds deployers directly liable for AI agent actions, including actions taken through downstream third-party integrations. If your agent uses a payment API connected to a partner system, and that chain produces a loss, liability can flow upstream to you even if the fault lies in the integration.
"The moment an AI agent can initiate an irreversible financial action, the risk profile of that deployment changes fundamentally. It is no longer a productivity tool; it is a principal in a financial transaction."
What standard policies exclude
Before placing agentic payment infrastructure in production, it is worth mapping exactly what your current policies do and do not cover. Based on policy language we have reviewed across major carriers:
- Cyber (E&O): Covers data breaches and unauthorized access. Excludes losses from authorized but erroneous AI actions in most post-2024 policy forms.
- Crime / Fidelity: Covers employee theft and some social engineering fraud. Does not extend to AI-initiated transactions in standard forms; some carriers are now adding explicit AI exclusions.
- General Liability: The Verisk/ISO CGL AI exclusion, effective January 2026, removes AI-related financial losses from this policy entirely.
- D&O: Covers directors and officers. Berkley's absolute AI exclusion strips coverage when AI is in the decision chain, which in an agentic payments context it almost always is.
What purpose-built coverage addresses
AI liability coverage underwritten specifically for agentic deployments treats the agent as the risk unit rather than an ancillary tool. The policy language covers:
- Erroneous autonomous transactions, including misdirected payments and incorrect amounts
- Third-party losses arising from AI-initiated actions in connected systems
- Defense costs in regulatory proceedings where AI decision-making is under scrutiny
- Business interruption from agent failure in payment-critical workflows
How to assess your exposure before you buy
The underwriting question for agentic payment deployments centers on a few dimensions that most standard intake processes do not capture: Can the agent initiate irreversible actions? What is the maximum single-transaction value it can approve? Is there a human-in-the-loop checkpoint before execution, or only after? What does your incident response process look like when an agent acts incorrectly?
These are the questions Quark's intake process is built around. They are also the questions a regulator will ask after an incident. Having structured answers documented before a claim is filed changes the trajectory of every subsequent conversation.
Deploying AI in payment or financial workflows?
Get an underwriting view on your agentic exposure in 72 hours.