
EU AI Act Compliance: What AI Companies Need to Do Now

The EU AI Act entered into force in August 2024, and its obligations have been phasing in ever since. This guide covers which systems are in scope, what a conformity assessment requires, and how your risk documentation connects directly to your insurance program.

Updated: April 2026 · Read time: 12 min · Topics: EU AI Act · Regulation · Compliance

The EU AI Act is the first comprehensive legal framework for AI systems in any major jurisdiction. It entered into force on 1 August 2024, with obligations phasing in from February 2025 (prohibited practices) through August 2026 (most high-risk requirements). If your AI product is used by customers in the EU, you are in scope even if your company is based outside the EU.

Is your system in scope?

The Act applies to providers (companies that develop or place AI systems on the market), deployers (companies that use AI systems in a professional context), importers, and distributors. Scope is determined by where the output of the AI is used, not where the company developing it is based.

Key point

A US-based company selling an AI product to European enterprise customers is a "provider" under the Act and must comply with provider-level obligations, including conformity assessments for high-risk systems.

The four risk tiers

The Act classifies AI systems into four tiers. Your obligations — and the potential fines — depend entirely on which tier applies.

Tier | Examples | Key obligation
Unacceptable risk | Social scoring, real-time biometric surveillance in public | Prohibited outright
High risk | AI in hiring, credit scoring, medical devices, law enforcement tools | Conformity assessment, registration, ongoing monitoring
Limited risk | Chatbots, deepfake generators | Transparency obligations (disclose AI use)
Minimal risk | Spam filters, AI-enabled games | No mandatory obligations; voluntary codes of conduct

Key obligations for high-risk systems

If your system qualifies as high-risk, the compliance checklist is substantial. The Act does not prescribe specific technical methods; it specifies outcomes you must demonstrate.

  • Risk management system: A documented, ongoing process for identifying and mitigating risks throughout the system's lifecycle.
  • Data governance: Training, validation, and test datasets must be relevant, representative, and, to the extent possible, free of errors that could lead to discriminatory outputs.
  • Technical documentation: Detailed documentation of the system's design, development process, and intended purpose — sufficient for a third party to assess conformity.
  • Logging and record-keeping: Automatic logs of system operations, sufficient to enable post-incident review of decisions made by the system.
  • Transparency to deployers: Providers must give deployers clear instructions on use, risks, and monitoring requirements.
  • Human oversight: The system must be designed to allow natural persons to effectively oversee, intervene in, and override its operation.
  • Accuracy, robustness, and cybersecurity: Documented and tested performance against appropriate metrics.
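
Of the items above, logging is the most directly implementable. The Act requires logs sufficient for post-incident review but does not prescribe a format; the sketch below is our own illustration, using an append-only JSON Lines file and a hypothetical `log_decision` helper:

```python
import json
import datetime

def log_decision(path, model_version, inputs, output, operator=None):
    """Append one AI decision record to a JSON Lines log file.

    Illustrative only: the EU AI Act requires logs that enable
    post-incident review, but does not mandate this structure.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,           # what the system saw
        "output": output,           # what it decided
        "human_operator": operator  # who could intervene, if anyone
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: log one (hypothetical) credit-scoring decision
rec = log_decision("decisions.jsonl", "scorer-v2.1",
                   {"applicant_id": "A-1042"},
                   {"score": 612, "approved": False},
                   operator="analyst-7")
```

An append-only, timestamped format like this keeps each decision traceable to a model version and a responsible human, which is the substance of the record-keeping and human-oversight items above.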

Fine structure

The Act sets three tiers of fines, each calculated against global annual turnover, not EU revenue, and applied as the higher of the fixed amount or the percentage.

Violation type | Maximum fine
Prohibited AI practices (unacceptable risk systems) | €35M or 7% of global annual turnover
Non-compliance with high-risk system obligations | €15M or 3% of global annual turnover
Providing incorrect or misleading information to authorities | €7.5M or 1.5% of global annual turnover
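
Because each cap is the higher of the two figures, exposure scales with turnover once a company is large enough. (The Act caps fines for SMEs at the lower of the two instead.) The arithmetic, with figures from the table above and a function name of our own:

```python
def max_fine(turnover_eur, fixed_cap_eur, pct_cap):
    """Maximum EU AI Act fine: the higher of a fixed amount
    or a percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct_cap)

# Prohibited-practice tier: EUR 35M or 7% of global turnover
print(max_fine(200_000_000, 35_000_000, 0.07))    # fixed cap binds: 35000000
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # turnover cap binds: ~70 million
```

At €200M turnover the €35M floor dominates; at €1B, the 7% figure does. For most enterprise-scale providers, the percentage is the number that matters.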

What to document before a compliance review

Regulators have signaled that documentation quality will be the primary audit surface in the early enforcement period. Companies with structured, up-to-date technical documentation will have a very different experience from those reconstructing records after a request. We recommend having the following ready before any compliance review:

  1. A system card describing the AI's intended purpose, inputs, outputs, and deployment context
  2. Training data lineage and bias evaluation records
  3. Test results across accuracy, robustness, and fairness metrics
  4. Human oversight procedures, including what decisions are reserved for human review
  5. Incident response plan specific to AI failures
  6. Ongoing monitoring logs demonstrating continuous performance tracking
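
A documentation inventory like the one above can be tracked as data, so gaps surface before a regulator or insurer asks. A minimal sketch, with field names that are our own invention rather than anything the Act prescribes:

```python
# The six checklist items above, as machine-checkable artifact slots
REQUIRED_ARTIFACTS = [
    "system_card",           # purpose, inputs, outputs, deployment context
    "data_lineage",          # training data lineage and bias evaluations
    "test_results",          # accuracy, robustness, fairness metrics
    "oversight_procedures",  # decisions reserved for human review
    "incident_plan",         # AI-specific incident response
    "monitoring_logs",       # continuous performance tracking
]

def missing_artifacts(docs: dict) -> list:
    """Return checklist items with no documentation attached."""
    return [a for a in REQUIRED_ARTIFACTS if not docs.get(a)]

docs = {"system_card": "card-v3.pdf", "test_results": "eval-2026-03.json"}
print(missing_artifacts(docs))
# ['data_lineage', 'oversight_procedures', 'incident_plan', 'monitoring_logs']
```

Running a check like this on every release keeps the inventory current instead of leaving it to be reconstructed under deadline.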

How this connects to your insurance program

The documentation the EU AI Act requires is largely the same documentation that enables accurate AI liability underwriting. Companies with a documented risk management system, written human oversight procedures, and live monitoring logs get meaningfully better terms. Carriers want to see the same things regulators will look for: proof that you knew your system's risks and had controls in place.

Quark's monitoring platform generates GDPR and EU AI Act-aligned evidence reports as part of continuous scanning. The 32-page output maps directly to the technical documentation requirements above, so the same artifact that satisfies your insurer can be presented to a regulator.

Need EU AI Act-aligned documentation?

Quark's monitoring generates framework-mapped evidence your auditor, insurer, and regulator can act on.

Start an assessment