Section 01 · Why Now
What changes in August 2026 and why it matters
The EU AI Act has been phasing in since August 2024. August 2026 is when the remaining requirements — including all high-risk AI system obligations — become fully enforceable.
Quick answer
August 2026 is the deadline for full EU AI Act compliance. High-risk AI systems face strict documentation, human oversight, and audit requirements. Most enterprise LLM deployments fall under limited-risk transparency obligations, but any system used in hiring, credit, healthcare, or legal decisions is high risk.
The EU AI Act's phased rollout has given organizations time to prepare, but the August 2026 deadline is real. Enforcement is handled by national market surveillance authorities in EU member states and by the European AI Office for general-purpose AI models. Penalties are substantial: up to €35 million or 7% of global annual turnover for the most serious violations, up to €15 million or 3% for other violations.
The Act also has extraterritorial reach. If your AI system is placed on the EU market or its output affects people in the EU, it falls under the Act regardless of where your company is based. AI systems built by companies in Pakistan, the US, or anywhere else that process data about or affect EU users must comply.
Section 02 · Risk Classification
What tier does your system fall under?
| Risk tier | Examples | Key obligations |
|---|---|---|
| Unacceptable (prohibited) | Social scoring, real-time biometric surveillance, subliminal manipulation | Banned — cannot deploy in EU |
| High risk | Hiring tools, credit scoring, healthcare diagnosis, educational assessment, law enforcement | Full documentation, human oversight, accuracy requirements, audit trail, conformity assessment |
| Limited risk (GPAI) | General-purpose chatbots, coding assistants, document summarization | Transparency obligations, copyright disclosure, AI-generated content labeling |
| Minimal risk | Spam filters, simple recommendation systems, AI in games | No mandatory requirements |
General-purpose AI models — including GPT-5.4, Claude Sonnet 4.6, and Gemini 2.5 — fall under the GPAI (General Purpose AI) provisions, which require transparency and copyright disclosure but are less burdensome than high-risk requirements. If you build an application on top of a GPAI model and that application is used for a high-risk purpose, the high-risk obligations apply to your application.
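A quick way to operationalize this classification internally is to triage each planned use case against the tiers before development starts. The sketch below is an illustrative first-pass filter, not legal advice: the keyword list is a small subset of the Act's Annex III categories, and the prohibited tier is omitted for brevity.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative subset of the Act's Annex III high-risk areas, not the full legal list.
HIGH_RISK_AREAS = {
    "hiring", "recruitment", "credit scoring", "healthcare diagnosis",
    "educational assessment", "law enforcement",
}

def triage_use_case(description: str, built_on_gpai: bool = True) -> RiskTier:
    """First-pass triage of an internal AI use case; counsel makes the final call."""
    text = description.lower()
    if any(area in text for area in HIGH_RISK_AREAS):
        # High-risk obligations attach to the application even when it wraps a GPAI model.
        return RiskTier.HIGH
    if built_on_gpai:
        return RiskTier.LIMITED   # transparency and content-labeling obligations
    return RiskTier.MINIMAL

print(triage_use_case("resume screening assistant for hiring"))  # RiskTier.HIGH
```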
Section 03 · The Governance Checklist
Five governance layers every production LLM system needs
Acceptable use policy
A written policy that defines what the AI system is permitted to do, what use cases are prohibited, and what data it is allowed to process. This is the foundation. Without it, you cannot demonstrate to a regulator or auditor that you have thought about the system's risk surface. Document it before August 2026.
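One way to keep the policy from becoming shelfware is to maintain a machine-readable version alongside the written document and check each request against it. The sketch below assumes a hypothetical policy object with illustrative category names; the written policy remains the authoritative artifact.

```python
from dataclasses import dataclass

@dataclass
class AcceptableUsePolicy:
    """Machine-readable companion to the written acceptable use policy."""
    permitted_use_cases: set[str]
    prohibited_use_cases: set[str]
    permitted_data_categories: set[str]   # what data the system may process
    policy_version: str

    def allows(self, use_case: str, data_categories: set[str]) -> bool:
        if use_case in self.prohibited_use_cases:
            return False
        return (use_case in self.permitted_use_cases
                and data_categories <= self.permitted_data_categories)

policy = AcceptableUsePolicy(
    permitted_use_cases={"document summarization", "internal code review"},
    prohibited_use_cases={"candidate ranking", "credit decisions"},
    permitted_data_categories={"public docs", "internal wiki"},
    policy_version="2026-01",
)
assert not policy.allows("candidate ranking", {"public docs"})
```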
Data containment architecture
Production LLM systems should not pass personal data or sensitive information into the model context unless it is necessary for the task. Implement PII detection before context construction, data residency controls for EU-resident data, and clear documentation of what data the model sees and why. RAG architectures that retrieve documents selectively are easier to audit than systems that pass entire databases into the context.
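A minimal sketch of the PII-detection step before context construction. The two regex patterns are placeholders; a production system would use a dedicated PII detection library or service with far broader coverage.

```python
import re

# Illustrative patterns only; real deployments need a dedicated PII detection
# library or service covering names, addresses, IDs, and more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Redact recognizable PII and report which categories were found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, found

def build_context(retrieved_docs: list[str]) -> str:
    """Redact each retrieved document before it enters the model context."""
    clean_docs = []
    for doc in retrieved_docs:
        clean, found = redact_pii(doc)
        if found:
            # Record what was removed so the data flow stays documented and auditable.
            print(f"redacted {found} from retrieved document")
        clean_docs.append(clean)
    return "\n\n".join(clean_docs)
```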
Human review checkpoints for high-stakes decisions
For any AI-assisted decision that affects a person's life — hiring, credit, healthcare, education — implement a human review step before the decision is executed. The review does not need to be exhaustive, but it must be documented, logged, and genuinely able to override the AI recommendation.
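A minimal sketch of such a checkpoint, assuming a hypothetical downstream system that carries out the decision. The point is the ordering: the review record is persisted before anything executes, and the human's decision is what gets executed.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    subject_id: str        # person the decision affects
    ai_decision: str       # what the model recommended
    ai_rationale: str
    reviewer: str
    reviewer_outcome: str  # "approved", "overridden", or "rejected"
    final_decision: str    # the human decision, which is what gets executed
    reviewed_at: str

def execute_decision(decision: str, subject_id: str) -> None:
    """Stand-in for whatever downstream system carries out the decision."""
    print(f"executing '{decision}' for {subject_id}")

def decide_with_human_review(record: ReviewRecord,
                             log_path: str = "review_log.jsonl") -> None:
    # Persist the review record first so the checkpoint can never be skipped silently.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    execute_decision(record.final_decision, record.subject_id)
```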
Incident logging
Every incident where the AI system produces an output that is wrong, harmful, or unexpected must be logged with the date, the input, the output, the context, and the resolution. This log is required by ISO 42001 and is the primary evidence base for EU AI Act compliance audits. Start logging now, before August 2026.
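A minimal sketch of an append-only incident log capturing those fields. JSON Lines is one convenient format, not something the Act or ISO 42001 mandates.

```python
import json
from datetime import datetime, timezone

def log_incident(log_path: str, *, model_input: str, model_output: str,
                 context: str, severity: str, resolution: str = "open") -> None:
    """Append one incident record: date, input, output, context, and resolution."""
    record = {
        "date": datetime.now(timezone.utc).isoformat(),
        "input": model_input,
        "output": model_output,
        "context": context,        # which feature or workflow produced the output
        "severity": severity,      # e.g. "harmful", "incorrect", "unexpected"
        "resolution": resolution,  # updated later when the incident is closed
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_incident(
    "incident_log.jsonl",
    model_input="Summarize this contract",
    model_output="Summary cited a clause that does not exist in the document",
    context="contract-review assistant",
    severity="incorrect",
)
```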
Audit trail for AI actions
For agentic systems that take actions — sending messages, updating records, triggering workflows — every action must be logged with the agent's reasoning trace, the tool called, the parameters passed, and the human approvals (if required). This is the control that separates a governable agent from an ungovernable one.
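A minimal sketch of an audited tool call for an agentic system, assuming a simple in-process tool registry. Field names are illustrative; what matters is that the reasoning trace, parameters, approval, and result are captured for every action.

```python
import json
from datetime import datetime, timezone
from typing import Any, Callable, Optional

def send_message(recipient: str, body: str) -> str:
    """Example tool, standing in for a real side-effecting action."""
    return f"sent to {recipient}"

TOOLS: dict[str, Callable[..., Any]] = {"send_message": send_message}

def call_tool_with_audit(tool_name: str, params: dict[str, Any],
                         reasoning_trace: str, approved_by: Optional[str],
                         log_path: str = "agent_audit.jsonl") -> Any:
    """Execute a tool call and record reasoning, parameters, approvals, and result."""
    result = TOOLS[tool_name](**params)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "params": params,
        "reasoning_trace": reasoning_trace,   # the agent's stated justification
        "approved_by": approved_by,           # None if no approval was required
        "result": str(result),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return result
```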
Section 04 · ISO 42001
ISO 42001: the management system that makes governance auditable
ISO 42001, published in late 2023, is the first international standard specifically for AI management systems. It provides a structured framework for how organizations build, deploy, and govern AI systems responsibly. Major auditors — BSI, DNV, TÜV — now certify against it.
The practical relationship between ISO 42001 and the EU AI Act: ISO 42001 gives you the documentation and process framework that the EU AI Act's compliance requirements assume exists. An organization with ISO 42001 certification is well-positioned for EU AI Act audits because the underlying governance infrastructure is already in place.
| EU AI Act obligation | ISO 42001 clause | Priority |
|---|---|---|
| Risk management system | 6.1 — Risk assessment | High |
| Technical documentation | 9.1 — Monitoring, measurement, analysis | High |
| Human oversight mechanism | 8.4 — AI system design controls | High |
| Accuracy and robustness testing | 9.1 — Performance evaluation | Medium |
| Data governance | 8.2 — AI system data management | High |
| Incident logging | 10.1 — Nonconformity and corrective action | High |
FAQ
Frequently asked questions
When does EU AI Act full enforcement begin?
August 2, 2026. This is when the remaining requirements take effect, including all high-risk AI system obligations. The Act has been phasing in since August 2024: prohibited AI systems were banned from February 2025, GPAI model obligations applied from August 2025, and the full high-risk regime applies from August 2026.
Does the EU AI Act apply to companies outside the EU?
Yes. The EU AI Act has extraterritorial reach: it applies to any AI system that is deployed in the EU or affects EU residents, regardless of where the provider is based. A company in Pakistan, the US, or anywhere else that operates AI systems affecting EU users must comply.
What is ISO 42001 and how does it relate to the EU AI Act?
ISO 42001 is the international AI management system standard published in 2023. It provides the documentation and process framework that EU AI Act compliance requires. An organization with ISO 42001 certification has the governance infrastructure — risk registers, incident logs, human oversight procedures — that EU AI Act audits look for.
What is the minimum I need to implement before August 2026?
An acceptable use policy, an incident log, and documentation of who is responsible for AI governance decisions. These three items are required for all non-minimal-risk systems, take days to implement, and are the first things a regulator asks for. Start there, then layer in the technical controls.
What are the penalties for EU AI Act non-compliance?
Up to €35 million or 7% of global annual turnover for violations involving prohibited AI practices or GPAI model obligations. Up to €15 million or 3% of turnover for other violations. Up to €7.5 million or 1.5% of turnover for providing incorrect information to supervisory authorities.