The EU AI Act is the world’s first comprehensive AI regulation. It entered into force in August 2024, with key compliance deadlines phasing in through 2025 and 2026.
Categorises AI systems by risk level (Minimal, Limited, High-Risk, and Unacceptable, the last of which is prohibited outright)
Requires high-risk AI systems — like those used in healthcare, finance, or recruitment — to maintain detailed records, including decision logs, bias audits, and human oversight documentation
Imposes fines up to €35 million or 7% of global revenue for non-compliance
Mandates recordkeeping under Article 12, ensuring traceability and accountability for AI outputs
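Article 12’s logging requirement is easiest to picture in code. The sketch below is a minimal, hypothetical append-only decision log; the field names (`model_version`, `operator_id`, and so on) are illustrative assumptions, since the Act specifies what must be traceable, not a schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path, model_version, input_summary, output, operator_id):
    """Append one AI decision record to a JSON-lines audit log.

    Illustrative only: field names are assumptions, not the Act's schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,
        "output": output,
        "operator_id": operator_id,
    }
    # Hash the serialized record so later tampering is detectable.
    serialized = json.dumps(record, sort_keys=True)
    record["record_hash"] = hashlib.sha256(serialized.encode()).hexdigest()
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, sort_keys=True) + "\n")
    return record
```

An append-only, hash-chained file is one common way to support the traceability and accountability goals named above; real deployments would also need retention policies and access controls.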
The Act applies to providers and deployers of AI systems in the EU, as well as importers and distributors. Non-EU companies must also comply if their AI systems’ outputs are used within the EU.
With penalties scaling to 7% of global annual turnover, compliance is critical even for early-stage ventures.
General-purpose AI (GPAI) models are usually in scope. For example, training materials, decision logs, model documentation, prompts, and records of user consent are all likely covered.
Need some help? Ask our experts for free, no-obligation advice.
The Act classifies AI systems into four risk categories, each with different compliance obligations:
To determine your category:
Assess the purpose of your AI system (e.g., decision-making, surveillance, personalisation)
Evaluate the context in which it’s deployed (e.g., public vs. private, critical sectors)
Check for potential harm to users or fundamental rights
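The three assessment steps above can be sketched as a rough first-pass triage. The keyword lists and tier boundaries here are illustrative assumptions only; the real classification is a legal judgment against the Act’s annexes, not a mechanical lookup:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative examples only -- not the Act's actual legal tests.
PROHIBITED_PURPOSES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_CONTEXTS = {"healthcare", "finance", "recruitment", "law enforcement"}
TRANSPARENCY_PURPOSES = {"chatbot", "deepfake generation", "emotion recognition"}

def triage(purpose: str, context: str) -> RiskTier:
    """First-pass triage mirroring the three questions above:
    purpose, deployment context, then residual-harm default."""
    if purpose in PROHIBITED_PURPOSES:
        return RiskTier.PROHIBITED
    if context in HIGH_RISK_CONTEXTS:
        return RiskTier.HIGH
    if purpose in TRANSPARENCY_PURPOSES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For instance, `triage("personalisation", "recruitment")` lands in the high-risk tier because the deployment context, not the purpose, drives the classification.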