The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 as the world's first horizontal AI law. Most coverage focuses on what it means for AI providers. This is what it means for you, the person on the receiving end of an AI decision.
The Phase-In Calendar (What's Already Live, What's Not)
The Act phases in over three years. As of April 2026, two big tranches are already in force:
- 2 February 2025 — Prohibited AI practices (Article 5) become enforceable. Some AI uses are banned outright in the EU.
- 2 August 2025 — Obligations for general-purpose AI models (GPT-style foundation models), governance bodies (the AI Office), and penalties start applying.
- 2 August 2026 — Most obligations for high-risk AI systems under Annex III (incl. Article 86 right to explanation) become enforceable.
- 2 August 2027 — Final tranche: high-risk AI embedded in regulated products under Annex I.
Prohibited Practices (Article 5) — Live Now
Article 5 bans certain AI uses entirely. If a service deploys one of these against you in the EU, it is not a question of "the AI got it wrong" — the deployment itself is unlawful. The bans include:
- Manipulative or deceptive subliminal techniques that materially distort behaviour and cause significant harm
- Exploiting vulnerabilities of age, disability, or socio-economic situation to materially distort behaviour
- Social scoring, whether by public authorities or private actors, leading to detrimental or unfavourable treatment in unrelated contexts
- Predicting the risk that an individual will commit a criminal offence based solely on profiling or assessment of personality traits
- Untargeted scraping of facial images from the internet or CCTV to build face-recognition databases
- Emotion recognition in workplaces and educational institutions (with narrow medical/safety exceptions)
- Biometric categorisation to infer race, political opinion, sexual orientation, religion, etc.
- Real-time remote biometric identification in public spaces for law enforcement (with narrow judicially-authorised exceptions)
High-Risk AI Systems (Annex III) — The Ones You'll Actually Meet
Annex III lists eight categories of high-risk AI. These are the systems most likely to make a decision about you:
- Biometric ID and categorisation systems (other than the ones banned)
- Critical infrastructure (energy, water, transport)
- Education and vocational training (admissions, grading, exam-cheating detection)
- Employment, worker management, and access to self-employment (CV screening, performance evaluation)
- Access to essential private and public services (credit scoring, public benefits eligibility, emergency dispatch triage, life and health insurance pricing)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
If an AI in one of these categories makes a decision that legally or significantly affects you, the deployer (the company using the AI) has to meet a list of obligations — and you get the rights below.
Article 86: The Right to a Clear Explanation (from 2 August 2026)
Article 86 is the consumer-facing core of the Act. From 2 August 2026, any person subject to a decision taken by a deployer on the basis of the output of a high-risk AI system listed in Annex III (with the narrow exception of point 2, critical infrastructure) has the right to "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken." AI Act Art.86
The decision must (a) produce legal effects, or (b) similarly significantly affect the person in a way they consider adverse to their health, safety, or fundamental rights. Two practical examples:
- A bank's AI declines your loan application — Article 86 right is engaged (Annex III, access to essential services)
- An employer's AI rejects your CV during automated screening — Article 86 right is engaged (Annex III, employment)
This sits alongside, and is broader than, GDPR Article 22, which covers only decisions based solely on automated processing. Article 86 also reaches AI-assisted decisions where a human formally signed off but the AI clearly drove the outcome.
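To make the trigger concrete, here is a minimal sketch of that test in Python. The category labels and the article_86_engaged function are this guide's own shorthand for illustration, not terms defined in the Act.

```python
# Illustrative sketch of the Article 86 trigger described above.
# Category labels and this function are shorthand, not terms from the Act.

ANNEX_III_CATEGORIES = {
    "biometrics",               # point 1
    "critical_infrastructure",  # point 2 (carved out of Article 86)
    "education",                # point 3
    "employment",               # point 4
    "essential_services",       # point 5: credit, benefits, insurance
    "law_enforcement",          # point 6
    "migration_asylum_border",  # point 7
    "justice_democracy",        # point 8
}

def article_86_engaged(category: str,
                       legal_effects: bool,
                       significant_adverse_effect: bool) -> bool:
    """A decision taken on the basis of an Annex III high-risk system
    (except point 2) engages Article 86 if it produces legal effects or
    similarly significantly affects the person."""
    if category not in ANNEX_III_CATEGORIES:
        return False
    if category == "critical_infrastructure":  # the point-2 exception
        return False
    return legal_effects or significant_adverse_effect

# The two examples above:
assert article_86_engaged("essential_services", legal_effects=True,
                          significant_adverse_effect=False)  # loan refusal
assert article_86_engaged("employment", legal_effects=False,
                          significant_adverse_effect=True)   # CV rejection
```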
Article 85: The Right to Lodge a Complaint
From 2 August 2026, any person can complain to the national market-surveillance authority of the EU member state where the AI is being deployed. AI Act Art.85 Each member state designates an authority; many will use existing regulators (data protection authorities, telecoms regulators, sector-specific bodies), and the European Commission publishes a central list of the designated contact points.
Penalties That Make This Real
Fines under the AI Act are tiered and substantial: AI Act Art.99
- Up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited-practice violations
- Up to €15 million or 3%, whichever is higher, for non-compliance with most other obligations, including the high-risk rules
- Up to €7.5 million or 1%, whichever is higher, for supplying incorrect, incomplete, or misleading information to authorities
SMEs and start-ups are treated more leniently: each fine is capped at the lower of the two figures (a worked example follows).
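To see how a tier combines the fixed amount with the turnover percentage, here is a small worked sketch. The tier figures come from Article 99; the function name and the turnover numbers are invented for illustration.

```python
# Worked example of the Article 99 fine ceilings described above.
# Tier figures are from the Act; turnover numbers are invented.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # Article 5 violations
    "other_obligations": (15_000_000, 0.03),     # incl. high-risk rules
    "incorrect_information": (7_500_000, 0.01),  # misleading authorities
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Fine ceiling: the higher of the fixed amount and the turnover
    percentage, or the lower of the two for SMEs and start-ups."""
    fixed, pct = TIERS[tier]
    pick = min if is_sme else max
    return pick(fixed, pct * annual_turnover_eur)

# A company with €2bn turnover using a prohibited practice:
# 7% of €2bn = €140m, which exceeds €35m, so €140m is the ceiling.
print(max_fine("prohibited_practice", 2_000_000_000))          # 140000000.0
# An SME with €50m turnover breaching high-risk obligations:
# 3% of €50m = €1.5m, lower than €15m, so the SME ceiling is €1.5m.
print(max_fine("other_obligations", 50_000_000, is_sme=True))  # 1500000.0
```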
What to Do Today (Before 2026 Obligations Kick In)
- If an AI made a significant decision about you, ask in writing whether the system is classified as high-risk under Annex III and what role it played (a template sketch follows this list).
- Combine it with a GDPR Article 15 subject access request (DSAR) for the underlying personal data and any profiling logic.
- Keep the response. If the company misclassifies the system or refuses to explain, the response itself becomes evidence for a complaint after August 2026.
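As a starting point for that written request, here is a minimal sketch that combines both asks in one letter. The wording is this guide's own template, not statutory language, and the names are placeholders; adapt it before sending.

```python
# A minimal request-letter generator. The template wording is this
# guide's own suggestion, not statutory language; adapt before sending.

REQUEST_TEMPLATE = """\
Dear {company},

On {date} I was subject to a decision concerning {decision} in which an
AI system appears to have played a role.

1. Please confirm in writing whether that system is classified as
   high-risk under Annex III of Regulation (EU) 2024/1689 (the AI Act),
   and describe the role it played in the decision-making procedure.
2. Under Article 15 GDPR, please provide access to the personal data
   you process about me, including any profiling and the logic involved.

I am keeping this correspondence on record.
"""

def draft_request(company: str, date: str, decision: str) -> str:
    return REQUEST_TEMPLATE.format(company=company, date=date, decision=decision)

print(draft_request("Example Bank", "12 March 2026", "my loan application"))
```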
Fix AI is built around the gap the AI Act is trying to close: AI systems making consequential decisions and refusing to explain why. Practice the explanation request before the law starts biting.
Practice AI Disputes Free →