Chapter 6 of 10
Module 6 – The AI Act: Risk-Based Regulation in Practice
Explains the EU’s Artificial Intelligence Act as a flagship example of a new legislative framework for a technology area, including its risk-based structure and phased application.
1. Why the AI Act Matters (and How It Fits Your Existing Knowledge)
In this module, you connect what you already know about EU product and digital regulation to a concrete, cutting‑edge case: the EU Artificial Intelligence Act.
Where we are coming from:
- Module 4 (NLF): You learned how the New Legislative Framework (NLF) provides a horizontal toolbox (conformity assessment, CE marking, notified bodies, market surveillance) reused across many product laws.
- Module 5 (Digital Rulebook): You saw how the Digital Services Act (DSA) and Digital Markets Act (DMA) regulate online platforms and gatekeepers as part of the EU digital strategy.
Where we are now:
- The AI Act – formally Regulation (EU) 2024/1689 – is the EU’s flagship law on artificial intelligence.
- It is a horizontal, risk‑based regulation that cuts across sectors (health, transport, education, law enforcement, etc.).
- It entered into force in August 2024, and its obligations apply in phases through 2027.
By the end of this 15‑minute module, you should be able to:
- Describe the AI Act’s risk‑based structure and the four main risk categories.
- Explain how the AI Act uses phased implementation and how this compares to other complex EU laws.
Keep in mind: the AI Act is not just about ChatGPT‑like systems. It covers a wide variety of AI uses, from medical diagnostics to CV‑screening tools, and it interacts with both the NLF toolbox and the digital rulebook (DSA/DMA).
2. Scope and Objectives of the AI Act
The AI Act is Regulation (EU) 2024/1689, adopted in 2024 and in force since 1 August 2024. As of late 2025, only some of its obligations apply; the rest are still being phased in.
2.1 What does the AI Act try to achieve?
The AI Act has three core objectives:
- Ensure safety and fundamental rights are protected when AI is used in the EU.
- Foster trustworthy AI innovation and uptake in the EU single market.
- Create a common EU rulebook for AI, avoiding 27 different national regimes.
2.2 Who and what is covered? (Scope)
The AI Act applies to:
- Providers of AI systems (developers who place AI systems on the EU market or put them into service).
- Deployers (users) of AI systems in the EU, especially organizations (e.g., banks, hospitals, public authorities).
- Importers and distributors of AI systems.
- Some rules also touch general‑purpose AI models (GPAI), especially systemic risk models.
It covers AI systems used:
- In the EU market, regardless of where the provider is established.
- Outside the EU, if the AI system’s output is used in the EU (similar extraterritorial logic as the GDPR and DSA).
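To see how this territorial logic works, here is a tiny Python sketch of the scope test just described. The function name and boolean inputs are our own simplification; the actual scope rules (Article 2) contain further conditions and exclusions.

```python
def ai_act_in_scope(placed_on_eu_market: bool, output_used_in_eu: bool) -> bool:
    """Simplified territorial-scope check mirroring the two bullets above.
    Illustrative only: Article 2 of the AI Act contains further conditions
    and exclusions (e.g., military uses, pre-market research)."""
    return placed_on_eu_market or output_used_in_eu

# A non-EU provider whose system's output is used in the EU is still caught:
print(ai_act_in_scope(placed_on_eu_market=False, output_used_in_eu=True))  # True
```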
> Historical note: Before the AI Act, AI issues were handled via sector‑specific laws (e.g., medical devices, product safety) and general rules like GDPR. The AI Act does not replace those, but layers a horizontal set of AI‑specific rules on top, often aligned with the NLF toolbox you saw in Module 4.
3. The Core Idea: A Risk‑Based Approach to AI
The AI Act is built around a risk‑based regulatory model – the higher the risk, the stricter the rules.
It distinguishes four main categories of AI systems:
- Unacceptable risk – prohibited uses.
- High‑risk – allowed but under strict requirements.
- Limited risk – lighter rules, mainly transparency obligations.
- Minimal (or low) risk – largely unregulated by the AI Act (but still subject to other laws like GDPR, consumer law, etc.).
You can visualize it as a pyramid:
- Top (smallest): Unacceptable risk – banned.
- Below: High‑risk – heavily regulated.
- Next: Limited risk – some transparency.
- Base (largest): Minimal risk – no specific AI Act duties.
This is similar in spirit to risk‑based product safety under the NLF, but with a stronger focus on fundamental rights and context of use (e.g., AI in law enforcement vs AI in a game).
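If it helps to see the pyramid as data, here is a minimal Python sketch of the four tiers and the kind of duty attached to each. The `RiskTier` enum, its ordering, and the duty labels are our own illustration, not terminology from the Act itself.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Hypothetical encoding of the AI Act's four risk tiers.
    Higher value = stricter obligations (this ordering is our own
    convention, not something the Act defines)."""
    MINIMAL = 0       # no AI-Act-specific duties
    LIMITED = 1       # transparency obligations
    HIGH = 2          # strict requirements and conformity assessment
    UNACCEPTABLE = 3  # prohibited practices

# Simplified duty labels per tier (paraphrased, not legal text).
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["inform users AI is in use", "label synthetic content"],
    RiskTier.HIGH: ["risk management", "data governance", "human oversight",
                    "technical documentation", "conformity assessment"],
    RiskTier.UNACCEPTABLE: ["do not place on the EU market or use"],
}

print(OBLIGATIONS[RiskTier.HIGH])
```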
4. Unacceptable vs High‑Risk AI: Concrete Examples
To make the abstract categories more tangible, let’s look at concrete examples.
4.1 Unacceptable Risk AI (Prohibited)
These are AI practices considered incompatible with EU values and fundamental rights. They are generally banned.
Examples (simplified):
- Social scoring by public authorities:
- An AI system that aggregates data from multiple sources to rate citizens (e.g., financial behaviour, social media activity) and then uses that score to decide access to services or benefits.
- Manipulative AI exploiting vulnerabilities:
- AI toys using voice assistants that covertly encourage children to take dangerous actions.
- Real‑time remote biometric identification in public spaces for law enforcement, with only very narrow exceptions (e.g., searching for victims of specific crimes, preventing an imminent terrorist threat) under strict conditions.
Such systems may not be placed on the EU market or used, except under the narrow exceptions defined in the Act.
4.2 High‑Risk AI (Heavily Regulated but Allowed)
High‑risk AI systems are permitted, but only if they comply with strict requirements. These are usually AI systems that:
- Are used in products covered by EU product safety laws (e.g., machinery, medical devices), or
- Are used in specific sensitive areas listed in the Act (Annex III), such as:
- Biometric identification and categorisation (non‑prohibited uses).
- Education and vocational training (e.g., AI that grades exams or allocates spots).
- Employment and workers’ management (e.g., CV‑screening tools, AI scheduling systems).
- Access to essential services (e.g., credit scoring systems, welfare eligibility).
- Law enforcement, migration, border control, administration of justice.
High‑risk example scenario:
- A bank uses an AI system to decide whether to grant consumer loans.
- This affects access to an essential service (credit).
- The AI system would likely be classified as high‑risk.
- The provider must meet requirements on risk management, data quality, documentation, human oversight, and more.
When you see a use case, ask: Does this significantly affect people’s rights, access to key services, or safety? If yes, it probably falls into high‑risk or unacceptable categories.
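That question can be turned into a toy triage function, sketched below in Python. The parameters and the decision order are hypothetical; real classification requires checking the prohibitions (Article 5) and the high-risk lists (Annex I and Annex III), usually with legal advice.

```python
def triage_use_case(affects_rights: bool, essential_service: bool,
                    safety_component: bool, prohibited_practice: bool) -> str:
    """Toy first-pass triage mirroring the questions in the text.
    Illustrative only: real classification means checking the prohibitions
    (Article 5) and the high-risk lists (Annexes I and III)."""
    if prohibited_practice:
        return "unacceptable: prohibited practice"
    if safety_component or essential_service or affects_rights:
        return "likely high-risk: verify against Annex I / Annex III"
    return "likely limited or minimal risk: check transparency duties"

# The bank credit-scoring scenario from above:
print(triage_use_case(affects_rights=True, essential_service=True,
                      safety_component=False, prohibited_practice=False))
```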
5. Limited‑Risk and Minimal‑Risk AI (Transparency vs No Specific Rules)
Not all AI is scary or high‑stakes. The AI Act explicitly leaves a large space for low‑risk innovation.
5.1 Limited‑Risk AI: Transparency Duties
These systems are not high‑risk, but people should know when they are interacting with AI or when AI is being used in a particular way.
Typical transparency obligations include:
- Informing people when they interact with an AI chatbot rather than a human.
- Labeling deepfakes (synthetic audio, image, or video content) as artificially generated or manipulated, except in certain legitimate contexts (e.g., law enforcement operations, satire with clear context).
- Disclosing that emotion recognition or biometric categorisation is being used in certain contexts.
Example:
- A customer support chatbot on an e‑commerce site:
- Must clearly inform users that they are interacting with an AI system, not a human.
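As a small illustration, here is one way a chatbot front end might implement that disclosure in Python. The wording of the notice and the `greet` helper are hypothetical product choices; the Act requires the disclosure itself, not any particular phrasing.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "You can ask for a human agent at any time."
)

def greet(user_name: str) -> str:
    """Prepend the AI disclosure to the first message of a session.
    Exact wording and placement are product choices; the point is that
    the AI nature of the interaction is made clear to the user."""
    return f"{AI_DISCLOSURE}\nHello {user_name}, how can I help you today?"

print(greet("Alex"))
```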
5.2 Minimal‑Risk AI: No AI‑Act‑Specific Obligations
These are the vast majority of AI applications, such as:
- AI‑powered video game NPCs.
- AI used for spam filters.
- Photo enhancement filters in consumer apps.
For these, the AI Act does not impose specific obligations. But:
- Other laws still apply (e.g., GDPR, consumer protection, copyright).
- Providers are encouraged to follow voluntary codes of conduct or good practices.
This layered approach is intended to avoid over‑regulation of everyday or experimental AI, while focusing legal effort where risks are highest.
6. Classify the Risk: Short Thought Exercise
Try to classify each scenario into one of the four categories: Unacceptable, High‑Risk, Limited‑Risk, Minimal‑Risk.
Write down your answers before checking the suggested classification at the end.
Scenario A
A city government uses AI‑based facial recognition in real time on CCTV feeds in public spaces to identify everyone in a crowd and track their movements for general crime prevention.
Your classification: `...`
---
Scenario B
A university uses an AI tool to automatically grade entrance exams and rank applicants, with minimal human review.
Your classification: `...`
---
Scenario C
An e‑commerce website uses a recommender system to suggest products based on browsing history.
Your classification: `...`
---
Scenario D
A mobile game uses AI to control non‑player characters (NPCs) that adapt to the player’s style.
Your classification: `...`
---
Suggested classifications (compare with yours):
- Scenario A: Unacceptable or very strictly limited – real‑time remote biometric identification in public spaces for law enforcement is generally prohibited, with only narrow exceptions.
- Scenario B: High‑Risk – AI in education and vocational training that affects access to education is typically high‑risk.
- Scenario C: Usually Minimal‑Risk under the AI Act (though other laws like consumer protection and GDPR still apply). It might be Limited‑Risk if it involves certain profiling scenarios requiring transparency.
- Scenario D: Minimal‑Risk – typical entertainment use with limited impact on rights or safety.
7. Timeline: Entry into Force and Phased Application
Complex EU regulations like the AI Act rarely apply fully from day one. Instead, they use staggered timelines so that:
- Institutions can set up governance structures.
- Industry can adapt and build compliance processes.
7.1 Key Milestones
- 1 August 2024 – AI Act entered into force (20 days after publication in the Official Journal). From this date, institutions and companies knew the final text and could start preparing.
- 2 February 2025 – Prohibitions on unacceptable‑risk practices (together with initial provisions such as AI literacy) began to apply; Member States and EU bodies started setting up governance structures.
- 2 August 2025 – Rules for general‑purpose AI models (GPAI), including models posing systemic risk, began to apply, along with key governance and penalty provisions.
- 2 August 2026 – Most remaining obligations apply, including transparency duties (e.g., chatbots, deepfake labeling) and the core requirements for high‑risk AI listed in Annex III (risk management, data governance, human oversight, CE marking, etc.).
- 2 August 2027 – High‑risk rules extend to AI embedded in products covered by existing EU product legislation (Annex I), similar in style to other NLF‑aligned product rules.
> Exact dates for specific provisions are detailed in the final Articles and transitional provisions of the AI Act. Different obligations kick in at different times, but as of late 2025, stakeholders are already under pressure to prepare for the high‑risk AI compliance wave.
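For compliance planning, these staggered dates are easy to keep in a small lookup, sketched here in Python. The milestone labels paraphrase the list above; always verify specific provisions against the Act's transitional articles.

```python
from datetime import date

# Simplified applicability milestones from the timeline above
# (verify specific provisions against the Act's transitional articles).
MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions on unacceptable-risk practices apply",
    date(2025, 8, 2): "GPAI model rules apply",
    date(2026, 8, 2): "transparency duties and Annex III high-risk rules apply",
    date(2027, 8, 2): "high-risk rules for AI in regulated products (Annex I) apply",
}

def applicable_on(day: date) -> list[str]:
    """Return the milestones already applicable on a given day."""
    return [label for start, label in sorted(MILESTONES.items()) if start <= day]

print(applicable_on(date(2025, 12, 1)))  # what applies in late 2025?
```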
7.2 Why the Phased Approach Matters
- It mirrors what you saw in other large EU acts (e.g., DSA, GDPR): entry into force first, application later.
- It gives:
- Regulators time to set up notified bodies, market surveillance authorities, and the new EU AI Office.
- Providers and deployers time to perform risk classification, gap analysis, and system redesign.
Understanding this pattern helps you read future EU tech regulations: don’t just ask what the rules are; always ask when each rule applies.
8. Quick Check: Entry into Force vs Application
Test your understanding of the AI Act’s timeline and how it mirrors other EU regulations.
Which statement best describes the AI Act’s timeline?
- A) All obligations, including for high‑risk AI, started applying immediately in August 2024 when the Act entered into force.
- B) The AI Act entered into force in August 2024, but different sets of obligations (e.g., bans, transparency, high‑risk rules) apply in stages over several years.
- C) The AI Act will only start to apply after all Member States pass national implementing laws.
Answer: B) The AI Act entered into force in August 2024, but different sets of obligations (e.g., bans, transparency, high‑risk rules) apply in stages over several years.
Regulations like the AI Act are directly applicable EU law, so they do not require national transposition. The AI Act entered into force in August 2024, but its obligations are phased in: some (like bans on unacceptable AI practices) apply earlier, while others (especially detailed high‑risk AI requirements) apply later, up to around 2026–2027.
9. Applying the Risk‑Based Approach: A Mini Case Study
Imagine you are working for a health‑tech startup that develops an AI system to support radiologists in detecting tumours in medical images.
Step 1 – Identify the AI system
- Your product uses machine learning to analyse MRI scans and highlight suspicious areas.
Step 2 – Determine the risk category
- It is used in healthcare and affects patient diagnosis and safety.
- It is likely integrated into a medical device.
- Under the AI Act, this typically falls under high‑risk AI (AI used as a safety component of medical devices).
Step 3 – Understand key obligations (high‑risk)
As a provider of a high‑risk AI system, you must ensure, among other things:
- A risk management system across the lifecycle.
- High‑quality, relevant and representative training data.
- Technical documentation and record‑keeping.
- Transparency and information to deployers (hospitals, clinics).
- Human oversight measures (radiologists remain in the loop).
- Accuracy, robustness, and cybersecurity.
- Conformity assessment (often involving a notified body) and, where relevant, CE marking.
Step 4 – Check interaction with other laws
- The AI Act layers on top of existing frameworks like the Medical Devices Regulation (MDR).
- Your compliance strategy must integrate both:
- MDR requirements (clinical evaluation, post‑market surveillance, etc.).
- AI Act high‑risk requirements (data governance, human oversight, etc.).
This is exactly where your knowledge from Module 4 (NLF) becomes practical: the AI Act reuses the conformity assessment and CE‑marking logic of the NLF, but tailors it to AI‑specific risks.
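To make this dual‑framework work tangible, here is a hypothetical gap‑analysis checklist in Python combining AI Act and MDR items from the case study. The item names paraphrase the obligations above and are illustrative, not exhaustive.

```python
from dataclasses import dataclass

@dataclass
class ComplianceItem:
    requirement: str
    framework: str  # "AI Act" or "MDR" in this sketch
    done: bool = False

# Illustrative gap-analysis list for the radiology-AI scenario;
# item names paraphrase the obligations discussed above.
CHECKLIST = [
    ComplianceItem("risk management system across lifecycle", "AI Act"),
    ComplianceItem("training data quality and representativeness", "AI Act"),
    ComplianceItem("human oversight (radiologist in the loop)", "AI Act"),
    ComplianceItem("technical documentation and record-keeping", "AI Act"),
    ComplianceItem("conformity assessment and CE marking", "AI Act"),
    ComplianceItem("clinical evaluation", "MDR"),
    ComplianceItem("post-market surveillance plan", "MDR"),
]

def open_items(items: list[ComplianceItem]) -> list[str]:
    """List requirements not yet marked done, prefixed by framework."""
    return [f"[{i.framework}] {i.requirement}" for i in items if not i.done]

for line in open_items(CHECKLIST):
    print(line)
```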
10. Flashcard Review: Key AI Act Concepts
Flip through these flashcards to reinforce the main concepts from this module.
- AI Act (Regulation (EU) 2024/1689)
- The EU’s horizontal regulation on artificial intelligence, in force since August 2024, using a risk‑based approach with four main risk categories and phased application of obligations.
- Unacceptable‑Risk AI
- AI practices that are prohibited under the AI Act because they are considered incompatible with EU values and fundamental rights (e.g., social scoring by public authorities, most real‑time remote biometric identification in public spaces for law enforcement).
- High‑Risk AI
- AI systems that significantly affect safety or fundamental rights (e.g., AI in medical devices, employment, education, essential services, law enforcement). They are allowed but subject to strict requirements (risk management, data quality, human oversight, etc.).
- Limited‑Risk AI
- AI systems that mainly trigger transparency obligations (e.g., chatbots, deepfakes, some emotion recognition). Users must be informed that AI is being used or that content is artificially generated.
- Minimal‑Risk AI
- AI systems with low impact on safety or fundamental rights (e.g., game NPCs, spam filters). The AI Act does not impose specific obligations on them, though other laws still apply.
- Entry into Force vs Application
- Entry into force is when a regulation becomes legally valid (for the AI Act, August 2024). Application refers to when specific obligations actually start to bind stakeholders, which can be staggered over several years.
- Provider vs Deployer
- A provider is the developer or entity that places an AI system on the market or puts it into service under its name. A deployer is the user (often an organisation) that uses the AI system in practice (e.g., a bank using a credit‑scoring AI).
- General‑Purpose AI (GPAI)
- AI models that can be used for a wide range of tasks (e.g., large language models). The AI Act introduces specific rules for GPAI models, especially those posing systemic risks, with phased application of obligations.
11. Connect the Dots: AI Act, NLF, and the Digital Rulebook
To consolidate your understanding, take a moment to map connections between this module and the previous ones.
Task
On a blank sheet (or a notes app), draw three circles labeled:
- NLF (Module 4)
- Digital Rulebook: DSA/DMA (Module 5)
- AI Act (Module 6)
For each connection, jot down at least one sentence:
- NLF ↔ AI Act
- How does the AI Act reuse or align with NLF concepts (e.g., conformity assessment, CE marking, notified bodies)?
- Digital Rulebook ↔ AI Act
- How might the AI Act interact with the DSA/DMA in practice (e.g., AI recommender systems on very large online platforms, content moderation tools)?
- NLF ↔ Digital Rulebook
- How do product‑safety‑style rules (NLF) differ from platform‑oriented rules (DSA/DMA), and where does the AI Act sit between them?
Reflection prompts
- In your view, why did the EU choose a risk‑based structure for AI instead of a one‑size‑fits‑all set of obligations?
- Which risk category do you think will be most challenging for regulators and companies to manage in practice, and why?
Use this as a mini‑summary exercise. If you can explain these links in your own words, you have a solid grasp of how the AI Act fits into the broader EU regulatory ecosystem.
Key Terms
- Deployer
- The person or organisation that uses an AI system in the course of its activities, often the customer or operator of the system.
- Provider
- The natural or legal person who develops an AI system or has it developed and places it on the market or puts it into service under their name or trademark.
- High‑Risk AI
- AI systems that can significantly affect individuals’ safety or fundamental rights (e.g., in healthcare, employment, education, essential services, law enforcement), allowed only under strict regulatory conditions.
- Entry into Force
- The date on which a regulation becomes legally valid and part of EU law (for the AI Act, August 2024).
- Limited‑Risk AI
- AI systems subject mainly to transparency obligations, requiring that users are informed about AI involvement (e.g., chatbots, deepfakes).
- Minimal‑Risk AI
- AI systems with low impact on safety or rights; they face no specific obligations under the AI Act, though other legal frameworks still apply.
- Risk‑Based Approach
- A regulatory method where obligations depend on the level of risk an activity or system poses; in the AI Act, higher‑risk AI systems face stricter requirements.
- Unacceptable‑Risk AI
- AI practices that are prohibited under the AI Act because they seriously violate EU values or fundamental rights, such as social scoring by public authorities.
- Digital Markets Act (DMA)
- Regulation (EU) 2022/1925, part of the EU’s digital rulebook, imposing obligations on large online platforms designated as gatekeepers to ensure fair competition in digital markets.
- Digital Services Act (DSA)
- Regulation (EU) 2022/2065, part of the EU’s digital rulebook, setting obligations for online intermediaries and platforms regarding content, transparency, and systemic risk management.
- General‑Purpose AI (GPAI)
- AI models designed for a wide range of tasks and uses, including large language models; the AI Act introduces specific rules for such models, especially when they pose systemic risks.
- Application (of a Regulation)
- The date or dates from which specific provisions of a regulation start to impose obligations on stakeholders; often phased over time in complex regulations like the AI Act.
- New Legislative Framework (NLF)
- The EU’s horizontal framework for product regulation, providing common tools such as conformity assessment, CE marking, and market surveillance, reused in many sector‑specific laws and partially mirrored in the AI Act.
- AI Act (Regulation (EU) 2024/1689)
- The European Union’s comprehensive regulation on artificial intelligence, in force since August 2024, establishing a risk‑based framework for AI systems and general‑purpose AI models.