Chapter 8 of 12
Myth 7: “AI Is the Wild West—There Are No Rules”
Examine the fast‑evolving landscape of AI regulation and governance to counter the belief that AI is completely unregulated or lawless.
When people say “AI is the Wild West”, they usually mean:
- There are no laws about AI.
- Companies can do whatever they want.
- Governments are too slow to matter.
As of March 2026, this is not true.
What is true:
- AI is already covered by many existing laws (privacy, discrimination, consumer protection, safety, IP, etc.).
- New, AI‑specific rules are being passed in different countries and states.
- The real problem is not “no rules” but a regulatory maze that is complex and still changing.
In this 15‑minute module you will:
- See how AI is already regulated in practice.
- Learn about recent AI laws and policies in the US and worldwide.
- Practice telling the difference between “no regulation” and “evolving / inconsistent regulation.”
Step 1 – Start with a Simple Idea: AI Is Not Above the Law
AI systems are tools used by people and organizations. Laws usually apply to:
- What people do with AI (e.g., use it to discriminate in hiring).
- How companies build and deploy AI (e.g., whether they protect user data).
So even if a country has no special “AI Act”, AI is already covered by:
- General laws: privacy, contracts, safety, human rights.
- Sector laws: health care, finance, education, transportation.
> Visualize this: Imagine a robot standing in the middle of overlapping circles labeled Privacy Law, Consumer Protection, Anti‑Discrimination, IP, Safety. AI sits inside these circles, not outside them.
Step 2 – Existing Laws That Already Hit AI (Concrete Examples)
Here are four big areas where old laws already shape how AI can be used:
1. Privacy and Data Protection
- EU/EEA & UK: The GDPR (applicable since 2018) and the UK GDPR limit how AI systems can use personal data.
- Example: A company training a face recognition model on people’s photos without consent can face huge fines.
- US (state level): Laws like the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA amendment) give people rights over their data, including data used to train AI.
2. Consumer Protection & Deceptive Practices
- US: The Federal Trade Commission (FTC) has said clearly: AI is still covered by laws against unfair or deceptive practices.
- Example: If a company lies about what its AI can do (e.g., claims it is 100% accurate in medical decisions), the FTC can investigate.
3. Anti‑Discrimination & Civil Rights
- Employment: If an AI hiring tool rejects women more often than men for the same job, that can violate anti‑discrimination laws.
- Lending: If an AI system offers worse loan terms to certain racial groups, that can break fair lending laws.
4. Intellectual Property (IP) & Copyright
- Training data: Courts are currently deciding how copyright law applies to training generative AI.
- Outputs: If someone uses AI to generate art that copies a famous character, that may still infringe copyright or trademarks.
These examples show: AI is not in a legal vacuum. It is squeezed by many existing rules, even if the word “AI” never appears in the law.
Step 3 – Spot the Law: Quick Thought Exercise
Match each AI scenario to the main type of existing law that is most obviously involved.
Write down or say your answers before checking the hints.
1. A school uses an AI proctoring system that records students’ faces and rooms during tests, then stores the videos for years.
2. A bank uses an AI model that approves far fewer loans for one racial group, even when income and credit score are the same.
3. A shopping website uses AI to fake customer reviews to boost sales.
4. A startup trains an AI image generator on millions of online artworks without asking artists.
Think: Which area fits best?
- A. Privacy / Data Protection
- B. Anti‑Discrimination / Civil Rights
- C. Consumer Protection / Deceptive Practices
- D. Intellectual Property / Copyright
Suggested answers (no peeking until you try):
> 1 → A (Privacy / Data Protection)
> 2 → B (Anti‑Discrimination / Civil Rights)
> 3 → C (Consumer Protection / Deceptive Practices)
> 4 → D (Intellectual Property / Copyright)
Notice how none of these rely on a special “AI law.” They use laws that already existed.
Step 4 – New AI‑Specific Rules in the United States
On top of older laws, the US has added AI‑specific rules and policies, especially from 2023 onward.
1. Federal Level (National)
- White House Executive Order on Safe, Secure, and Trustworthy AI (Oct 2023)
- Not a law passed by Congress, but a policy order directing federal agencies on safety testing (“red‑teaming”), privacy protections, civil rights, worker impact, and government use of AI.
- Rescinded in January 2025 and replaced by a new executive order with a different emphasis — a reminder that executive‑branch AI policy can shift between administrations.
- NIST AI Risk Management Framework (2023)
- A voluntary framework from the US National Institute of Standards and Technology.
- Helps organizations identify and manage AI risks.
- Sector Guidance
- Agencies such as the Department of Education, the FDA, the CFPB, and the EEOC have issued guidance on AI in schools, medical devices, finance, and hiring.
2. State‑Level AI Laws (Examples)
US states have moved faster than Congress in some areas:
- Colorado AI Act (signed 2024, taking effect in stages)
- Focuses on high‑risk AI systems (e.g., those affecting employment, credit, health, housing).
- Requires risk management, impact assessments, and certain disclosures to users.
- California & other states
- Considering or adopting rules on automated decision tools, deepfakes in elections, and AI in hiring.
- California already has strong privacy rules that affect AI training and use.
> Key idea: In the US, AI is governed by a patchwork: federal policies + agency guidance + state laws. This is one reason AI feels like a regulatory labyrinth instead of a single clear rulebook.
Step 5 – International & Regional AI Rules (EU, UK, Others)
Outside the US, several regions have created comprehensive AI frameworks.
1. European Union – The EU AI Act
- The EU AI Act was politically agreed in 2023 and entered into force in August 2024, with its obligations phasing in over the following years.
- It is the first major, binding AI‑specific regulation in the world.
- Uses a risk‑based approach:
- Unacceptable risk: Certain AI uses are banned (e.g., social scoring by governments, some types of manipulative systems).
- High‑risk AI: Systems used in areas like jobs, education, credit, law enforcement must follow strict rules: risk management, data quality, documentation, human oversight, post‑market monitoring.
- Limited risk: Transparency duties (e.g., telling users they are interacting with AI, or that content is AI‑generated).
- Minimal risk: Many everyday AI systems, with fewer obligations.
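The tiered logic above can be sketched as a toy lookup. The tier names mirror the Act’s categories, but the use‑case mappings here are illustrative assumptions only — deciding where a real system falls requires legal analysis of the Act itself:

```python
# Toy sketch of risk-based classification, loosely modeled on the
# EU AI Act's four tiers. The use-case examples in each tier are
# illustrative assumptions, not legal classifications.

RISK_TIERS = {
    "unacceptable": {"government social scoring"},
    "high": {"hiring screening", "credit scoring", "exam grading"},
    "limited": {"customer chatbot", "ai-generated imagery"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; anything unlisted defaults to minimal risk."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"

print(classify("credit scoring"))   # a high-risk use under this sketch
print(classify("spam filtering"))   # falls through to the minimal tier
```

The design point the sketch captures: obligations attach to the *use case*, not to “AI” as such — the same model could land in different tiers depending on how it is deployed.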
2. United Kingdom
- The UK has taken a more “pro‑innovation” approach.
- Instead of one big AI law, it relies on existing regulators (such as the Information Commissioner’s Office and the Competition and Markets Authority) and provides principles for safe AI.
- The UK has held AI Safety Summits (like the 2023 Bletchley Park summit) to coordinate international approaches.
3. Other Countries and Regions
- Canada: Proposed the Artificial Intelligence and Data Act (AIDA) as part of a broader digital charter; the bill lapsed in 2025, but it signals the direction of Canadian AI policy.
- China: Has issued rules on recommendation algorithms, deep synthesis (deepfakes), and generative AI services, focusing on security, content control, and transparency.
- OECD & UNESCO: Provide international AI principles (like transparency, fairness, accountability) that many countries reference when writing laws.
> Takeaway: Around the world, AI is being regulated through risk‑based frameworks and transparency requirements, not left totally alone.
Step 6 – Quick Check: Risk‑Based Regulation
Test your understanding of the risk‑based approach to AI regulation.
In a risk‑based AI regulation system (like the EU AI Act), which statement is MOST accurate?
- A. All AI systems are banned until they are proven perfectly safe.
- B. AI systems are treated differently depending on how much harm they could cause.
- C. Only entertainment AI (like game bots) is regulated; other AI is ignored.
Show Answer
Answer: B) AI systems are treated differently depending on how much harm they could cause.
Risk‑based systems focus on the **level of potential harm**. High‑risk AI (e.g., in jobs, credit, policing) faces stricter rules, while low‑risk AI has lighter obligations. They do NOT ban all AI, and they certainly do not regulate only entertainment AI.
Step 7 – The Real Problem: A Regulatory Labyrinth
If AI is not a lawless Wild West, why does it often feel chaotic?
Because instead of no rules, we have:
- Many overlapping rules
- One AI system might need to follow privacy law, consumer protection law, anti‑discrimination law, sector rules, and a new AI‑specific act.
- Different rules in different places
- An AI startup operating in the US, EU, and UK faces different definitions, different risk categories, and different paperwork.
- Rules that are still evolving
- Courts are still deciding how to apply old laws (like copyright) to new AI technologies.
- Legislatures keep proposing new AI bills, some of which conflict.
> Visual description: Imagine a maze drawn on a page. Each wall is labeled “Privacy,” “Copyright,” “State AI Law,” “EU AI Act,” “Consumer Protection.” The AI developer is in the middle, trying to find a path that does not break any rule.
So the real challenge is:
- Not “there are no rules”, but
- “there are many rules, and they do not always fit together neatly.”
Step 8 – Apply It: Is This ‘No Rules’ or ‘Too Many Rules’?
Read each scenario and decide whether it shows (A) no regulation or (B) complex / evolving regulation.
1. A company launches a global chatbot without checking any privacy or consumer laws, insisting, “There are no AI laws anywhere.”
2. A health‑tech startup wants to use AI to read X‑rays. They must consider medical device rules, patient privacy, and new AI risk frameworks, which differ by country.
3. A social media app uses AI to recommend content. It must follow platform rules, youth protection laws, and new rules on recommender systems in some regions.
Think through your answers before reading below.
> Suggested answers:
> 1 → Shows a belief in no regulation, but it is incorrect in reality.
> 2 → Complex / evolving regulation (health + privacy + AI‑specific).
> 3 → Complex / evolving regulation (multiple overlapping rules).
Notice that in real life, almost all serious AI use cases fall into (B).
Step 9 – Review Key Terms
Flip these cards (mentally or with a partner) to review the most important ideas from this myth.
- Risk‑based AI regulation
- An approach where AI systems are regulated more or less strictly depending on the **potential harm** they can cause (e.g., high‑risk vs. low‑risk systems).
- High‑risk AI system
- An AI system used in sensitive areas such as **employment, education, health, credit, or law enforcement**, where mistakes can seriously affect people’s lives.
- Transparency requirement
- A rule that organizations must **inform users** when they are interacting with AI, or when content (like images or text) is AI‑generated.
- Regulatory labyrinth
- A situation where there are **many overlapping laws and rules** (old and new), making it hard to understand and follow all the requirements for AI.
- Executive Order (US context)
- A directive issued by the **US President** to federal agencies, which shapes how they develop and enforce rules (including around AI), even though it is not a law passed by Congress.
- Existing law vs. AI‑specific law
- Existing laws (like privacy, discrimination, consumer protection) already apply to AI, while AI‑specific laws are **new rules written with AI directly in mind** (like the EU AI Act or state AI acts).
Step 10 – Final Check: Busting the ‘Wild West’ Myth
One last question to see if the myth is truly busted for you.
Which statement best captures the reality of AI regulation as of March 2026?
- A. AI operates in a total legal vacuum; there are no meaningful rules yet.
- B. AI is heavily shaped by existing laws, and new AI‑specific rules are emerging, but they form a complex and sometimes inconsistent patchwork.
- C. AI is fully controlled by one single global AI law that every country follows.
Show Answer
Answer: B) AI is heavily shaped by existing laws, and new AI‑specific rules are emerging, but they form a complex and sometimes inconsistent patchwork.
AI is **not** unregulated, and there is **no single global AI law**. Instead, we have many existing laws (privacy, discrimination, consumer protection, IP, sector rules) plus new AI‑specific frameworks (like the EU AI Act and state AI laws), creating a complex and evolving regulatory patchwork.
Key Terms
- EU AI Act
- A comprehensive European Union regulation that classifies AI systems by risk level and imposes specific obligations on high‑risk and other types of AI.
- Existing law
- A law that was created before current AI systems but still applies to AI uses (for example, privacy law, consumer protection law, or anti‑discrimination law).
- AI‑specific law
- A newer law or regulation written directly to address AI systems and their risks (for example, the EU AI Act or certain US state AI acts).
- Executive Order (US)
- A directive from the US President to federal agencies that guides how they interpret and enforce laws and develop policies, including those related to AI.
- Regulatory labyrinth
- A complicated and sometimes confusing mix of overlapping laws, regulations, and guidelines that organizations must navigate when using AI.
- High‑risk AI system
- An AI system used in sensitive or impactful areas (such as jobs, credit, education, health, or policing) where errors can seriously affect people’s rights or opportunities.
- Transparency requirement
- A legal or policy rule that organizations must clearly tell users when AI is being used or when content is AI‑generated.
- Risk‑based AI regulation
- A method of regulating AI where systems are treated differently based on how much harm they could cause; higher‑risk uses face stricter obligations.
- NIST AI Risk Management Framework
- A voluntary US framework published by the National Institute of Standards and Technology to help organizations identify, assess, and manage AI risks.
- Civil rights / Anti‑discrimination law
- Laws that protect people from unfair treatment based on protected characteristics (such as race, gender, religion) in areas like employment, housing, and lending; they also apply when AI is used to make these decisions.