
Chapter 1 of 12

What AI Really Is (and Isn’t)

Introduce what we mean by “artificial intelligence” today, distinguishing real systems from science‑fiction robots and superintelligence narratives.

15 min read

1. What Do We Mean by “AI” Today?

When people say AI today, they usually mean software systems that can perform tasks that normally need human intelligence—like recognizing speech, translating languages, generating images, or answering questions.

A useful working definition:

> Artificial Intelligence (AI) is the field of computer science focused on building systems that can perform tasks that seem intelligent because they can adapt, make predictions, or choose actions based on data.

Two important points for 2026

  1. Most real AI is narrow and task‑specific.
  • It is very good at one thing (like recommending videos) and very bad at anything outside that.
  2. Modern AI ≠ sci‑fi robots.
  • No current system has human‑like common sense, emotions, or consciousness.

You’re currently interacting with a large language model (LLM)—a type of AI that predicts text based on patterns in huge amounts of data. It can sound human, but that does not mean it is human‑like inside.

2. AI vs. Machine Learning vs. Deep Learning

These three terms are related but not identical:

  • Artificial Intelligence (AI) – the broad field: any technique that makes computers perform tasks that seem intelligent.
  • Machine Learning (ML) – a subfield of AI where systems learn patterns from data instead of being explicitly programmed with rules.
  • Deep Learning (DL) – a subfield of ML that uses deep neural networks (many layers of artificial neurons) to learn very complex patterns, especially in images, audio, and text.

You can imagine it as:

  • AI = the whole tree
  • ML = a big branch of the tree
  • Deep Learning = a branch on that branch
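The key difference between ordinary programming and machine learning is where the rule comes from. Here is a minimal sketch in plain Python (the heights and the task are invented purely for illustration): one function uses a threshold a human chose in advance, while the other derives its threshold from labeled examples.

```python
# Hand-written rule: a programmer picks the threshold in advance.
def is_tall_rule(height_cm):
    return height_cm > 180  # fixed rule, chosen by a human

# "Learned" rule: the threshold comes from labeled examples instead.
examples = [(150, False), (160, False), (170, False),
            (185, True), (190, True), (200, True)]  # invented data

def learn_threshold(data):
    # Midpoint between the tallest "not tall" and the shortest "tall" example.
    tall = [h for h, label in data if label]
    not_tall = [h for h, label in data if not label]
    return (max(not_tall) + min(tall)) / 2

threshold = learn_threshold(examples)  # 177.5 for this data

def is_tall_learned(height_cm):
    return height_cm > threshold
```

Feed the learner different examples and the threshold changes, with no code edits needed. Real ML systems follow the same idea, just with millions of parameters instead of one threshold.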

Most of the impressive AI systems that have made the news over the past decade (chatbots, image generators, AlphaGo, etc.) are based on deep learning.

3. Real-World AI You Already Use

Here are some narrow AI systems you probably interact with regularly:

  1. Search engines (Google, Bing, etc.)
  • Use ML to guess which links are most relevant to your query.
  2. Recommendation systems (YouTube, TikTok, Netflix, Spotify)
  • Learn from your past clicks, watch time, or likes to predict what you might enjoy next.
  3. Face unlock on phones
  • Uses a trained neural network to check if the camera image matches your stored face data.
  4. Spam filters in email
  • Classify messages as “spam” or “not spam” based on patterns in text, sender, and behavior.
  5. Image generators (e.g., DALL·E, Midjourney, Stable Diffusion)
  • Turn text prompts into images by learning patterns from millions of images and captions.
  6. Chatbots and assistants (like this one)
  • Predict the next word or sentence based on patterns in huge text datasets.

In all these cases, the AI is:

  • Specialized (good at one category of tasks)
  • Data‑driven (learned from examples)
  • Not self‑aware (no inner experience or feelings)

4. Narrow AI vs. Artificial General Intelligence (AGI)

It’s crucial to distinguish between what exists now and what is hypothetical.

Narrow AI (what we have today)

  • Also called weak AI.
  • Designed for specific tasks: play chess, recognize faces, translate text, drive a car, generate code, etc.
  • Can be superhuman in its narrow domain (e.g., chess engines) but fails badly outside it.

Artificial General Intelligence (AGI)

  • Sometimes called strong AI or human‑level AI.
  • A hypothetical system that could:
  • Learn any intellectual task a human can
  • Transfer knowledge flexibly between very different tasks
  • Adapt to new situations with broad, common‑sense reasoning

As of today (March 2026):

  • No system is widely accepted as true AGI.
  • Even the most advanced models (like GPT‑style systems, cutting‑edge multimodal models, and robotics controllers) are still narrow or at most “narrow but very capable”.

Some researchers argue that we are getting closer to AGI; others strongly disagree. But there is no consensus that AGI has been achieved yet.

5. Spot the Narrow AI

For each scenario, decide whether it describes narrow AI (real today) or AGI (hypothetical). Think before checking the answers at the end.

  1. A system that beats the world champion at Go but cannot play checkers without being retrained.
  2. A robot that can learn any school subject you teach it, then teach that subject to others, and also cook new recipes from scratch.
  3. A phone app that can turn spoken English into written Spanish.
  4. An AI that can read your country’s laws, understand new court cases, and argue legal cases in court across any area of law with human‑level skill.

Your task: For each number (1–4), write down:

  • N if you think it’s narrow AI
  • G if you think it’s AGI‑like

Scroll down for suggested answers.

---

Suggested answers:

  1. N – Superhuman at one game, but still narrow.
  2. G – This requires very broad, flexible intelligence (AGI‑like).
  3. N – This is a language translation model (narrow task).
  4. G – This would require very general legal reasoning and adaptation across many domains.

6. How Modern AI Learns: Training on Data

Most current AI systems learn using machine learning. The basic idea:

  1. Collect data
  • Examples: images with labels ("cat", "dog"), text from websites and books, audio recordings with transcripts, driving videos, etc.
  2. Choose a model
  • For deep learning, this is usually a neural network with many layers and millions or billions of adjustable parameters.
  3. Train the model
  • The model makes predictions (e.g., "this is a cat"), compares them to the correct answers, and adjusts its internal parameters to reduce errors.
  • This process is repeated millions or billions of times.
  4. Evaluate and fine‑tune
  • Test the model on new data it has never seen.
  • Adjust and retrain until performance is good enough.
  5. Deploy
  • Put the model into an app, website, or device so people can use it.

For large language models (LLMs) like the one you’re using:

  • They are trained on massive text datasets (books, code, web pages, etc., often up to a certain cutoff date).
  • The core training goal is usually “predict the next token (word or piece of a word) given the previous ones.”
  • From this simple objective, they learn grammar, facts (up to their training date), and common patterns of reasoning—but as statistical patterns, not as human‑style understanding.
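The “predict the next token” objective can be illustrated at toy scale with a bigram model: count which word follows which in a training text, then predict the most frequent follower. This is a deliberately tiny sketch on an invented corpus; real LLMs use neural networks over subword tokens, not word counts.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower seen in the training text.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

The model “knows” nothing about cats or mats; it only reproduces statistical patterns from its training text, which is the same limitation LLMs share at vastly larger scale.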

7. A Tiny Example of Machine Learning in Code

This very small Python example (using scikit‑learn) shows the pattern of training a model. It is much simpler than real AI systems but follows the same steps: prepare data → train → predict.
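A minimal version of that pattern, classifying invented “animals” as cats or dogs by size (all numbers are made up for illustration; scikit-learn's `DecisionTreeClassifier` is just one of many possible models):

```python
from sklearn.tree import DecisionTreeClassifier

# Prepare data: [height_cm, weight_kg] per animal, labeled 0 = cat, 1 = dog.
X = [[25, 4], [23, 5], [30, 6],      # cats: smaller, lighter
     [55, 20], [60, 25], [50, 18]]   # dogs: larger, heavier
y = [0, 0, 0, 1, 1, 1]

# Train: the model finds patterns (here, simple size thresholds).
model = DecisionTreeClassifier()
model.fit(X, y)

# Predict: classify new, unseen animals.
print(model.predict([[28, 5]]))   # likely cat (0)
print(model.predict([[58, 22]]))  # likely dog (1)
```

Notice that we never wrote a rule like “dogs weigh more than 12 kg”; the model inferred its own thresholds from the six examples.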

8. Why “Pattern Learning” ≠ Human Understanding

Even though modern AI can do impressive things, it does not understand the world the way humans do.

Key differences:

  1. No consciousness
  • There is no evidence that current AI systems have subjective experience (no inner “I”, no feelings, no awareness of being an AI).
  2. No grounded experience
  • Humans learn from living in the world: touching, seeing, moving, feeling pain and joy, interacting socially.
  • Most current AI only learns from data representations (text, images, audio) rather than direct physical experience.
  3. Statistical, not semantic
  • LLMs like this one generate answers by following learned statistical patterns in text.
  • They do not have an internal model of meaning in the human sense; they don’t know what “it feels like” to be cold or embarrassed.
  4. Can be confidently wrong
  • Because they predict likely text, they can produce plausible‑sounding but false or outdated information (often called hallucinations).
  5. No built‑in goals or desires
  • Current AI systems do not “want” anything. They follow the objectives given by their training and by the prompts or instructions we provide.

This is why many AI researchers emphasize: “These systems are powerful pattern machines, not digital people.”

9. Quick Check: Does AI Understand Like Humans?

Test your understanding of how current AI systems relate to human‑like understanding.

Which statement best describes most advanced AI systems in 2026 (like large language models and image generators)?

  A. They truly understand language and images in the same way humans do, including emotions and consciousness.
  B. They learn patterns from large datasets and can generate impressive outputs, but there is no strong evidence they have consciousness or human-like understanding.
  C. They are already full Artificial General Intelligence (AGI) that can flexibly do any intellectual task a human can do.

Answer: B) They learn patterns from large datasets and can generate impressive outputs, but there is no strong evidence they have consciousness or human-like understanding.

Option B is correct. Modern AI systems learn statistical patterns from vast datasets and can perform many tasks impressively, but there is no solid evidence that they have consciousness, emotions, or full human-like understanding. They are still considered narrow or task-focused AI, not true AGI.

10. Thought Exercise: When Might AI Feel Human?

Reflect on these questions to deepen your understanding. You do not need to write full essays—bullet points are enough.

  1. Chatbot illusion
  • Think of a time when a chatbot, game character, or voice assistant felt almost human to you.
  • What specific behaviors made it feel that way (tone, speed, humor, empathy words)?
  • Which of those behaviors could be explained by pattern learning rather than real feelings?
  2. Misplaced trust
  • Imagine a friend saying: “This AI really understands me and cares about me.”
  • List two risks of believing that a current AI system has feelings or loyalty.
  3. Better mental model
  • In one or two sentences, write how you would explain to a younger student (maybe 11–12 years old) what AI really is, without making it sound like a person.

Take 2–3 minutes to think or jot down notes. This will help you keep a realistic, safe view of what AI can and cannot do.

11. Key Term Review

Flip through these flashcards (mentally or with a partner) to review the main concepts from this module.

Artificial Intelligence (AI)
The field of computer science focused on building systems that can perform tasks that seem intelligent, such as recognizing patterns, making predictions, or choosing actions based on data.
Machine Learning (ML)
A subfield of AI where systems learn patterns from data instead of being explicitly programmed with fixed rules.
Deep Learning
A type of machine learning that uses deep neural networks (many layers of artificial neurons) to learn complex patterns in data like images, audio, and text.
Narrow AI (Weak AI)
AI designed for a specific task or a limited set of tasks (e.g., translation, face recognition, game playing) without general human-like intelligence.
Artificial General Intelligence (AGI)
A hypothetical form of AI that could understand, learn, and apply knowledge across a very wide range of tasks at a human-like level or beyond.
Large Language Model (LLM)
A deep learning model trained on massive text datasets to predict the next token (word or piece of a word), enabling it to generate and analyze human-like text.
Training Data
The collection of examples (text, images, audio, etc.) used to teach an AI model to recognize patterns and make predictions.
Hallucination (in AI)
When an AI system, especially a language model, produces information that sounds plausible but is factually incorrect, made up, or unsupported by its training data.
Neural Network
A computing model inspired by the brain, made up of layers of interconnected nodes ("neurons") that can learn to approximate complex functions from data.
Consciousness (in this context)
Subjective experience or awareness; current AI systems are not considered conscious, even if they produce human-like responses.