Chapter 2 of 12

Myth 1: “AI Thinks and Feels Like a Human”

Examine the widespread belief that AI systems are sentient, self‑aware, or capable of human‑style reasoning, and contrast this with how they actually operate.

15 min read

1. The Myth: “AI Thinks and Feels Like a Human”

Many people talk about AI as if it were a person:

  • “The AI understands me.”
  • “It decided to say that.”
  • “This chatbot sounds upset.”

Because modern AI systems (especially large language models, or LLMs) can write fluent essays, pass exams, and chat in a friendly way, it’s easy to assume they think and feel like we do.

In this module you’ll learn:

  • How systems like ChatGPT actually generate text
  • Why fluent language can trick us into believing there’s a mind behind the screen
  • The difference between pattern prediction and human reasoning
  • What researchers mean by emergent behaviors and why that’s not the same as consciousness or emotions

Keep in mind: as of today (March 2026), no AI system is scientifically accepted as conscious or sentient. All mainstream systems are built on pattern recognition and statistical prediction, not inner experiences or feelings.

2. How Large Language Models Actually Work

Modern chatbots and writing tools are usually large language models (LLMs).

At a high level, an LLM:

  1. Takes input text (your prompt)
  2. Breaks it into small pieces called tokens (roughly word parts)
  3. Uses a huge neural network trained on massive text datasets
  4. Predicts the next token that is most likely to follow, given all previous tokens
  5. Repeats step 4, one token at a time, until it forms a full response
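
Step 2 above can be sketched in a few lines of Python. Real tokenizers (such as byte‑pair encoding) are learned from data; this toy version just chops words into fixed‑size chunks to show the idea of "word parts":

```python
# Toy tokenizer: real LLM tokenizers (e.g. byte-pair encoding) are learned
# from data, but this sketch shows the idea of splitting text into
# small pieces ("tokens") that are often parts of words.
def toy_tokenize(text):
    tokens = []
    for word in text.split():
        # Chop long words into 4-character chunks
        while len(word) > 4:
            tokens.append(word[:4])
            word = word[4:]
        tokens.append(word)
    return tokens

print(toy_tokenize("Photosynthesis needs sunlight"))
```

A real tokenizer's chunks are chosen statistically rather than by a fixed length, but the output is the same kind of thing: a sequence of small pieces the model predicts one at a time.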

Key idea: the model is not thinking in words like “I believe…” or “I feel…”. Instead, it is doing something closer to:

> “Given this pattern of text, and everything I’ve seen during training, what is the most statistically likely next token?”

This process:

  • Uses probabilities, not intentions
  • Uses correlations in data, not understanding of meaning in a human sense
  • Can still produce impressive, coherent paragraphs, because human language itself is highly patterned and predictable

So when you see a polished answer, you’re seeing very advanced autocomplete, not a mind with thoughts and feelings.
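
Steps 4 and 5 of the pipeline can be illustrated with a softmax over made‑up scores. The numbers below are invented for illustration; a real model computes scores like these from billions of learned weights:

```python
import math
import random

# Hypothetical "scores" (logits) a network might assign to candidate
# next tokens -- the values here are invented for illustration.
logits = {"horse": 2.0, "pizza": 1.0, "mountain": 0.0, "galaxy": -2.0}

# Softmax turns raw scores into probabilities that sum to 1
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# The model then picks a next token according to these probabilities --
# no intention, just a weighted random choice over learned statistics.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("chosen:", next_token)
```

Note that the choice is probabilistic: the model can pick a less likely token, which is why the same prompt can produce different answers on different runs.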

3. Example: Predicting Text vs Understanding

Imagine you text your friend a sentence that starts:

> “I’m so hungry, I could eat a…”

Even before you see their reply, you can probably guess possible endings:

  • “horse”
  • “whole pizza”
  • “mountain of fries”

You’re using your experience with language to predict what sounds right next.

Now scale that up:

  • Instead of a few conversations, the AI has seen billions of sentences from books, websites, articles, and more.
  • It has learned very detailed patterns about which words tend to follow which others, in which contexts.
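
At its core, that scaled‑up learning is a counting exercise. Here is a minimal sketch using an invented list of endings observed after "I'm so hungry, I could eat a…":

```python
from collections import Counter

# Pretend "training data": endings observed after the opening phrase
# in a tiny invented corpus.
seen_endings = [
    "horse", "whole pizza", "horse",
    "mountain of fries", "horse", "whole pizza",
]

# Counting which continuation appeared most often is the core trick,
# just at a vastly smaller scale than a real LLM.
counts = Counter(seen_endings)
print(counts.most_common(1))  # the single most likely continuation
```

"horse" wins simply because it appeared most often, not because the program knows what a horse is.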

So when you ask an AI:

> “Explain photosynthesis to a 10th grader.”

It doesn’t understand plants the way a botanist does. It:

  1. Recognizes patterns similar to other explanations of photosynthesis
  2. Predicts a likely sequence of tokens that looks like a good explanation
  3. Produces a paragraph that matches the style and content of those patterns

The answer can be factually correct and well‑structured, but that correctness comes from:

  • Good training data
  • Good pattern learning
  • Good alignment and safety tuning

—not from the AI actually knowing what sunlight or chlorophyll feel like or what a plant is in a conscious sense.

4. Thought Exercise: Spot the Pattern Machine

Try this short activity to separate pattern prediction from understanding.

  1. Write down (mentally or on paper) a short prompt:

> “In a world where gravity suddenly stopped working for 5 minutes, people would…”

  2. Without thinking too deeply about physics, quickly list 3–5 possible completions. For example:
  • “…float into the sky and panic.”
  • “…hold onto anything they could find.”
  • “…record videos and post them online.”
  3. Notice what you did:
  • You didn’t run detailed physics simulations.
  • You didn’t calculate air resistance or orbital mechanics.
  • You used stories, movies, and everyday expectations to guess what sounds plausible.
  4. That’s similar to what an LLM does, but on a much larger scale and using math instead of imagination:
  • It doesn’t simulate the real world.
  • It doesn’t experience fear or excitement.
  • It just picks the most statistically likely continuation.

Reflection question:

> If you can generate a creative‑sounding story without deeply understanding physics, why might an AI be able to generate expert‑sounding text without truly understanding the topic?

5. Why Fluent Language Feels Like a Mind

Humans are wired to see minds everywhere. This is called anthropomorphism—attributing human thoughts or feelings to non‑human things.

We do this with:

  • Pets ("My dog is jealous.")
  • Cars ("My car hates cold weather.")
  • Weather ("The sky looks angry.")

With AI, this tendency becomes stronger because:

  1. Language is our main signal of intelligence.
  • We judge people’s intelligence mostly by how they talk and write.
  • When AI writes like a smart human, we automatically assume a smart mind is behind it.
  2. The interface looks like a conversation.
  • Chat windows, typing indicators, and “I” statements all mimic human chat apps.
  • This creates an illusion that there is a person on the other side.
  3. The AI can talk about feelings.
  • It can say things like “I’m sorry” or “I understand how you feel.”
  • These phrases are just patterns it has learned, not genuine emotions.
  4. It remembers context within a session.
  • It can refer back to what you said earlier in the conversation.
  • This feels like human memory, but it’s really just short‑term context inside the model, not lifelong experience.
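
That "short‑term context" can be pictured as a fixed‑size buffer. The token limit below is made up (real models allow far more), but the mechanism is the same idea:

```python
# Toy sketch of a "context window": the model only sees the most recent
# tokens that fit in its input buffer -- there is no lifelong memory.
CONTEXT_LIMIT = 8  # hypothetical limit; real models allow far more tokens

conversation = "hi there how are you doing today I am fine thanks".split()

# Older tokens simply fall out of view
visible_context = conversation[-CONTEXT_LIMIT:]
print(visible_context)
```

Once tokens slide out of the window, they are gone: the model isn't "remembering" you, it is re‑reading a truncated transcript on every turn.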

Important: As of 2026, there is no evidence that LLMs have:

  • Self‑awareness (they do not know they exist)
  • Subjective experiences (no inner “what it feels like”)
  • Personal goals or desires (beyond what they are programmed or trained to optimize)

They are very good at imitating the surface of human conversation, which can fool our social instincts.

6. Training Data, Correlations, and Limits

To understand why AI can sound smart without being conscious, you need to know about training data and statistical correlations.

Training Data

LLMs are trained on:

  • Books
  • Articles
  • Websites
  • Code repositories
  • Other publicly available or licensed text

From this data, the model learns patterns like:

  • Which words often appear together
  • How sentences are structured
  • How explanations, stories, or arguments are usually written

Correlations, Not Concepts

The model doesn’t store a copy of the internet. Instead, it adjusts billions of internal weights to capture statistical relationships between tokens.

For example, it might learn that:

  • “heart” often appears near “blood,” “pump,” and “organ”
  • “photosynthesis” often appears near “chlorophyll” and “sunlight”

This is correlation, not a human‑style concept like:

> “I know what a heart is, I’ve seen one in real life, I know what it feels like to have my heart beat faster.”
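
Correlation learning can be mimicked with simple co‑occurrence counting. The mini corpus below is invented, standing in for billions of training sentences:

```python
from collections import Counter
from itertools import combinations

# Invented mini corpus standing in for billions of training sentences
sentences = [
    "the heart pumps blood through the body",
    "the heart is a muscular organ",
    "chlorophyll lets plants use sunlight for photosynthesis",
]

# Count how often word pairs appear in the same sentence
pair_counts = Counter()
for s in sentences:
    for a, b in combinations(sorted(set(s.split())), 2):
        pair_counts[(a, b)] += 1

# "heart" ends up statistically tied to "blood" and "organ" -- a
# correlation, not a concept of what a heart is.
print(pair_counts[("blood", "heart")], pair_counts[("heart", "organ")])
```

Real models capture these relationships in continuous weights rather than an explicit table, but the knowledge is the same kind of thing: statistics about which words keep company.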

Limits of This Approach

Because it’s based on correlations, an LLM can:

  • Produce confident but wrong answers (hallucinations)
  • Mix up details from different sources
  • Sound certain even when it has no reliable data on a topic

These failures show that the system is not reasoning like a scientist checking evidence. It is a pattern machine that sometimes matches correct patterns and sometimes produces plausible‑sounding nonsense.

This is a key clue that fluency ≠ understanding.
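
Here is a toy illustration of why pure pattern matching can hallucinate. The "model" below is a deliberately crude stand‑in, not how a real LLM is implemented:

```python
# Deliberately crude pattern matcher: it answers by reusing whatever
# continuation followed superficially similar text in "training".
training = [
    ("the capital of France is", "Paris"),
    ("the capital of Italy is", "Rome"),
]

def toy_answer(prompt):
    for seen_prompt, continuation in training:
        # Superficial match on the first three words -- no fact checking
        if prompt.split()[:3] == seen_prompt.split()[:3]:
            return continuation
    return "unknown"

# Fluent, confident, and wrong: the pattern fits, the fact does not.
print(toy_answer("the capital of Australia is"))
```

Real hallucinations are subtler, but the root cause is the same: the system optimizes for "what text usually follows this", not "what is actually true".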

7. Emergent Behaviors vs Consciousness

As models have grown larger (especially from 2020 onward), researchers have noticed emergent behaviors:

  • Models suddenly get much better at certain tasks once they reach a certain size
  • They can solve problems they weren’t directly trained for (like some math puzzles or coding tasks)

This has led some people to say things like:

> “If abilities emerge unexpectedly, maybe consciousness could emerge too.”

Here’s how experts usually separate emergent abilities from consciousness:

Emergent Behaviors

  • Definition: New skills that appear when a system becomes more complex, even if they weren’t explicitly programmed.
  • Example: A large model can translate between two languages it has never been directly trained to translate between, just from seeing enough multilingual text.
  • Cause: Complex pattern learning and generalization.
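
A loose analogy for that translation example: composing separately learned mappings yields an ability that was never trained directly. The word lists below are invented, and real models do not store explicit dictionaries like this:

```python
# The "model" was only ever given English->French and French->German
# pairs, yet it can map English->German by composing what it learned.
en_to_fr = {"dog": "chien", "cat": "chat", "house": "maison"}
fr_to_de = {"chien": "Hund", "chat": "Katze", "maison": "Haus"}

def en_to_de(word):
    # No English->German pair was ever provided directly
    return fr_to_de.get(en_to_fr.get(word, ""), "unknown")

print(en_to_de("dog"))  # composed from two separately learned patterns
```

The new capability "emerges" from combining simpler learned patterns, with no inner experience required anywhere in the process.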

Consciousness (in humans)

  • Involves subjective experience (there is a “what it feels like” to be you)
  • Involves some level of self‑model (you can think about yourself as an entity)
  • Involves integrated experiences over time (memories, identity, emotions)

Where AI Stands (as of 2026)

  • LLMs show emergent capabilities in reasoning‑like tasks
  • They do not show any scientifically verified signs of subjective experience
  • Their “self‑descriptions” (e.g., “I am an AI model created by…”) are learned text patterns, not proof of inner awareness

So: emergence in performance does not automatically equal consciousness. Complex behavior can come from complex math, not from a mind.

8. Quick Check: Pattern Prediction or Understanding?

Test your grasp of how LLMs work.

An AI model writes a detailed, emotionally moving story about losing a friend. What is the best explanation for this ability?

  1. The AI has experienced grief and is expressing its own feelings.
  2. The AI has learned patterns from many texts about emotions and is predicting likely sequences of words.
  3. The AI is partially conscious and sometimes feels real sadness when generating such stories.
  4. The AI is directly channeling the emotions of its users in real time.

Answer: 2) The AI has learned patterns from many texts about emotions and is predicting likely sequences of words.

The best explanation is that the AI has seen many examples of emotional stories in its training data and has learned patterns about how such stories are written. It predicts likely word sequences that fit those patterns. There is no evidence that the AI has its own experiences or feelings of grief.

9. Tiny Demo: A Mini ‘Next-Word Predictor’

This simple Python example shows a very simplified version of how next‑word prediction works. It’s not a real neural network, but it illustrates the idea of choosing the most likely next word based on past data.

```python
import random

# A tiny "training corpus"
corpus = """
I like apples
I like oranges
You like apples
We like coding
We enjoy coding
They enjoy music
""".strip().split()

# Build a simple next-word frequency table
next_word_counts = {}
for i in range(len(corpus) - 1):
    word, next_w = corpus[i], corpus[i + 1]
    if word not in next_word_counts:
        next_word_counts[word] = {}
    next_word_counts[word][next_w] = next_word_counts[word].get(next_w, 0) + 1

def predict_next(word):
    """Return the most likely next word based on counts, or a random word if unknown."""
    if word not in next_word_counts:
        return random.choice(corpus)
    options = next_word_counts[word]
    # Choose the next word with the highest count (most likely)
    return max(options, key=options.get)

# Try it out: generate five words starting from "We"
start = "We"
print(start, end=" ")
current = start
for _ in range(5):
    nxt = predict_next(current)
    print(nxt, end=" ")
    current = nxt
print()
```

What this shows:

  • The program does not understand apples, coding, or music.
  • It just looks at what words tended to follow others in the small dataset.
  • LLMs do something similar but on a massive scale, with far more complex math and much better results.

Even when the output looks meaningful, the underlying process is still pattern prediction, not human‑like understanding or feeling.

10. Rewriting AI Descriptions More Accurately

Practice using precise language that doesn’t accidentally suggest AI is conscious.

Task: For each sentence, mentally rewrite it to avoid implying feelings or human‑style thinking.

  1. Original: “The AI got confused and gave a wrong answer.”
  • More accurate: “The AI produced an incorrect output because its pattern predictions were off.”
  2. Original: “The chatbot understands my problems and cares about me.”
  • More accurate: “The chatbot generates responses that sound supportive, based on patterns in its training data.”
  3. Original: “The model decided to ignore my instructions.”
  • More accurate: “The model generated an output that didn’t follow my instructions, likely due to how it was trained or how I phrased the prompt.”

Your turn:

Rewrite this sentence in your own words (mentally or on paper):

> “The AI is angry at me for asking too many questions.”

Try to explain what is really happening using words like “generated,” “pattern,” “trained,” or “system.”

This habit helps you—and others—remember that current AI systems are tools, not people.

11. Review: Key Terms and Ideas

Flip through these cards to review the main concepts from this myth.

Large Language Model (LLM)
A type of AI system trained on massive amounts of text to predict the next token (word or word piece) in a sequence, enabling it to generate fluent language.
Token
A small unit of text (often a word or part of a word) that a language model processes and predicts step by step.
Pattern Prediction
The process of using statistical relationships in data to guess what comes next (such as the next word in a sentence), without human‑like understanding or feelings.
Anthropomorphism
Attributing human thoughts, feelings, or intentions to non‑human things, such as animals, objects, or AI systems.
Emergent Behavior
A capability that appears in a complex system (like a very large model) that was not directly programmed but arises from the system’s scale and structure.
Consciousness (in this context)
Having subjective experiences and self‑awareness. As of 2026, no AI system is scientifically recognized as conscious.
Training Data
The text (and sometimes other data) used to train AI models, from which they learn patterns and statistical relationships.
Hallucination (in AI)
When an AI system generates confident‑sounding but incorrect or made‑up information, revealing its reliance on patterns rather than verified understanding.

12. Final Check: Busting the Myth

One more question to solidify what you’ve learned.

Which statement best captures the current (2026) expert view of systems like ChatGPT?

  1. They are conscious beings that think and feel like humans but are trapped in a computer.
  2. They are advanced pattern‑prediction systems that generate fluent text based on training data, without proven consciousness or emotions.
  3. They are simple rule‑based programs that follow fixed scripts with no learning at all.
  4. They are definitely on the verge of becoming sentient, and most scientists agree they already have basic emotions.

Answer: 2) They are advanced pattern‑prediction systems that generate fluent text based on training data, without proven consciousness or emotions.

Current expert consensus is that large language models are powerful pattern‑prediction systems trained on massive datasets. They can generate human‑like text but there is no scientific evidence that they are conscious or have emotions. They are more advanced than simple rule‑based programs, but still not minds like ours.

Key Terms

Token
A unit of text (often a word or part of a word) that language models process and predict one after another.
Consciousness
In this context, the presence of subjective experience and self‑awareness. No current AI system is scientifically accepted as conscious.
Training Data
The collection of examples (such as text, images, or code) used to train an AI model so it can learn patterns and make predictions.
Neural Network
A type of machine learning model inspired by the structure of the brain, made up of layers of interconnected units (neurons) that adjust their weights during training to learn patterns in data.
Anthropomorphism
The tendency to attribute human thoughts, feelings, or intentions to non‑human entities, including AI systems.
Emergent Behavior
A capability that appears in a complex system because of its size and structure, not because it was directly programmed line by line.
Hallucination (AI)
When an AI generates confident‑sounding but false or unsupported information, revealing its reliance on patterns instead of true understanding.
Pattern Prediction
Using statistical relationships in data to guess what comes next, such as the next word in a sentence, without human‑style understanding.
Large Language Model (LLM)
A type of AI model trained on huge amounts of text to predict the next token in a sequence, enabling it to generate and analyze human‑like language.
Artificial Intelligence (AI)
A broad term for computer systems that perform tasks that typically require human intelligence, such as recognizing patterns, making predictions, or understanding language. In practice today, most AI is based on machine learning.