Chapter 3 of 9

Inside the Language: How Sign Languages Work

Examine core linguistic features of sign languages, including phonology, grammar, and the role of space and facial expressions.

15 min read

1. From Seeing Language to Inside the System

In earlier modules, you saw that:

  • Sign languages are natural human languages used primarily by Deaf communities.
  • There are many different sign languages (ASL, BSL, Libras, LSF, etc.), not just one "universal sign language".

Now we zoom inside the language system itself.

In this module you will:

  • Break a sign into its building blocks (similar to sounds in spoken language).
  • See how sign languages use 3D space and facial grammar.
  • Explore how the brain processes sign and spoken language in similar ways.

We will mostly use ASL (American Sign Language) for examples, but the general ideas apply to many sign languages. Remember: specific forms vary by language, but the linguistic principles are widely shared.

2. Phonology in the Hands: The 5 Key Parameters

Spoken languages build words from phonemes (like /p/, /t/, /a/). Sign languages also have a kind of phonology: small units that change meaning but are not meaningful by themselves.

A single sign can be analyzed with five main parameters:

  1. Handshape – which fingers are extended, bent, spread, etc.
  2. Location – where the sign is produced (forehead, chest, neutral space, etc.).
  3. Movement – how the hands move (straight, circular, repeated, wiggling, etc.).
  4. Orientation – which way the palm and fingers face (up, down, toward signer, away, etc.).
  5. Non-manual signals – facial expressions, mouth shapes, head and body movements.

Changing one parameter can change the meaning, just like changing one sound can turn bat into pat.

> Key idea: These parameters are part of linguistic structure, not just "style" or "emotion."

3. Visualizing Parameters with ASL Examples

You do not need to know ASL to follow this. Focus on how the signs differ.

Example A: ASL signs often glossed as `MOTHER` and `FATHER`

> Note: In written linguistics, signs are often written in ALL CAPS as glosses.

  • MOTHER (ASL)
      • Handshape: Open hand, fingers spread; thumb extended.
      • Location: Thumb touches the chin.
      • Movement: Usually a light tap or contact; minimal movement.
      • Orientation: Palm sideways or slightly forward.
      • Non-manual: Neutral face in citation form (dictionary form).
  • FATHER (ASL)
      • Handshape: Same as MOTHER.
      • Location: Thumb touches the forehead.
      • Movement: Similar light tap.
      • Orientation: Similar.
      • Non-manual: Neutral face in citation form.

Here, location alone (chin vs. forehead) distinguishes the two signs.

Example B: Minimal pair with movement

In ASL, there is a well-known pair often described as:

  • `SIT` vs. `CHAIR`

Both use similar handshapes and location, but:

  • `SIT`: Single movement.
  • `CHAIR`: Repeated movement.

This is like English tap vs. tapping—a small change in movement changes the meaning.

> Takeaway: You can treat handshape, location, movement, orientation, and non-manuals as organized, rule-governed units, not random gestures.
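To make the rule-governed structure concrete, the five-parameter analysis can be sketched as a small data model. This is an illustrative sketch only, not a linguistic tool: the parameter values (like "open-5" or "chin") are informal labels invented for this example, not standard notation.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class Sign:
    """A sign modeled as its five phonological parameters."""
    gloss: str
    handshape: str
    location: str
    movement: str
    orientation: str
    non_manual: str

def differing_parameters(a: Sign, b: Sign) -> list[str]:
    """List the parameters in which two signs differ (ignoring the gloss)."""
    da, db = asdict(a), asdict(b)
    return [k for k in da if k != "gloss" and da[k] != db[k]]

# MOTHER and FATHER share everything except location (chin vs. forehead):
mother = Sign("MOTHER", "open-5", "chin", "tap", "palm-side", "neutral")
father = Sign("FATHER", "open-5", "forehead", "tap", "palm-side", "neutral")
print(differing_parameters(mother, father))  # ['location']

# SIT and CHAIR form a minimal pair in movement (single vs. repeated):
sit = Sign("SIT", "bent-U", "neutral-space", "single", "palm-down", "neutral")
chair = Sign("CHAIR", "bent-U", "neutral-space", "repeated", "palm-down", "neutral")
print(differing_parameters(sit, chair))  # ['movement']
```

A pair of signs that differ in exactly one parameter is a minimal pair, just like bat/pat in English.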

4. Parameter Hunt: Break a Sign into Parts

Do this as a thought exercise. You can use any sign language you know or a video dictionary (for example, an online ASL dictionary) if available.

  1. Pick one sign you know or can see in a video (for example, a sign glossed as `BOOK` or `HOUSE`).
  2. Pause the video or visualize it, then answer:
  • Handshape: Which fingers are extended? Bent? Are there one or two hands?
  • Location: Is the sign near the head, torso, or in neutral space in front of the body?
  • Movement: Is there a straight movement, a circle, a tap, a repeated motion?
  • Orientation: Where does the palm face (up, down, toward you, away from you)?
  • Non-manuals: Is there a specific facial expression, head tilt, or body lean that seems required?
  3. Now imagine changing just one parameter:
  • What happens if you keep everything the same but change location?
  • Or keep everything the same but change movement (e.g., from single to repeated)?
  4. Reflect:
  • Does the sign turn into a different sign in that language?
  • Or does it become incorrect / meaningless?

Write down your analysis in this simple template:

```text

Sign gloss:

Handshape:

Location:

Movement:

Orientation:

Non-manuals:

Possible change (which parameter?):

Result (new sign / nonsense / unclear):

```

5. Space as Grammar: Pronouns and Reference Points

One of the biggest differences from spoken languages: sign languages use physical space as part of grammar, not only for pointing.

Pronouns in space

In many sign languages (including ASL):

  • The signer points to themselves for something like `I / ME`.
  • The signer points to another person for something like `YOU`.
  • For someone who is not physically present, the signer can assign a location in space.

Example (described in ASL-style usage):

  1. The signer introduces a person: `WOMAN`, then points to a spot on the right side of the signing space.
  2. From then on, that right-side location stands for that woman.
  3. Later, a simple point to that location works like a third-person pronoun (`SHE` in English), even if the actual person is not there.

This process is often called establishing a referent in space. The space itself becomes part of the discourse memory.

> Key idea: Space is not just physical; it is grammatical space.
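As a rough computational analogy (a sketch of the idea, not a claim about how signers actually represent space), establishing a referent works like writing to a small symbol table, and a later point to that locus works like a lookup:

```python
# Sketch: signing space as discourse memory. Locus labels like "right"
# and "left" are informal; referents are whatever was introduced there.
loci: dict[str, str] = {}

def establish(locus: str, referent: str) -> None:
    """Introduce a referent and assign it a point in signing space."""
    loci[locus] = referent

def point_to(locus: str) -> str:
    """A later point to an established locus acts like a pronoun."""
    return loci.get(locus, "<no referent established here>")

establish("right", "WOMAN")   # sign WOMAN, then point to the right
print(point_to("right"))      # a later point resolves to WOMAN
```

The point itself carries no meaning until a referent has been established at that locus, which mirrors how a pronoun needs an antecedent in the discourse.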

6. Verb Agreement and Movement in Space

Many sign languages use movement through space to show who did what to whom. This is often called agreement or directionality.

Using ASL-style descriptions again:

  1. The signer points to themselves (location A) to represent `I`.
  2. The signer points to a location on their right (location B) to represent `YOU`.

Now consider a verb like `GIVE` in ASL:

  • A sign glossed as `GIVE(A→B)` can move from location A (the signer) toward location B (the addressee).
  • A sign glossed as `GIVE(B→A)` moves from B toward A.

So:

  • `GIVE(A→B)` ≈ "I give to you."
  • `GIVE(B→A)` ≈ "You give to me."

The start and end points of the movement encode subject and object roles.

Not all verbs in all sign languages behave this way, but for those that do, the path of movement is grammatically meaningful, similar to verb endings in spoken languages (like give vs. gives vs. given).
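The agreement pattern can be sketched in the same style: the start and end loci of the verb's movement fill the subject and object slots. The locus labels A and B follow the example above; everything else here is illustrative, not standard notation.

```python
# Sketch: a directional verb such as GIVE, whose movement path
# (start locus -> end locus) encodes subject and object roles.
referents = {"A": "I", "B": "you"}  # loci established earlier in discourse

def directional_verb(verb: str, start: str, end: str) -> dict:
    """Map the movement path onto grammatical roles."""
    return {"verb": verb, "subject": referents[start], "object": referents[end]}

print(directional_verb("GIVE", "A", "B"))  # subject: I, object: you
print(directional_verb("GIVE", "B", "A"))  # subject: you, object: I
```

Reversing the path swaps the roles without changing the handshape or location of the verb itself, which is why the movement path is described as grammatically meaningful.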

7. Facial Grammar: Not Just Emotion

Facial expressions and body posture in sign languages are not only about emotion. They are also grammatical markers.

Common grammatical uses (documented in many sign languages, including ASL, BSL, and others):

  1. Yes–No questions
  • Often signers use raised eyebrows, slightly forward head tilt, and sometimes widened eyes.
  • This non-manual pattern can stretch over the whole question.
  2. Wh-questions (who, what, where, why, how)
  • Often involve furrowed / lowered eyebrows, sometimes a slight head tilt.
  3. Negation
  • A head shake can function as a grammatical negation marker, sometimes spreading over part of the sentence.
  4. Conditionals and topics
  • A raised eyebrow plus a slight head tilt on the first part of a sentence can mark something like "IF" or a topic (roughly: "As for X...").

These patterns are language-specific:

  • A facial expression that looks like "confusion" to a hearing observer may actually be a standard wh-question marker.

> Key idea: In sign languages, the face and body are part of the sentence structure, not just add-ons.
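One way to picture this: the non-manual channel runs in parallel with the manual signs and can mark the sentence type. A minimal sketch, with informal marker names standing in for the patterns described above:

```python
# Sketch: non-manual signals as a parallel grammatical channel.
# Marker names are informal labels, not linguistic notation.
MARKER_MEANING = {
    "raised_brows": "yes-no question",
    "furrowed_brows": "wh-question",
    "head_shake": "negation",
}

def sentence_type(active_markers: set[str]) -> str:
    """Read the sentence type off the non-manual channel."""
    for marker, meaning in MARKER_MEANING.items():
        if marker in active_markers:
            return meaning
    return "declarative"

# Same manual signs, different non-manuals, different sentence types:
print(sentence_type({"raised_brows"}))  # yes-no question
print(sentence_type(set()))             # declarative
```

The key point the sketch captures: changing only the non-manual layer, with the manual signs held constant, changes the grammar of the sentence.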

8. Quick Check: Parameters and Grammar

Test your understanding of the building blocks and the role of space and facial grammar.

Which statement best reflects how sign languages use space and facial expressions?

  1. Space is only used for pointing to real objects, and facial expressions are only emotional.
  2. Space and facial expressions are part of the grammatical system, marking things like pronouns, agreement, and questions.
  3. Space shows grammar, but facial expressions are always optional and have no grammatical role.

Answer: 2) Space and facial expressions are part of the grammatical system, marking things like pronouns, agreement, and questions.

Research in many sign languages shows that signers use specific locations in space for referents (like pronouns) and move verbs between these locations for agreement, while non-manual signals such as eyebrow position, head movement, and mouth shapes can mark questions, negation, and other grammatical categories. So option 2 is correct.

9. The Brain and Sign Languages: What Neuroscience Shows

Over the past few decades, brain imaging (fMRI, PET, EEG) and lesion studies have built a strong picture of how sign and spoken languages are processed.

Key findings (consistent across many studies up to at least the early 2020s):

  1. Same core language areas
  • For Deaf native signers, using a sign language activates classic left-hemisphere language regions (often called Broca's area and Wernicke's area, plus surrounding networks), similar to hearing speakers using spoken language.
  • Damage to these regions can cause aphasia in sign, with parallel effects to spoken aphasia (e.g., difficulty producing grammatically correct signs, or understanding complex signed sentences).
  2. Modality-specific plus shared networks
  • Sign languages also engage visual and spatial brain areas, because they are seen and produced in space.
  • But the core grammatical processing overlaps strongly with spoken languages, supporting the idea that the brain has a modality-independent language system.
  3. Early exposure matters
  • Studies of Deaf adults who got access to sign language late in childhood show long-lasting effects on grammatical processing, similar to late exposure to spoken language.
  • This supports the idea of a sensitive period for language, regardless of whether the language is signed or spoken.

Overall conclusion: Sign languages are full human languages in the brain, not "workarounds" or "gestures." They are processed in largely the same language networks as spoken languages, with additional recruitment of visual-spatial regions.

10. Apply It: Analyze a Signed Sentence

If you have access to a short video of a signed sentence (in any sign language), try this structured analysis. If not, imagine a simple ASL-style sentence like:

> `WOMAN` (right location established) `MAN` (left location established) `GIVE(left→right)`

  1. Identify parameters for one key sign (for example, the verb `GIVE`):
  • Handshape: What shape do the hands take?
  • Location: Where does the sign start and end in space?
  • Movement: What is the path of the movement?
  • Orientation: Does the palm orientation change as it moves?
  • Non-manuals: Are the eyebrows raised/lowered? Is the head tilted or shaking?
  2. Interpret space grammatically:
  • Who is the giver and who is the receiver based on start and end locations?
  • How were those locations established earlier in the sentence (e.g., by first mentioning WOMAN and MAN and pointing to the right/left)?
  3. Interpret non-manuals grammatically:
  • Do raised eyebrows suggest a yes–no question?
  • Do lowered eyebrows suggest a wh-question?
  • Is there a head shake that might signal negation?
  4. Summarize in one or two sentences:

```text

In this sentence, space is used to show _.

Non-manuals are used to mark _.

Together, this shows that sign language grammar is _.

```

11. Flashcard Review: Key Concepts

Flip the cards (mentally or with a partner) to review the main terms from this module.

Handshape
The configuration of the fingers and thumb in a sign (which are extended, bent, spread, etc.). A core phonological parameter in sign languages.
Location
Where a sign is produced in relation to the body or signing space (e.g., forehead, chin, chest, neutral space). Changing location can change meaning.
Movement
The path or type of motion in a sign (straight, circular, repeated, tapping, etc.). Movement often carries grammatical or lexical contrasts.
Orientation
The direction the palm and/or fingers face during a sign (up, down, toward signer, away, etc.). Orientation differences can distinguish signs.
Non-manual signals
Linguistic use of facial expressions, mouth shapes, eye gaze, head and body movements in sign languages. They mark grammar (questions, negation, topics) as well as affect.
Spatial reference (locus)
A specific location in signing space associated with a referent (person, object, idea). Later points or movements to that locus function like pronouns or agreement markers.
Verb agreement / directionality
A pattern where the movement of a verb sign begins and ends at locations associated with subject and object, encoding who acts on whom.
Facial grammar
Systematic use of eyebrows, eye gaze, mouth, and head position to mark grammatical categories such as yes–no questions, wh-questions, topics, and conditionals.
Modality-independent language system
The idea, supported by brain research, that core language areas process grammar and structure similarly for both signed and spoken languages, despite different sensory modalities.

12. Final Check: Are Sign Languages Equally Complex?

One last question to connect linguistic structure and brain evidence.

Which combination of evidence best supports the claim that sign languages are full, complex human languages?

  1. They use hand movements and can be translated into spoken languages word-for-word.
  2. They have systematic phonological parameters and use space and non-manuals grammatically, and brain studies show they activate core language areas similar to spoken languages.
  3. They are iconic and therefore easier to learn than spoken languages.

Answer: 2) They have systematic phonological parameters and use space and non-manuals grammatically, and brain studies show they activate core language areas similar to spoken languages.

The strongest evidence comes from internal structure (phonology, grammar, use of space and facial grammar) and external evidence from neuroscience (activation of core language networks and similar patterns of aphasia). This is captured in option 2.

Key Terms

Aphasia
A language disorder caused by brain damage (often in left-hemisphere language regions) that can affect production and comprehension in both spoken and signed languages.
Location
The place on or near the body, or in the signing space, where a sign is articulated (e.g., forehead, chin, chest, neutral space).
Modality
The sensory channel through which a language is expressed and perceived (e.g., auditory–vocal for spoken languages, visual–manual for sign languages).
Movement
The type and path of motion in a sign (straight, circular, repeated, tapping, etc.), which can change the meaning or grammatical function.
Handshape
The configuration of the hand(s) in a sign, including which fingers are extended, bent, or spread. A key phonological parameter in sign languages.
Orientation
The direction the palm and/or fingers face during a sign (e.g., up, down, toward the signer, away from the signer).
Facial grammar
The rule-governed use of eyebrows, eyes, mouth, and head position to mark grammatical categories such as questions, negation, topics, and conditionals.
Non-manual signals
Facial expressions, mouth shapes, eye gaze, head movements, and body posture used linguistically in sign languages to mark grammar and affect.
Spatial reference (locus)
A specific point in signing space associated with a referent; later points or verb movements to that point function like pronouns or agreement markers.
Phonology (in sign languages)
The level of linguistic structure dealing with contrastive form units (handshape, location, movement, orientation, and some non-manuals) that distinguish signs but do not carry meaning on their own.
Verb agreement (directionality)
A grammatical pattern where the movement of a verb sign begins and ends at spatial locations associated with its arguments (subject, object), encoding who does what to whom.
Modality-independent language system
The concept that core language networks in the brain handle grammar and structure for both signed and spoken languages, even though they use different modalities.