
Chapter 5 of 8

Human–AI Collaboration Patterns in Coding

Analyze concrete collaboration patterns between developers and AI, including when to rely on AI, when to override it, and how to combine human strengths with machine strengths.

15 min read

1. Overview: How Humans and AI Actually Code Together

In this module, you’ll move from "AI is a magic autocomplete" to a more precise view: AI is a collaborator with specific strengths and weaknesses.

By the end, you should be able to:

  • Identify which coding tasks are well-suited to AI vs. manual work
  • Use an iterative prompting loop to refine AI-generated code
  • Describe at least three collaboration patterns (autocomplete, pair-programmer, agent)
  • Reason about how AI changes your cognitive load and attention while coding

We’ll assume you’ve already seen tools like GitHub Copilot, ChatGPT-based coding models, or IDE-integrated assistants (as of 2025 these are common in VS Code, JetBrains IDEs, and cloud IDEs).

Key idea for this module:

> Treat AI as a junior teammate who is fast and tireless, but not fully trustworthy and often missing domain context.

You remain the system designer, reviewer, and decision-maker.

2. Three Core Collaboration Patterns

There are many ways to work with AI, but three patterns show up most in real projects:

1. AI-as-Autocomplete (Inline Assistant)

  • Where it lives: Inside your editor, suggesting code as you type.
  • Typical use: Short completions, boilerplate, common idioms.
  • You control: Every accepted line; you can ignore or edit suggestions.

2. AI-as-Pair-Programmer (Conversational Helper)

  • Where it lives: Chat panel in the IDE or browser-based assistant.
  • Typical use: Explaining code, designing APIs, debugging, refactoring, learning unfamiliar libraries.
  • You control: Problem framing, which suggestions to adopt, and integration into your codebase.

3. AI-as-Agent (Task-Level or Multi-Step Executor)

  • Where it lives: Tools that can run commands, edit multiple files, call APIs, or orchestrate other tools (e.g., some 2024–2025 agent-style coding tools).
  • Typical use: Larger tasks like generating a new module, migrating code between frameworks, or setting up CI configs.
  • You control: Goals, constraints, permissions (e.g., which repos or commands it can access), and final review.

In the next steps, we’ll connect these patterns to task suitability and division of labor.

3. Task Suitability: When to Rely on AI vs. Manual Coding

Think in terms of task type and risk level rather than "always use AI" or "never use AI".

A. Tasks AI is usually strong at

  1. Boilerplate and repetitive structures
  • Example: Writing CRUD endpoints in a familiar web framework.
  • Why: Patterns are common in training data; structure is predictable.
  • Pattern: AI-as-autocomplete or pair-programmer.
  2. Glue code and integration scaffolding
  • Example: Setting up a REST client, wiring a logger, writing DTOs.
  • Why: Follows standard library or framework patterns.
  3. Mechanical refactors with clear rules
  • Example: "Convert all callbacks in this file to async/await".
  • Why: Well-defined transformation; easy to verify by tests.
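
The "easy to verify" point is what makes mechanical refactors a good fit: you can check the old and new versions against the same inputs. A minimal sketch in Python (the function names and the refactor itself are illustrative, not from a real codebase):

```python
# Hypothetical before/after of a mechanical refactor: an explicit
# accumulation loop rewritten as a list comprehension. Behavior is
# unchanged, so equivalence is easy to verify with a quick check.

def squares_old(numbers):
    result = []
    for n in numbers:
        result.append(n * n)
    return result

def squares_new(numbers):
    # AI-suggested rewrite: same output, more idiomatic
    return [n * n for n in numbers]

# Verifying the refactor on a few representative inputs
for case in ([], [1, 2, 3], [-2, 0, 5]):
    assert squares_old(case) == squares_new(case)
```

The same idea scales up: run the existing test suite against the AI's rewritten version before accepting it.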

B. Tasks AI is mixed at (use with caution)

  1. Complex business or domain-specific logic
  • Example: Tax calculation rules for a specific country in 2025, custom grading rules for your university.
  • Risk: AI may hallucinate rules or miss edge cases.
  • Strategy: You design the logic; AI helps with implementation details.
  2. Security-sensitive or compliance-related code
  • Example: Auth flows, cryptography usage, GDPR/FERPA-related data handling.
  • Risk: AI often suggests outdated or insecure patterns.
  • Strategy: Use AI only for drafts; validate against current official docs and security guidelines.

C. Tasks better done (mostly) manually

  1. Architecture and high-level design decisions
  • Example: Choosing between microservices vs. monolith, defining service boundaries.
  • Reason: Requires deep context, trade-offs, and long-term thinking.
  2. Final acceptance of critical algorithms
  • Example: Grading logic, financial calculations, safety-related code.
  • Reason: You are accountable; AI is not. You must fully understand and validate the algorithm.

Rule of thumb:

  • Use AI to generate and explore.
  • Use your own judgment to decide and validate.

4. Example: AI-as-Autocomplete vs Manual Control

Below is a simple example where AI-as-autocomplete is helpful, but you still need to validate the result.

Imagine you’re writing a function to paginate a list of items in Python.

You start typing:

```python
from typing import List, Any


def paginate(items: List[Any], page: int, page_size: int) -> List[Any]:
    """Return a slice of items for the given page (1-based)."""
```

At this point, an AI autocomplete might suggest:

```python
    start = (page - 1) * page_size
    end = start + page_size
    return items[start:end]
```

This suggestion is:

  • Great for: Quickly writing correct indexing boilerplate.
  • Still your job to check:
  • What if `page` is 0 or negative?
  • What if `page_size` is 0 or huge?
  • Do you need to raise errors or just return an empty list?

You might refine it manually:

```python
from typing import List, Any


def paginate(items: List[Any], page: int, page_size: int) -> List[Any]:
    """Return a slice of items for the given page (1-based).

    Raises:
        ValueError: If page or page_size is not positive.
    """
    if page <= 0 or page_size <= 0:
        raise ValueError("page and page_size must be positive integers")
    start = (page - 1) * page_size
    end = start + page_size
    return items[start:end]
```

Pattern:

  • AI: Writes the obvious core.
  • You: Add validation, edge cases, and documentation.
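
One way to lock in that division of labor is to pin the behavior down with a few quick checks, including the edge cases you identified. A sketch using the refined `paginate` (repeated here so the snippet is self-contained):

```python
from typing import List, Any


def paginate(items: List[Any], page: int, page_size: int) -> List[Any]:
    """Return a slice of items for the given page (1-based)."""
    if page <= 0 or page_size <= 0:
        raise ValueError("page and page_size must be positive integers")
    start = (page - 1) * page_size
    return items[start:start + page_size]


items = list(range(10))

# Happy path: pages slice cleanly, last page may be short
assert paginate(items, 1, 3) == [0, 1, 2]
assert paginate(items, 4, 3) == [9]

# Past the end: an empty list, not an error
assert paginate(items, 5, 3) == []

# Invalid inputs: the validation you added fires
try:
    paginate(items, 0, 3)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for page=0")
```

Once these checks exist, you can let AI iterate on the implementation while the checks guard the behavior you care about.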

5. AI-as-Pair-Programmer: Iterative Prompting & Refinement Loop

When you use AI as a pair-programmer, you’re not just asking once; you’re running a loop.

The 5-step refinement loop

  1. Frame the task clearly
  • Include: language, framework, constraints, and context.
  • Example: "In TypeScript with React 18, create a reusable pagination component that works with server-side data fetching."
  2. Get an initial draft
  • Let the AI produce code or a design.
  • Don’t expect it to be perfect; expect a starting point.
  3. Run and observe
  • Execute tests, run the app, or mentally simulate.
  • Capture concrete issues: errors, missing features, unclear code.
  4. Give targeted feedback
  • Replace vague prompts like "Improve this" with precise ones:
  • "Remove external state library; use React’s built-in hooks only."
  • "Add TypeScript types for props and state."
  5. Repeat until acceptable
  • Stop when:
  • You understand the code.
  • Tests (or manual checks) pass.
  • The design fits your project’s style.

Important:

  • Each loop should narrow the gap between current and desired code.
  • You manage the loop; the AI only responds to your steering.
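
The loop itself is plain control flow. A sketch in Python, where `ask_ai` and `run_checks` are hypothetical stand-ins for whatever assistant and test suite you actually use:

```python
# Illustrative skeleton of the 5-step refinement loop. ask_ai and
# run_checks are hypothetical stand-ins, not a real API.

def ask_ai(prompt: str, feedback: str = "") -> str:
    # Stand-in: pretend the AI addresses whatever the feedback names
    if feedback:
        return f"code({prompt}; fixed: {feedback})"
    return f"code({prompt})"

def run_checks(code: str) -> list:
    # Stand-in: report issues until the feedback has been applied
    return [] if "fixed" in code else ["missing input validation"]

prompt = "Paginate a list, 1-based pages"   # 1. frame the task clearly
draft = ask_ai(prompt)                      # 2. get an initial draft

for _ in range(5):                          # bound the number of loops
    issues = run_checks(draft)              # 3. run and observe
    if not issues:                          # 5. stop when acceptable
        break
    feedback = "; ".join(issues)            # 4. give targeted feedback
    draft = ask_ai(prompt, feedback)
```

The bounded `for` loop mirrors good practice with a real assistant: if you are still iterating after several rounds, step back and reframe the task instead of looping forever.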

6. Thought Exercise: Designing Your Prompt & Feedback

Imagine you’re building a small web API in Node.js/Express to manage a todo list.

You want to use AI as a pair-programmer.

Part 1 – Initial prompt (design it)

Write down (mentally or in notes) an initial prompt to an AI assistant that:

  • Specifies the language and framework
  • States the goal
  • Mentions constraints (e.g., no database yet, just in-memory storage)

Example structure (don’t copy verbatim; adapt it):

> "I’m using Node.js with Express 4. Create a simple REST API for todos with in-memory storage only. Support endpoints to list, create, update, and delete todos. Use modern JavaScript (ES modules) and include basic input validation. Don’t use any database or ORM."

Part 2 – Feedback after first draft

Suppose the AI’s first version:

  • Uses `require`/CommonJS instead of ES modules
  • Lacks input validation
  • Doesn’t handle 404 errors when a todo isn’t found

Write a follow-up prompt that:

  • Points out these issues explicitly
  • Asks for a revised version

Example structure:

> "Revise the previous code: (1) use ES module syntax (`import`/`export`), (2) add input validation so `title` is a non-empty string, returning HTTP 400 when invalid, and (3) return HTTP 404 with a JSON error when a todo id is not found for GET/PUT/DELETE. Show the full updated server file."
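
The validation rules in that follow-up are language-agnostic. A Python sketch of the same two checks, expressed as plain functions returning `(status, body)` pairs (the function names and in-memory store are hypothetical, not an Express API):

```python
# Hypothetical sketch of the two validation rules from the follow-up
# prompt: non-empty title -> otherwise HTTP 400, unknown id -> HTTP 404.

todos = {1: {"id": 1, "title": "buy milk"}}  # stand-in for in-memory storage

def validate_title(payload: dict):
    """Return (400, error body) for an invalid title, or None if valid."""
    title = payload.get("title")
    if not isinstance(title, str) or not title.strip():
        return 400, {"error": "title must be a non-empty string"}
    return None

def get_todo(todo_id: int):
    """Return (200, todo) if found, else (404, error body)."""
    if todo_id not in todos:
        return 404, {"error": f"todo {todo_id} not found"}
    return 200, todos[todo_id]
```

Writing the rules down this concretely is exactly what makes the follow-up prompt effective: each check maps to one sentence of feedback.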

Reflect for a moment:

  • How did making the feedback concrete change the interaction?
  • How many loops do you think you’d need before you’d trust the code enough to run it in a small class project?

7. AI-as-Agent: Delegating Multi-Step Tasks Safely

Newer tools (2024–2025) can act as agents: they can edit multiple files, run tests, or call external tools. This is powerful but risky if you let them act unchecked.

Typical agent-style tasks

  • Project scaffolding: Set up a new app with routing, linting, tests.
  • Bulk changes: Migrate from one logging library to another across many files.
  • Config & automation: Generate CI workflows (e.g., GitHub Actions), Dockerfiles, or basic infrastructure-as-code templates.

Safe delegation pattern

  1. Define a clear, bounded goal
  • Good: "In this repo, replace all uses of `axios` with `fetch` in the `src/api` folder only."
  • Bad: "Refactor everything to be better."
  2. Set guardrails
  • Restrict which directories or commands the agent can touch.
  • Require human approval for file changes before commit.
  3. Review diffs like a code reviewer
  • Check for:
  • Incomplete migrations
  • Broken imports
  • Style or pattern regressions
  4. Run tests and sanity checks
  • Automated tests (unit/integration)
  • Linting and type checks
  5. Roll back easily
  • Use version control branches.
  • Never let agents commit directly to `main` without review.

Mindset:

  • The agent is a bulk editor, not an architect.
  • You stay in charge of scope, review, and merging.
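
To make "bounded goal plus review" concrete, here is a sketch of the kind of dry-run edit an agent might propose: it reports every change for review instead of writing anything. The `console.log` → `logger.info` mapping and all names are illustrative:

```python
import re

# Illustrative dry-run rewrite: propose replacing console.log(...) with
# logger.info(...) and report each change instead of applying it blindly.

LOG_CALL = re.compile(r"\bconsole\.log\(")

def propose_edits(source: str):
    """Return the rewritten text plus a human-reviewable change list."""
    changes = []
    new_lines = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        new_line = LOG_CALL.sub("logger.info(", line)
        if new_line != line:
            changes.append(f"line {lineno}: {line.strip()!r} -> {new_line.strip()!r}")
        new_lines.append(new_line)
    return "\n".join(new_lines), changes

snippet = 'console.log("start");\nconst x = 1;\nconsole.log(x);'
rewritten, changes = propose_edits(snippet)
# `changes` is what you review before anything is written to disk
```

Real agent tools are more capable than a regex, but the shape is the same: bounded scope in, reviewable diff out, human approval before commit.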

8. Quick Check: Task–Pattern Matching

Match each task to the most appropriate primary collaboration pattern, assuming a typical undergraduate project.

Question:

You need to migrate all `console.log` calls in a medium-sized TypeScript project to a custom `logger.info`/`logger.error` API, and update imports accordingly. You have tests and version control. What’s the best primary pattern to start with?

  A. AI-as-autocomplete inside the editor, changing each call manually as you encounter it
  B. AI-as-pair-programmer in chat, asking it for a single code example and then doing all changes by hand
  C. AI-as-agent to propose bulk edits across the codebase, followed by your manual review and running tests

Answer: C) AI-as-agent to propose bulk edits across the codebase, followed by your manual review and running tests

Using an **AI-as-agent** fits best for systematic, repetitive changes across many files. The agent can propose bulk edits, but you should still review diffs and run tests. Autocomplete or a single example from a pair-programmer is slower and more error-prone for a large, mechanical migration.

9. Division of Labor: Exploration, Prototyping, and Refactoring

You can think of coding work as three phases where AI and humans contribute differently:

1. Exploration (What could we do?)

  • AI strengths:
  • Quickly list options, libraries, and patterns.
  • Summarize docs or compare approaches.
  • Your strengths:
  • Understanding constraints (course requirements, team skills, deadlines).
  • Evaluating trade-offs and feasibility.

2. Prototyping (What might work?)

  • AI strengths:
  • Generate first-pass implementations.
  • Stub out modules, routes, or components.
  • Your strengths:
  • Deciding what to prototype.
  • Plugging prototypes into your actual system.
  • Ensuring the prototype matches real requirements.

3. Refactoring & Hardening (What will we ship?)

  • AI strengths:
  • Suggest refactors (e.g., extract functions, reduce duplication).
  • Convert patterns (e.g., callbacks → promises, class-based → function-based components).
  • Your strengths:
  • Enforcing style guides, architecture, and performance constraints.
  • Writing or maintaining tests.
  • Making final decisions on readability and maintainability.

Practical takeaway:

  • Let AI do wide exploration and fast drafts.
  • You do narrow selection, integration, and final polishing.

10. Cognitive Load and Attention in AI-Enhanced Workflows

AI changes what you pay attention to while coding.

How AI can reduce cognitive load

  • Less time on syntax and boilerplate.
  • Fewer context switches to Google/Stack Overflow for simple patterns.
  • Faster feedback when exploring unfamiliar APIs.

How AI can increase cognitive load

  • You must constantly evaluate suggestions:
  • Is this correct?
  • Does it match our style and constraints?
  • Is it secure and up-to-date?
  • You may be tempted to accept code you don’t fully understand.
  • Long AI outputs can be harder to hold in working memory.

Strategies to manage attention

  1. Chunk your tasks
  • Ask AI for small, focused changes instead of huge rewrites.
  2. Limit suggestion scope
  • In autocomplete, don’t accept large blocks you haven’t read.
  • In chat, ask for shorter answers when you’re overwhelmed.
  3. Use tests as cognitive offloading
  • Let tests check correctness while you focus on intent and design.
  4. Pause to summarize
  • After accepting AI code, paraphrase it in your own words:
  • "This function paginates items and throws on invalid page/page_size."
  • If you can’t summarize it, you don’t really understand it yet.

Your goal is to keep mental ownership of the code, even when AI writes many of the lines.

11. Review: Key Terms and Patterns

Flip these cards (mentally or with your own notes) to reinforce the core concepts from this module.

AI-as-autocomplete
A collaboration pattern where AI suggests small, inline code completions as you type, mainly for boilerplate and common idioms. You accept or reject suggestions line by line.
AI-as-pair-programmer
A conversational collaboration pattern where AI helps you design, explain, debug, and refactor code through back-and-forth prompts and feedback, similar to working with a human peer.
AI-as-agent
A more autonomous pattern where AI can perform multi-step tasks like editing many files, running tools, or orchestrating commands under your constraints and review.
Iterative prompting and refinement loop
A cycle of (1) framing a clear task, (2) getting an AI draft, (3) running/inspecting it, (4) giving targeted feedback, and (5) repeating until the result is acceptable and understood.
Task suitability
Evaluating whether a coding task is appropriate for AI assistance based on its nature (boilerplate vs domain logic), risk level, and ease of verification.
Cognitive load in AI-assisted coding
The mental effort required to understand, evaluate, and integrate AI-generated code, including the trade-off between reduced boilerplate work and increased need for critical review.

Key Terms

AI-as-agent
An AI system that can autonomously perform multi-step coding tasks, such as editing multiple files, running tests, or calling external tools, under human-defined goals and constraints.
Cognitive load
The amount of mental effort required to process information and make decisions; in coding, this includes understanding code, tracking state, and evaluating AI suggestions.
Boilerplate code
Repetitive, standard code required to set up or structure a program or framework, which rarely contains unique business logic.
Task suitability
An assessment of how appropriate a task is for AI assistance, considering complexity, domain specificity, risk, and how easily you can verify correctness.
AI-as-autocomplete
A pattern where an AI tool embedded in your editor suggests inline completions for code as you type, usually for small snippets and boilerplate.
Iterative prompting
The practice of repeatedly refining your prompts and providing feedback based on the AI’s previous responses to gradually improve the output.
AI-as-pair-programmer
Using AI in a chat-like interface to discuss code, get explanations, generate drafts, and iteratively refine solutions, similar to working with a human programming partner.
Domain-specific logic
Code that encodes rules, policies, or behaviors specific to a particular problem domain, organization, or application context.