Agents, Subagents, and Commands in OpenCode: When to Use What

Part 3 of Customizing OpenCode. See also: Custom Tools, Shell Scripts, and The Full Picture.

OpenCode gives you three ways to customize how the AI works: agents, subagents, and commands. They overlap in some ways, which can be confusing. Here's how I think about them and when to use each.

The Quick Version

  • Agents — Persistent AI personalities with specific system prompts and tool access. You switch between them.
  • Subagents — Specialists that primary agents can delegate to. They run in isolation and return results.
  • Commands — Shortcuts that inject a prompt template. They trigger an action, not a personality.

Agents: Your Primary Assistants

OpenCode comes with two built-in agents:

  • Build — Full access, makes changes
  • Plan — Read-only, analyzes without modifying

You switch between them with Tab. Each has its own system prompt and tool permissions.

Creating a Custom Agent

Define agents as markdown files in .opencode/agent/ (project) or ~/.config/opencode/agent/ (global).

# .opencode/agent/reviewer.md
---
description: Code review specialist focused on quality and maintainability
mode: primary
model: anthropic/claude-sonnet-4-20250514
tools:
  write: false
  edit: false
  bash: false
---

You are a senior code reviewer. Focus on:
- Code clarity and readability
- Potential bugs and edge cases
- Performance implications
- Security concerns

Provide constructive feedback without making direct changes.
Ask clarifying questions when intent is unclear.

Now you can Tab to this agent when you want a review perspective.

Key Agent Options

description: When to use this agent (shown in UI)
mode: primary | subagent | all
model: provider/model-name
temperature: 0.7
tools:
  read: true
  write: false
  bash: false
  mcp:*: false  # Wildcards work
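
Putting those options together, here's a hedged sketch of a low-temperature, read-only analyst (analyst.md is a hypothetical file, not one of the examples later in this post):

# .opencode/agent/analyst.md
---
description: Read-only analyst for exploring unfamiliar codebases
mode: primary
model: anthropic/claude-sonnet-4-20250514
temperature: 0.2
tools:
  read: true
  grep: true
  write: false
  bash: false
  mcp:*: false
---

Explain code structure and behavior. Never modify anything.

The low temperature keeps explanations consistent, and the tool block makes the read-only contract explicit instead of relying on the prompt alone.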

Subagents: Isolated Specialists

Subagents are different. They don't take over your session—they run in isolation, do a specific task, and return results. The primary agent's context stays clean.

Why Subagents Matter

When your Build agent needs to do something specialized—security audit, test generation, documentation—it can delegate to a subagent. The subagent gets its own context, does its thing, and hands back the result. Your main conversation doesn't get polluted with all that intermediate work.

Creating a Subagent

# .opencode/agent/security-auditor.md
---
description: Analyzes code for security vulnerabilities
mode: subagent
tools:
  read: true
  grep: true
  glob: true
  write: false
---

You are a security specialist. When invoked:

1. Identify the code to analyze
2. Check for common vulnerabilities (injection, auth issues, data exposure)
3. Review dependencies for known CVEs
4. Provide a severity-ranked list of findings
5. Suggest specific fixes

Be thorough but concise. Focus on actionable findings.

Invoking Subagents

Two ways:

Manual — Use @ mention in the chat:

@security-auditor review the authentication module

Automatic — The primary agent decides to delegate based on the subagent's description. If your Build agent sees a security-related task and knows about security-auditor, it may invoke it automatically.
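
That routing runs on descriptions, so write them like routing rules: what the subagent does and when to invoke it. A hedged sketch of the docs-writer referenced below (hypothetical contents):

# .opencode/agent/docs-writer.md
---
description: Writes and updates documentation. Invoke for READMEs, docstrings, and API references.
mode: subagent
tools:
  read: true
  write: true
  bash: false
---

You write documentation. Match the project's existing tone, structure, and terminology.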

In your primary agent's system prompt, you can guide this:

# .opencode/agent/build.md
---
description: Main development agent
mode: primary
---

When security review is needed, delegate to: security-auditor
When writing tests, delegate to: test-writer
When documenting code, delegate to: docs-writer

Note: In system prompts, reference subagents by name without the @ prefix.

Commands: Prompt Shortcuts

Commands are simpler. They're just prompt templates you trigger with /commandname.

# .opencode/command/test.md
---
description: Run tests and analyze failures
agent: build
---

Run the full test suite:

!`npm test`

Analyze any failures and suggest fixes.

Now /test injects that prompt with the live test output.

Commands Can Use Subagents

Here's where it gets interesting. You can configure a command to run as a subtask:

# .opencode/command/security-check.md
---
description: Run security audit
agent: security-auditor
subtask: true
---

Audit the current codebase for security vulnerabilities.
Focus on: $ARGUMENTS

With subtask: true, this runs in isolation even if security-auditor is configured as a primary agent. Your main context stays clean.

Agents vs Commands: When to Use Which

Use an Agent When:

  • You want a persistent personality that shapes all interactions
  • You need specific tool restrictions (read-only reviewer, no-bash analyst)
  • You're switching modes of work (building vs planning vs reviewing)
  • The AI needs ongoing context about its role

Use a Command When:

  • You have a repeatable prompt you run often
  • You want to inject dynamic content (shell output, file contents)
  • You need a one-shot action, not a personality shift
  • You want to trigger a subagent with specific arguments

Use a Subagent When:

  • A task requires specialized focus (security, testing, docs)
  • You want context isolation — keep the main conversation clean
  • Multiple specialists need to collaborate on a complex task
  • You're building workflows where agents hand off to each other

Practical Examples

Example 1: Code Review Workflow

Agent: A read-only reviewer you switch to

# .opencode/agent/reviewer.md
---
description: Code review mode
mode: primary
tools:
  write: false
  edit: false
---

Review code for quality, bugs, and maintainability.
Never make changes directly.

Command: Quick review of staged changes

# .opencode/command/review-staged.md
---
description: Review staged git changes
agent: reviewer
---

Review these staged changes:

!`git diff --staged`

Provide feedback on code quality and potential issues.

Example 2: Test Generation Pipeline

Subagent: Test writer specialist

# .opencode/agent/test-writer.md
---
description: Generates comprehensive tests
mode: subagent
tools:
  read: true
  write: true
---

You write tests. When invoked:
1. Analyze the code to understand behavior
2. Identify edge cases
3. Generate tests with good coverage
4. Use the project's existing test patterns

Command: Generate tests for a specific file

# .opencode/command/gen-tests.md
---
description: Generate tests for a file
agent: test-writer
subtask: true
---

Generate comprehensive tests for: @$1

Match the existing test style in this project.

Usage: /gen-tests src/utils/parser.ts

Example 3: Multi-Agent Feature Development

Build agent delegates to specialists:

# .opencode/agent/build.md
---
description: Main development agent
mode: primary
---

You build features. For complex tasks, delegate:
- Security concerns → security-auditor
- Test coverage → test-writer  
- API documentation → api-documenter

Coordinate their outputs into a cohesive implementation.

The subagents handle their domains; Build synthesizes the results.

Custom Tools: The Missing Piece

Agents and commands control how the AI thinks. Custom tools control what it can do. They work together.

If you haven't read my previous post on custom tools, here's the short version: you write shell scripts, then wrap them as tools the LLM can call.

Giving Agents Specialized Tools

You can restrict which tools an agent can access. Combined with custom tools, this gets powerful.

Example: A deployment agent that can only use your deploy script:

# .opencode/agent/deployer.md
---
description: Handles deployments to staging and production
mode: subagent
tools:
  read: true
  deploy: true    # Your custom tool
  write: false
  bash: false     # No arbitrary commands
---

You handle deployments. Use the deploy tool to push changes.
Verify the environment and version before deploying.
Never run arbitrary bash commands.

The custom tool (from .opencode/tool/deploy.ts):

import { tool } from "@opencode-ai/plugin"

export default tool({
  description: "Deploy to staging or production",
  args: {
    // The schema validates the LLM's input: only these two environments are accepted
    environment: tool.schema.enum(["staging", "production"]),
    tag: tool.schema.string().optional(),
  },
  async execute(args) {
    // Default to "latest", then hand off to the tested, human-runnable script
    const tag = args.tag || "latest"
    return await Bun.$`./.opencode/scripts/deploy.sh ${args.environment} ${tag}`.text()
  },
})

Now your deployer agent has one job and one tool to do it. It can't write files or run arbitrary bash—only deploy through your controlled script.

Commands That Use Custom Tools

Commands can trigger agents that have access to specific tools:

# .opencode/command/deploy-staging.md
---
description: Deploy current branch to staging
agent: deployer
subtask: true
---

Deploy the current branch to staging.
Use the latest git tag if available, otherwise use "latest".

Running /deploy-staging triggers the deployer subagent, which uses your custom deploy tool, which runs your tested shell script. Clean chain of control.

The Full Picture

Here's how it all connects:

  1. Shell scripts — Human-runnable, tested, version-controlled
  2. Custom tools — Wrap scripts so the LLM can call them with validated args
  3. Agents/Subagents — Control which tools are available and how the AI approaches tasks
  4. Commands — Shortcuts that trigger specific agents with specific prompts

Each layer adds control without adding complexity to the layers below.
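
As a concrete instance of layers 1 and 2, here's a hedged sketch of the run-tests.ts wrapper from the structure below. It follows the same pattern as deploy.ts above; the path argument and the script's interface are assumptions:

import { tool } from "@opencode-ai/plugin"

// Hypothetical wrapper around .opencode/scripts/run-tests.sh
export default tool({
  description: "Run the test suite, optionally scoped to one path",
  args: {
    // Assumed argument: a file or directory passed through to the script
    path: tool.schema.string().optional(),
  },
  async execute(args) {
    const target = args.path ?? "."  // default: run everything
    return await Bun.$`./.opencode/scripts/run-tests.sh ${target}`.text()
  },
})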

Directory Structure

Here's how I organize it:

.opencode/
├── scripts/             # Shell scripts (human-runnable)
│   ├── deploy.sh
│   ├── db-snapshot.sh
│   └── run-tests.sh
├── tool/                # Tool wrappers (LLM-callable)
│   ├── deploy.ts
│   ├── db-snapshot.ts
│   └── run-tests.ts
├── agent/
│   ├── reviewer.md      # Primary: code review mode
│   ├── deployer.md      # Subagent: deployment specialist
│   ├── security-auditor.md  # Subagent: security specialist
│   └── test-writer.md   # Subagent: test generation
└── command/
    ├── review-staged.md # Quick review of staged changes
    ├── deploy-staging.md    # Deploy to staging
    └── security-check.md    # Run security audit

Tips

Keep subagent descriptions clear. The primary agent uses them to decide when to delegate. Vague descriptions lead to poor routing.

Use subtask: true on commands when you want isolation. Otherwise the command runs in your current context.

Don't over-specialize. A few well-defined subagents beat a dozen narrow ones. Start broad, split when needed.

Test invocation manually first. Use @subagent-name to verify the subagent works before relying on automatic delegation.

Match models to tasks. Fast models (Haiku) for simple subagents, capable models (Sonnet/Opus) for complex reasoning.

# Fast model for simple file operations
model: anthropic/claude-3-5-haiku-20241022

# Capable model for security analysis
model: anthropic/claude-sonnet-4-20250514

Wrapping Up

  • Agents shape how the AI thinks across your session
  • Subagents handle specialized tasks in isolation
  • Commands trigger specific prompts with dynamic content

Start with the built-in agents. Add commands for your common workflows. Introduce subagents when you need specialists that won't clutter your main context.

Want to see it all come together? Check out The Full Picture where I build a complete test-and-fix workflow using all four layers.


Questions? Find me on Twitter or drop me an email.
