Customizing OpenCode: A 4-Part Guide to Building AI Workflows That Actually Work
OpenCode is an open-source AI coding agent that runs in your terminal. Out of the box, it's good. With some customization, it becomes yours.
This series covers everything I've learned setting up OpenCode for real work—not toy examples, but workflows I actually use every day.
Why Customize?
OpenCode ships with sensible defaults. You can install it, point it at your project, and start coding. So why bother customizing?
Three reasons:
Control. The default agent can run any bash command. That's powerful, but maybe too powerful. Custom setups let you restrict what the AI can do—run tests but not delete files, read code but not modify it.
Consistency. Your team has workflows. Deploy scripts, test commands, linting rules. Wrapping these as tools means the AI uses the same processes you do, not whatever it invents on the fly.
Speed. Once you've built a workflow, triggering it is one keystroke. /test runs your tests and analyzes failures. /review checks your staged changes. No typing out instructions every time.
The Stack
OpenCode's customization has four layers:
Commands → Trigger workflows with /slash
↓
Agents → Control how the AI thinks
↓
Tools → Define what the AI can do
↓
Scripts → Run the actual commands
Each layer builds on the one below. Scripts are the foundation—human-runnable, testable, version-controlled. Tools wrap scripts with validation. Agents control which tools are available. Commands trigger the whole thing.
The Series
Part 1: Custom Tools
How to wrap any CLI command as a tool the LLM can call. The pattern: write a shell script first, then create a thin TypeScript wrapper that validates arguments.
You'll learn:
- Where to put tool definitions
- How to use Bun.$ to call scripts
- Argument validation with Zod schemas
- Organizing scripts and tools together
Part 2: Writing Shell Scripts
Using OpenCode itself to write better scripts faster. The key is describing goals, not implementations—let the AI pick the right commands.
You'll learn:
- How to prompt for shell scripts effectively
- Iterating in small, testable steps
- Giving context about your environment
- Going from script to custom tool
Part 3: Agents & Commands
The difference between agents, subagents, and commands—and when to use each. This is where people get confused, so I break it down clearly.
You'll learn:
- Primary agents vs subagents
- How subagents run in isolation
- Commands as prompt shortcuts
- Restricting tool access per agent
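To make that last point concrete: a subagent can be as small as one markdown file with frontmatter that turns tools off. The exact frontmatter keys below are an assumption to show the shape; Part 3 covers the real format.

```markdown
---
description: Reads test output and explains failures
mode: subagent
tools:
  write: false
  edit: false
---
You analyze test failures. Read the relevant source and test files,
explain why each failure happens, and suggest a fix. Never modify files.
```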
Part 4: The Full Picture
Everything comes together in one complete example: a test-and-fix workflow. Script → tool → subagent → command, all four layers working together.
You'll learn:
- Building a complete workflow end-to-end
- How the layers connect at runtime
- Extending the pattern to other workflows
- Directory structure for real projects
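As a preview of the top layer, the command that triggers the whole workflow can also be a single markdown file. The frontmatter keys here are illustrative assumptions; Part 4 walks through the real file.

```markdown
---
description: Run the test suite and analyze any failures
agent: test-analyzer
---
Run the tests with the test tool. If anything fails, explain the root
cause of each failure and suggest a fix.
```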
Who This Is For
You should read this series if:
- You're using OpenCode and want to go beyond the defaults
- You want the AI to use your team's existing scripts and processes
- You care about controlling what the AI can and can't do
- You're tired of typing the same instructions repeatedly
You don't need to be an OpenCode expert. Basic familiarity is enough—if you've run opencode and had a conversation, you're ready.
What You'll Build
By the end of the series, you'll have:
.opencode/
├── scripts/
│   └── test.sh           # Human-runnable test script
├── tool/
│   └── test.ts           # Tool wrapper with validation
├── agent/
│   └── test-analyzer.md  # Specialized subagent
└── command/
    └── test.md           # One-keystroke workflow trigger
More importantly, you'll understand the pattern well enough to build your own workflows for whatever you need—linting, deployment, code review, database operations, anything.
Let's Go
Start with Part 1: Custom Tools.
Or if you want to see the end result first, jump to Part 4: The Full Picture and work backwards.
Questions as you go through the series? Find me on Twitter or drop me an email.