How to Turn Any CLI Command into an OpenCode Custom Tool

Part 1 of Customizing OpenCode. See also: Shell Scripts, Agents & Commands, and The Full Picture.

If you're using OpenCode as your AI coding agent, you've probably noticed it comes with solid built-in tools like read, write, and bash. But the real power comes when you create your own.

In this post, I'll show you a pattern I've been using: write a shell script first, then wrap it as an OpenCode tool. This gives you the best of both worlds—scripts you can run manually and tools the LLM can invoke.

Why This Pattern?

You could call CLI commands directly from your tool definition:

async execute(args) {
  return await Bun.$`docker ps -a --filter status=running`.text()
}

But there are good reasons to create a separate shell script instead:

  • Testability — Run and debug your script manually before the LLM touches it
  • Reusability — Your team can use the scripts directly, no OpenCode required
  • Readability — Complex logic lives in bash where it belongs, not crammed into a TypeScript string
  • Version control — Track script changes independently

The Setup

OpenCode looks for custom tools in two places:

  • .opencode/tool/ — Project-level tools
  • ~/.config/opencode/tool/ — Global tools

I like to add a scripts/ folder alongside:

.opencode/
├── scripts/        # Shell scripts (human-runnable)
│   ├── deploy.sh
│   └── db-snapshot.sh
└── tool/           # Tool wrappers (LLM-callable)
    ├── deploy.ts
    └── db-snapshot.ts
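
To create both folders in one step (any POSIX shell will do):

mkdir -p .opencode/scripts .opencode/tool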

A Real Example: Deployment Script

Let's say you have a deployment workflow. First, write the script:

#!/bin/bash
# .opencode/scripts/deploy.sh

set -e

ENV=${1:-staging}
TAG=${2:-latest}

echo "🚀 Deploying $TAG to $ENV..."

# Build with the registry-qualified tag so the push below finds the image
docker build -t "registry.example.com/myapp:$TAG" .
docker push "registry.example.com/myapp:$TAG"

kubectl set image deployment/myapp \
  "myapp=registry.example.com/myapp:$TAG" \
  -n "$ENV"

echo "✅ Deployed successfully"

Make it executable:

chmod +x .opencode/scripts/deploy.sh

Test it manually:

./.opencode/scripts/deploy.sh staging v1.0.0

Once it works, wrap it as a tool:

// .opencode/tool/deploy.ts
import { tool } from "@opencode-ai/plugin"

export default tool({
  description: "Deploy the application to staging or production",
  args: {
    environment: tool.schema
      .enum(["staging", "production"])
      .describe("Target environment"),
    tag: tool.schema
      .string()
      .optional()
      .describe("Docker image tag (default: latest)"),
  },
  async execute(args) {
    const tag = args.tag || "latest"
    return await Bun.$`./.opencode/scripts/deploy.sh ${args.environment} ${tag}`.text()
  },
})

Now you can tell OpenCode "deploy version 1.2.0 to production" and it knows exactly what to do.

Another Example: Database Snapshots

Here's one I use constantly—quick database snapshots before running migrations:

#!/bin/bash
# .opencode/scripts/db-snapshot.sh

set -e

DB_NAME=$1
LABEL=${2:-snapshot}
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
OUTFILE="./backups/${DB_NAME}_${LABEL}_${TIMESTAMP}.sql.gz"

mkdir -p ./backups

echo "📸 Creating snapshot of $DB_NAME..."
pg_dump "$DB_NAME" | gzip > "$OUTFILE"
echo "✅ Saved to $OUTFILE"

And the tool wrapper:

// .opencode/tool/db-snapshot.ts
import { tool } from "@opencode-ai/plugin"

export default tool({
  description: "Create a compressed snapshot of a PostgreSQL database",
  args: {
    database: tool.schema.string().describe("Database name"),
    label: tool.schema
      .string()
      .optional()
      .describe("Label for the snapshot file (default: snapshot)"),
  },
  async execute(args) {
    const label = args.label || "snapshot"
    return await Bun.$`./.opencode/scripts/db-snapshot.sh ${args.database} ${label}`.text()
  },
})

Multiple Tools, One File

If you have related tools, you can export them from a single file. Each export becomes a separate tool named <filename>_<export>:

// .opencode/tool/git.ts
import { tool } from "@opencode-ai/plugin"

export const feature = tool({
  description: "Create a new feature branch",
  args: {
    name: tool.schema.string().describe("Branch name"),
  },
  async execute(args) {
    return await Bun.$`./.opencode/scripts/git-feature.sh ${args.name}`.text()
  },
})

export const cleanup = tool({
  description: "Delete merged branches",
  args: {},
  async execute() {
    return await Bun.$`./.opencode/scripts/git-cleanup.sh`.text()
  },
})

This creates git_feature and git_cleanup tools.
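
The underlying scripts aren't shown above, but as a rough sketch of what git-cleanup.sh could look like (assuming main is your default branch):

#!/bin/bash
# .opencode/scripts/git-cleanup.sh (sketch only; adjust the base branch for your repo)

set -e

# Delete local branches already merged into main, skipping main itself
# and the currently checked-out branch
git branch --merged main \
  | grep -vE '^\*|^[[:space:]]*main$' \
  | while read -r branch; do
      git branch -d "$branch"
    done

echo "✅ Merged branches cleaned up"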

Tips

Use set -e in your scripts. This makes them fail fast on errors, which gives the LLM better feedback about what went wrong.
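
If you want to be stricter still, bash can also flag unset variables and failed pipeline stages; a common header for that is:

#!/bin/bash
# Exit on errors (-e), unset variables (-u), and failures anywhere in a pipeline (pipefail)
set -euo pipefail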

Echo progress messages. The LLM sees stdout, so messages like "Building image..." help it understand what's happening.

Keep tool descriptions clear. The LLM uses these to decide when to call your tool. Be specific about what it does and when to use it.

Validate in the tool, not the script. Use Zod schemas to constrain inputs before they hit your script. This prevents the LLM from passing garbage arguments.

args: {
  environment: tool.schema.enum(["staging", "production"]), // Can't pass "prod" or "stage"
  port: tool.schema.number().min(1024).max(65535),          // Valid port range only
}

Wrapping Up

This pattern has made my OpenCode setup way more maintainable. The scripts work standalone, the tool definitions are thin wrappers, and I can test everything manually before letting the AI loose on it.

Next up: Using OpenCode to write shell scripts faster, controlling tool access with agents, and the complete workflow tying it all together.


Got questions or want to share your own custom tools? Find me on Twitter or drop me an email.
