The most effective CLI distribution strategy for agents isn’t MCP, docs, or --help. It’s shipping the prompt the agent should run upon installation.
Author: James Phoenix | Date: March 2026
The Pattern
A new category of CLI tool is emerging. These tools don’t just expose commands. They ship embedded skill files that teach AI agents how to use the tool, end to end, as a structured onboarding workflow. The installation command doesn’t print “run --help to get started.” It writes .claude/skills/ and .codex/skills/ files that the agent reads and executes autonomously.
```shell
# The CLI installs itself AND its agent instructions
domain-scan skills install --claude-code
domain-scan skills install --codex
```
After this, any Claude Code or Codex session in the project has access to skills like domain-scan-init, domain-scan-scan, domain-scan-match, domain-scan-validate. The agent doesn’t need to read docs. The skills ARE the docs. They contain the full workflow, constraints, validation steps, and error handling that a human developer would otherwise need to learn.
This is not MCP. There’s no JSON-RPC server, no runtime tool registration, no protocol negotiation. It’s simpler than that. The CLI writes markdown files with YAML frontmatter into the directory the agent already knows how to read. The agent pattern-matches on skill descriptions and auto-loads the relevant context when the user asks for something related.
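The mechanic is plain file I/O. As an illustrative Python sketch (not the real domain-scan implementation; the function name, file name, and skill body are assumptions), the install step amounts to writing frontmattered markdown into the directories agents already scan:

```python
# Hypothetical sketch of a "skills install" step: write embedded skill
# markdown (YAML frontmatter + body) into agent-readable directories.
from pathlib import Path

SKILL_BODY = """---
name: domain-scan-init
description: >
  Initialize a system.json manifest for codebase architecture mapping.
---

1. Run the scan step first, then create the manifest.
"""

def install_skills(targets=(".claude/skills", ".codex/skills")):
    """Write the embedded skill file into each target directory."""
    written = []
    for target in targets:
        skill_dir = Path(target)
        skill_dir.mkdir(parents=True, exist_ok=True)
        path = skill_dir / "domain-scan-init.md"
        path.write_text(SKILL_BODY)
        written.append(path)
    return written
```

No server process, no handshake: once the files exist, the agent framework discovers them on its own.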
Why This Works Better Than Alternatives
--help is for humans
--help output is designed for terminal scrolling. An agent CAN read it, but it’s optimized for discoverability, not execution. It tells you what flags exist. It doesn’t tell you which order to run commands, what to validate between steps, or what the failure modes are.
Docs are stale and expensive
Loading a README or documentation site into context costs tokens and may be outdated. The agent has to parse prose written for humans, extract the actionable steps, and infer the workflow. Every inference is a potential hallucination point.
MCP is runtime, not onboarding
MCP servers expose tools at runtime. They’re great for typed invocation and schema introspection. But they don’t teach the agent a multi-step workflow. An MCP server says “here are the tools you can call.” A skill file says “here is the 8-step workflow you should follow, in order, with validation gates between each step.”
Skills are deterministic context
A skill file is static, version-controlled markdown. It loads into the agent’s context window at the right moment (when the description matches the user’s request). It contains:
- Sequenced steps with explicit ordering
- Validation gates (“run --dry-run before --write-back”)
- Hard rules (“filePath must be a real directory, verify with ls”)
- Scaling heuristics (“< 100 files = 5 sub-agents, 2000+ = 10”)
- Error recovery (“if download fails, fall back to cargo install”)
This is the kind of procedural knowledge that makes the difference between an agent that completes a task and one that gets stuck halfway through.
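Put together, a skill body might look something like this. It is an illustrative example, not the actual domain-scan skill; the steps simply combine the elements listed above:

```markdown
---
name: domain-scan-validate
description: >
  Validate a system.json manifest. Use when asked to check manifest
  coverage or fix validation errors.
---

1. Run `domain-scan validate --dry-run` and read the report.
2. HARD RULE: filePath must be a real directory; verify with `ls` first.
3. If coverage is below 90%, re-run the match step and repeat step 1.
4. Only after a clean dry run, apply changes with `--write-back`.
```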
The Distribution Mechanic
The pattern has three moving parts:
1. Embedded skills in the binary
The CLI bundles skill files as embedded assets. They’re compiled into the binary or shipped alongside it. domain-scan skills list shows what’s available. domain-scan skills show <name> prints the content. This means the skills are always version-matched to the CLI. No drift between tool version and instructions.
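One way to picture the embedding: the skill text ships inside the distributed artifact itself, so `skills list` and `skills show` can never disagree with the installed version. A minimal Python sketch (the real tool is a compiled CLI; the names and contents here are assumptions):

```python
# Hypothetical sketch: skills embedded as in-binary assets, so the
# instructions are always version-matched to the CLI that serves them.
EMBEDDED_SKILLS = {
    "domain-scan-init": "---\nname: domain-scan-init\n---\nHow to create a manifest.\n",
    "domain-scan-scan": "---\nname: domain-scan-scan\n---\nHow to scan a codebase.\n",
}

def skills_list():
    """Equivalent of `domain-scan skills list`: enumerate embedded skills."""
    return sorted(EMBEDDED_SKILLS)

def skills_show(name):
    """Equivalent of `domain-scan skills show <name>`: print one skill."""
    try:
        return EMBEDDED_SKILLS[name]
    except KeyError:
        raise SystemExit(f"unknown skill: {name}")
```

In a compiled CLI the same effect comes from embedding the markdown at build time (e.g. as static assets), which is what guarantees zero drift.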
2. Multi-surface install
One command writes skills for every agent framework the user might be running:
```shell
domain-scan skills install --claude-code   # → .claude/skills/
domain-scan skills install --codex         # → .codex/skills/
domain-scan skills install --dir ./custom  # → custom path
```
The CLI doesn’t need to know which agent the user prefers. It installs for all of them. The cost is a few kilobytes of markdown per surface.
3. YAML frontmatter for routing
Each skill file starts with metadata that the agent framework uses for auto-loading:
```yaml
---
name: domain-scan-init
description: >
  Initialize a system.json manifest for codebase architecture mapping.
  Use when asked to create, bootstrap, or set up a domain-scan manifest.
---
```
The agent only loads the skill when the description matches the current request. Irrelevant skills don’t consume context. This is the same routing mechanism described in Agent Skill Bootstrapping, but the skills are vendor-authored rather than agent-created.
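The routing can be approximated in a few lines of parsing and matching. Real agent frameworks let the model itself decide which description fits the request, so this keyword-overlap version is only a sketch of the idea; every name in it is hypothetical:

```python
# Hypothetical sketch of frontmatter-based routing: load only the skills
# whose description overlaps the user's request, so irrelevant skills
# consume no context.
def parse_frontmatter(skill_text):
    """Split a skill file on its '---' fences and read key: value pairs."""
    _, frontmatter, body = skill_text.split("---", 2)
    meta, key = {}, None
    for line in frontmatter.splitlines():
        if ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().lstrip(">").strip()
        elif key and line.strip():
            # Folded continuation line of the previous key.
            meta[key] = (meta[key] + " " + line.strip()).strip()
    return meta, body

def matching_skills(request, skills):
    """Return names of skills whose description shares words with the request."""
    want = set(request.lower().split())
    hits = []
    for name, text in skills.items():
        meta, _ = parse_frontmatter(text)
        if want & set(meta.get("description", "").lower().split()):
            hits.append(name)
    return hits
```

The point of the mechanism is the asymmetry: descriptions are always cheap to scan, while full skill bodies are loaded only on a match.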
Case Study: domain-scan
domain-scan is a structural code intelligence CLI that uses tree-sitter to extract interfaces, services, schemas, and type aliases from source code. It maps them to a manifest of domains, subsystems, and connections that powers a “tube map” visualization.
The interesting part isn’t the CLI itself. It’s the distribution strategy. The skills install command writes 11 skill files covering the entire workflow, including:
| Skill | Purpose |
|---|---|
| domain-scan-cli | CLI reference and flags |
| domain-scan-scan | How to scan a codebase |
| domain-scan-init | How to create a manifest from scratch |
| domain-scan-match | How to map entities to subsystems |
| domain-scan-validate | How to validate the manifest |
| domain-scan-prompt | The master onboarding prompt |
The domain-scan-prompt skill is particularly notable. It’s a complete multi-step workflow that an agent can execute end to end: install the CLI, scan the codebase, analyze the output, create a manifest, validate it, iterate until coverage exceeds 90%. It includes sub-agent orchestration guidance, scaling rules based on codebase size, and hard constraints the agent must follow.
The result: a user types “set up domain-scan for this codebase” and the agent has everything it needs to execute a 30-minute workflow autonomously. No docs to read, no --help to parse, no MCP server to configure.
When to Use This Pattern
This works best for CLIs with:
- Multi-step workflows where ordering and validation matter
- Agent-heavy user bases where the primary consumer is an AI agent, not a human
- Complex configuration that benefits from guided setup rather than flag discovery
- Quality gates where the agent needs to validate between steps
It works less well for simple, stateless commands. If your CLI is jq or curl, skills are overkill. The user (or agent) can figure it out from --help.
Connection to Existing Patterns
This pattern sits at the intersection of several ideas:
- Rewrite Your CLI for AI Agents identifies “Ship Agent Skills, Not Just Commands” as one of seven retrofitting steps. This pattern makes it the PRIMARY distribution mechanism.
- Agent Skill Bootstrapping describes agents creating skills at runtime. Vendor-shipped skills are the pre-built complement. They handle the common path. Agent bootstrapping handles the gaps.
- Long-Running Agent Patterns establishes skills as a core primitive for extended agent runs. Vendor-shipped skills mean the agent starts with domain expertise instead of building it mid-session.
- Progressive Disclosure of Context is the loading strategy. Skills load on demand via frontmatter matching, so the agent only pays context cost for relevant workflow knowledge.
Key Insight
The next wave of developer tools won’t compete on CLI ergonomics or documentation quality. They’ll compete on how well their shipped prompts guide an agent through the workflow. The prompt IS the product.
The economics are straightforward. Writing 11 skill files costs the vendor a day of work. That investment saves every user’s agent from re-deriving the workflow from scratch. It’s the same compound logic as writing good docs, except the consumer is an LLM that follows structured instructions with near-perfect fidelity.
Related
- Rewrite Your CLI for AI Agents – The broader framework for agent-friendly CLI design
- Agent Skill Bootstrapping – Runtime self-extension as complement to vendor-shipped skills
- Long-Running Agent Patterns – Skills as a core primitive for extended agent runs
- The MCP Abstraction Tax – Why skill files beat MCP for multi-step workflows
- Progressive Disclosure of Context – On-demand context loading via frontmatter routing
- Prompts Are the Asset – Skills as the deliverable, not the CLI itself
- Zero Friction Onboarding – Fast starts for agents and humans

