
# Week 5: Skills (On-Demand Knowledge)

## What Is a Skill?

By now, you've written commands — prompts you invoke by typing /foo. Commands are user-initiated: you decide when to use them.

A skill is the opposite. It's a markdown file (with YAML frontmatter) that an agent discovers and loads automatically when it thinks it's relevant. You don't type anything; the agent decides to pull the skill in because your question matches the skill's description.

### The Mental Model

Command: "I'll type /test <file> when I need tests written."

Skill: "If the user mentions release notes or versioning, the agent should load my release-notes skill automatically and use it."

In other words:

- Commands = you delegate to the agent via a keyword.
- Skills = the agent delegates to itself on your behalf.

## Why Skills Matter

Skills are leverage. With a command, you must remember to invoke it. With a skill:

- The agent notices your need and acts.
- You can author knowledge once and let any agent (or team) discover it.
- Your agent grows smarter without rewriting your main prompt.

Imagine an `owasp-checklist` skill for security reviews. Every security-focused agent loads it automatically, no configuration needed. Or a `db-migration-patterns` skill that a database expert creates once and your whole team uses.

## Anatomy of a Skill

A skill is a folder with a single SKILL.md file. Let's build one together.

### Folder Structure

```
.opencode/skills/release-notes/SKILL.md
```

That's it. The folder name (`release-notes`) must match the `name` field in the frontmatter.
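One way to make that convention stick is to derive both names from a single argument. Here's a minimal Python sketch of a scaffolding helper (`scaffold_skill` is hypothetical, not part of OpenCode):

```python
from pathlib import Path


def scaffold_skill(root: str, name: str, description: str) -> Path:
    """Create .opencode/skills/<name>/SKILL.md with matching frontmatter.

    Deriving the folder and the `name:` field from the same argument
    guarantees they can't drift apart.
    """
    skill_dir = Path(root) / ".opencode" / "skills" / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_file = skill_dir / "SKILL.md"
    skill_file.write_text(
        f"---\nname: {name}\ndescription: {description}\n---\n\n## Notes\n"
    )
    return skill_file
```

Because the folder and frontmatter come from one `name` parameter, the mismatch mistake described later in this lesson becomes impossible by construction.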

### Inside SKILL.md: Frontmatter

Every skill starts with YAML:

```yaml
---
name: release-notes
description: Use when the user asks to draft release notes, changelog, or version summary for a new release.
---
```

Two required fields:

- `name`: 1–64 characters, lowercase, hyphens only. Must match the folder name.
- `description`: 1–1024 characters. This is the search index. If your description is vague, agents won't find the skill when it's relevant.
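These constraints are easy to lint. A Python sketch, assuming only the two rules above (a real loader may check more, and the exact character rules could differ):

```python
import re
from pathlib import Path

# Assumed from the rules above: 1-64 chars, lowercase letters/digits/hyphens.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,63}$")


def validate_skill(skill_md: Path) -> list[str]:
    """Return a list of problems with a SKILL.md's frontmatter."""
    errors = []
    lines = skill_md.read_text().splitlines()
    fields = {}
    if lines[:1] == ["---"]:
        for line in lines[1:]:
            if line.strip() == "---":
                break
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    else:
        errors.append("missing YAML frontmatter")
    name = fields.get("name", "")
    if not NAME_RE.fullmatch(name):
        errors.append("name: 1-64 chars, lowercase letters/digits/hyphens")
    if name != skill_md.parent.name:
        errors.append("name must match the folder name")
    if not 1 <= len(fields.get("description", "")) <= 1024:
        errors.append("description must be 1-1024 characters")
    return errors
```

Run it over `.opencode/skills/` in CI and a misnamed skill fails the build instead of silently never loading.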

### Content After Frontmatter

The rest of the file is plain markdown. This is what the agent reads when it loads your skill. Make it clear and actionable:

````markdown
---
name: release-notes
description: Use when the user asks to draft release notes, changelog, or version summary for a new release.
---

## Release Notes Structure

Every release needs:
1. **Summary** — 1–2 sentences of what changed.
2. **New Features** — bulleted list.
3. **Breaking Changes** — if any.
4. **Bug Fixes** — grouped by severity.
5. **Deprecations** — upcoming removals.
6. **Security** — CVEs or fixes, separated.
7. **Contributors** — names and roles.

## Template

```markdown
# v1.4.0 – May 2, 2026

## Summary
Added support for Redis caching and fixed a critical bug in the auth middleware.

## New Features
- Redis caching for API responses (30-min TTL)
- New `/admin` dashboard for monitoring
- Multi-language support (EN, ES, FR)

## Breaking Changes
- Removed deprecated `legacy_auth` endpoint
- Changed JSON response format for `/search`

## Bug Fixes
- Fixed race condition in session cleanup (#247)
- Corrected off-by-one error in pagination

## Security
- Patched XSS vulnerability in user bio field
- Updated dependencies for CVE-2026-1234

## Contributors
- Alice (@alice-dev)
- Bob (@bob-qa)
```

## Tips for Good Release Notes

- **Be specific**: "fixed bug" is useless. "Fixed race condition in session cleanup" is useful.
- **Separate by audience**: Developers care about API changes; end-users care about features.
- **Link to issues**: If you have issue IDs, include them.
````
Now, when a user says "Write release notes for v2.0," the agent notices the phrase "release notes," loads this skill, reads the structure and template, and drafts accordingly.
## The Power of Description

Here's the crucial part: **the description is the search index**.

**Bad description**:

```yaml
description: Release notes skill
```

The agent might never find it because "release notes skill" doesn't signal *when* to use it.

**Good description**:

```yaml
description: Use when the user asks to draft release notes, changelog, or version summary for a new release.
```

Now the agent catches "draft release notes," "write a changelog," "version summary," and related phrasings.

This is why writing good descriptions is an underrated skill. Spend 30 seconds on it. Include:

- The *trigger phrases* ("draft," "write," "create")
- The *domain* ("release notes," "changelog," "version")
- The *context* ("for a new release")
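To see why trigger words matter, here's a toy sketch of description matching. In reality the model judges relevance with far more nuance; this keyword-overlap heuristic is purely illustrative:

```python
import re


def words(text: str) -> set[str]:
    """Lowercase alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def match_skills(question: str, descriptions: dict[str, str]) -> list[str]:
    """Rank skill names by keyword overlap with the question.

    A real agent lets the model judge relevance; this heuristic just
    shows why trigger words in the description matter.
    """
    q = words(question)
    scored = sorted(
        ((len(q & words(desc)), name) for name, desc in descriptions.items()),
        reverse=True,
    )
    return [name for score, name in scored if score > 0]
```

With the good description, "Write release notes for v2.0" shares "release," "notes," and "for" with it and the skill surfaces; the bad description ("Release notes skill") gives the matcher far less to grab onto, and a vaguer one ("Helpful skill") nothing at all.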
    
## Skills vs. Commands vs. AGENTS.md

By now you've seen three ways to store knowledge:

| Artifact | Triggered By | Use When | Stored In |
|----------|----------|----------|----------|
| **Skill** | Agent detects relevance (via description) | You want the agent to act automatically without you typing a keyword. Good for domain wisdom (patterns, checklists, templates). | `.opencode/skills/<name>/SKILL.md` |
| **Command** | User types `/foo` | You do this task 3+ times a week and typing saves you time. | `opencode.jsonc` |
| **AGENTS.md section** | Agent reads it at startup | This is global context about your codebase (the tech stack, naming conventions, build steps). All agents need to know it; it's not optional. | `AGENTS.md` |

**Example decision tree**:

- "Every time someone asks me to review a PR, I should check for SQL injection." → **Skill** (`security-checklist`). The agent loads it when it smells security.
- "I always run the test suite the same way." → **Command** (`/test`). You type it dozens of times a week.
- "Our codebase uses Docker and Webpack." → **AGENTS.md**. Global context.
    
## How Agents Discover Skills

OpenCode searches for skills in this order:

1. **Project level**: `.opencode/skills/<name>/SKILL.md`
2. **Global level**: `~/.config/opencode/skills/<name>/SKILL.md`
3. **Upward in git tree**: If you're in a subdirectory, it searches parent folders too.

When you ask the agent a question, it:

1. Scans available skills.
2. Compares your question against all skill descriptions.
3. Loads the matching ones.
4. Reads their content and incorporates it into the response.

No explicit invocation needed. No `/skill-name` command. The agent does the work.
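The lookup order above can be sketched as a path list. This is an illustration of the described order only; the real implementation may stop at the git root or deduplicate paths:

```python
from pathlib import Path


def candidate_skill_dirs(cwd: Path, home: Path) -> list[Path]:
    """Directories to search for skills, in the documented order:
    project level first, then the global config, then each parent of cwd.
    """
    dirs = [
        cwd / ".opencode" / "skills",          # 1. project level
        home / ".config" / "opencode" / "skills",  # 2. global level
    ]
    for parent in cwd.parents:                 # 3. upward in the tree
        dirs.append(parent / ".opencode" / "skills")
    return dirs
```

The ordering matters: a project-level skill with the same name as a global one wins, which lets a repo override your personal defaults.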
    
## Example: A Real Workflow

You ask: *"I'm preparing to release v1.5. Can you draft release notes?"*

The agent:

1. Reads your message.
2. Searches available skills.
3. Finds the `release-notes` skill (its description mentions "release notes").
4. Loads `SKILL.md` and reads the structure and template.
5. Generates release notes that follow your documented format.
6. Presents them to you.

If the skill didn't exist, the agent would still draft release notes, but without your custom structure. With the skill, the output is *on-brand* and *consistent*.
    
## Common Mistakes

**Mistake 1: Overly Generic Description**

```yaml
description: Helpful skill
```

Too vague. The agent can't match it to your question.

**Mistake 2: Skill Content Too Long**

Skills should be concise (500–1000 words). If it's longer, the material probably belongs in AGENTS.md or a lab repo README.

**Mistake 3: Forgetting the Folder Name Must Match**

If your file is `.opencode/skills/my-skill/SKILL.md`, the frontmatter must say `name: my-skill`. If they don't match, the skill won't load.

**Mistake 4: No Examples in Content**

The agent reads your skill and uses it. If it has no concrete examples, the agent improvises and may get it wrong. Always include templates or sample output.

**Mistake 5: Writing a Skill for a One-Off Task**

If you only need this knowledge once, don't write a skill. Write a note instead. Skills are for repeatable patterns.

## Recap

- **Skill** = a markdown file with frontmatter that agents discover automatically.
- **Description is everything** — it's your search index.
- Content should include templates, checklists, or patterns the agent will use.
- Use skills for domain wisdom and repeatable patterns.
- Use commands for user-invoked shortcuts.
- Use AGENTS.md for global context about your codebase.

In the labs this week, you'll author two personal skills and debug a broken one. You'll learn that great skill descriptions are a craft, not an accident.