Skip to content

Lab 9.2: Security Review a Peer Config

In this lab, you will review another learner's OpenCode config as if it were about to be shared with a team.

Your job is not to prove the other person did something wrong. Your job is to find risks early, explain them clearly, and recommend safer defaults.

Timebox

60-90 minutes.

Prerequisites

You need a peer's OpenCode setup or a provided sample config.

Ask for only the files needed for review. Do not ask for secrets, tokens, .env values, or provider keys.

Good review targets:

  1. .opencode/agents/.
  2. .opencode/commands/.
  3. .opencode/skills/.
  4. Project opencode.jsonc or equivalent config.
  5. Any MCP server configuration.
  6. Eval prompts, if present.

What You Will Produce

You will write a one-page security review report with:

  1. Executive summary.
  2. Scope reviewed.
  3. Findings ranked by severity.
  4. Recommended fixes.
  5. One positive observation.
  6. Open questions.

Step 1: Confirm Scope

Before reviewing, write down what you are allowed to inspect.

Peer name:
Repo or config reviewed:
Files/folders in scope:
Files/folders out of scope:
Secrets provided? No.

If someone gives you secrets by accident, stop. Tell them to rotate the secret and remove it from the shared material before continuing.


Step 2: Inventory Agents and Permissions

Create a simple table:

| Agent | Job | read | edit | bash | webfetch | MCP/tools | Notes |
|-------|-----|------|------|------|----------|-----------|-------|

For each agent, ask:

  1. Is the job clear?
  2. Are permissions tied to the job?
  3. Is edit needed?
  4. Is bash needed?
  5. Are risky tools set to ask where appropriate?
  6. Could this agent affect files or systems outside its role?

Common findings:

  1. A reviewer has edit: allow.
  2. A docs writer has bash: allow.
  3. A test runner can edit files.
  4. Every tool is allowed because it was the default.
  5. No one documented why an MCP server is needed.

Step 3: Check Prompt Boundaries

Read each shared agent prompt.

Look for clear instructions about:

  1. The agent's role.
  2. What the agent must not do.
  3. When the agent should ask before acting.
  4. Output format.
  5. Scope limits.
  6. Secret handling.

Example issue:

Finding: The `deploy-helper` prompt says "fix and deploy anything needed" but does not define approval steps.
Risk: The agent may run production-impacting commands without review.
Recommendation: Split into a read-only deploy reviewer and a separate deploy executor with `bash: ask`.

Step 4: Review Secrets Handling

Search the reviewed files for secret handling risks.

Do not expose or copy secret values in your report.

Look for:

  1. API keys in config.
  2. Tokens in command examples.
  3. .env content pasted into prompts.
  4. Instructions telling agents to print secrets.
  5. Logs that include credentials.
  6. Agent prompts that ask users to paste private data.

Use placeholders in your report:

Finding: A token-like value appears in `example-command.md`.
Risk: Secrets in prompt assets may be committed and reused.
Recommendation: Remove the value, rotate it if real, and replace it with `<REDACTED_TOKEN>`.

Step 5: Review MCP Supply-Chain Risk

For every MCP server, answer:

  1. What does it connect to?
  2. Who maintains it?
  3. Is the version pinned?
  4. What credentials does it use?
  5. Are credentials scoped narrowly?
  6. Can it write to external systems?
  7. Does it run local commands?
  8. Is there a reason this server is needed?

Example issue:

Finding: GitHub MCP server is configured through `npx` without a pinned package version.
Risk: Future package changes could alter tool behavior without review.
Recommendation: Pin the server version and document the token scope.

Example issue:

Finding: Database MCP server appears to use a production connection string.
Risk: An agent could expose or modify production data.
Recommendation: Use a read-only staging database account, or remove this MCP server from the shared config.

Step 6: Check Evals and Versioning

Production configs need repeatable checks.

Ask:

  1. Are there eval prompts for shared agents?
  2. Do evals cover permission boundaries?
  3. Do evals cover secrets or MCP risks?
  4. Are agents, commands, and skills versioned in git?
  5. Is there a review process for changing permissions?

Finding examples:

Finding: No eval covers the `security-reviewer` refusing to edit files.
Risk: A future prompt change could make the reviewer act as a fixer.
Recommendation: Add a permission-boundary eval before sharing the agent.
Finding: Prompt changes are not reviewed.
Risk: A teammate could loosen permissions or increase cost without visibility.
Recommendation: Require code review for shared `.opencode/` changes.

Step 7: Rank Findings

Use this severity guide:

Severity Meaning Example
Critical Could expose secrets, modify production systems, or run untrusted code with broad access. Production database MCP with write access.
High Could cause repo damage, data exposure, or major cost without approval. Shared agent with bash: allow and vague deploy prompt.
Medium Risky but limited by scope or requires another mistake. webfetch: allow with no source guidance.
Low Hygiene issue that weakens maintainability. No owner listed for a shared skill.

Prefer fewer, clearer findings over a long list of minor complaints.


Step 8: Write the Report

Use this template:

# Security Review Report

Reviewer:
Date:
Config reviewed:

## Executive Summary

<3-5 sentences. Is this ready to share? What is the main risk?>

## Scope

Reviewed:
- <files/folders>

Not reviewed:
- <anything excluded>

## Findings

### 1. <Severity>: <Short finding title>

Evidence:
<File or config area. Do not paste secrets.>

Risk:
<Why this matters.>

Recommendation:
<Specific fix.>

## Positive Observation

<One thing the peer did well.>

## Open Questions

- <Question 1>
- <Question 2>

Your report should be direct, specific, and respectful.


Step 9: Share and Discuss

Walk your peer through the report.

Use this order:

  1. Start with the positive observation.
  2. Explain the highest-risk finding first.
  3. Tie each recommendation to the agent's intended job.
  4. Ask whether any assumption is wrong.
  5. Agree on one fix they will make first.

Security review is collaborative. The goal is a safer config, not a winning argument.


Submission Checklist

Submit:

  1. Your completed one-page report.
  2. The permission inventory table.
  3. At least three findings or a clear statement that you found no material issues.
  4. One positive observation.
  5. One recommended next fix.

You are done when your peer knows exactly what to change and why.