Lab 9.2: Security Review a Peer Config¶
In this lab, you will review another learner's OpenCode config as if it were about to be shared with a team.
Your job is not to prove the other person did something wrong. Your job is to find risks early, explain them clearly, and recommend safer defaults.
Timebox¶
60-90 minutes.
Prerequisites¶
You need a peer's OpenCode setup or a provided sample config.
Ask for only the files needed for review. Do not ask for secrets, tokens, .env values, or provider keys.
Good review targets:
.opencode/agents/..opencode/commands/..opencode/skills/.- Project
opencode.jsoncor equivalent config. - Any MCP server configuration.
- Eval prompts, if present.
What You Will Produce¶
You will write a one-page security review report with:
- Executive summary.
- Scope reviewed.
- Findings ranked by severity.
- Recommended fixes.
- One positive observation.
- Open questions.
Step 1: Confirm Scope¶
Before reviewing, write down what you are allowed to inspect.
Peer name:
Repo or config reviewed:
Files/folders in scope:
Files/folders out of scope:
Secrets provided? No.
If someone gives you secrets by accident, stop. Tell them to rotate the secret and remove it from the shared material before continuing.
Step 2: Inventory Agents and Permissions¶
Create a simple table:
| Agent | Job | read | edit | bash | webfetch | MCP/tools | Notes |
|-------|-----|------|------|------|----------|-----------|-------|
For each agent, ask:
- Is the job clear?
- Are permissions tied to the job?
- Is
editneeded? - Is
bashneeded? - Are risky tools set to
askwhere appropriate? - Could this agent affect files or systems outside its role?
Common findings:
- A reviewer has
edit: allow. - A docs writer has
bash: allow. - A test runner can edit files.
- Every tool is allowed because it was the default.
- No one documented why an MCP server is needed.
Step 3: Check Prompt Boundaries¶
Read each shared agent prompt.
Look for clear instructions about:
- The agent's role.
- What the agent must not do.
- When the agent should ask before acting.
- Output format.
- Scope limits.
- Secret handling.
Example issue:
Finding: The `deploy-helper` prompt says "fix and deploy anything needed" but does not define approval steps.
Risk: The agent may run production-impacting commands without review.
Recommendation: Split into a read-only deploy reviewer and a separate deploy executor with `bash: ask`.
Step 4: Review Secrets Handling¶
Search the reviewed files for secret handling risks.
Do not expose or copy secret values in your report.
Look for:
- API keys in config.
- Tokens in command examples.
.envcontent pasted into prompts.- Instructions telling agents to print secrets.
- Logs that include credentials.
- Agent prompts that ask users to paste private data.
Use placeholders in your report:
Finding: A token-like value appears in `example-command.md`.
Risk: Secrets in prompt assets may be committed and reused.
Recommendation: Remove the value, rotate it if real, and replace it with `<REDACTED_TOKEN>`.
Step 5: Review MCP Supply-Chain Risk¶
For every MCP server, answer:
- What does it connect to?
- Who maintains it?
- Is the version pinned?
- What credentials does it use?
- Are credentials scoped narrowly?
- Can it write to external systems?
- Does it run local commands?
- Is there a reason this server is needed?
Example issue:
Finding: GitHub MCP server is configured through `npx` without a pinned package version.
Risk: Future package changes could alter tool behavior without review.
Recommendation: Pin the server version and document the token scope.
Example issue:
Finding: Database MCP server appears to use a production connection string.
Risk: An agent could expose or modify production data.
Recommendation: Use a read-only staging database account, or remove this MCP server from the shared config.
Step 6: Check Evals and Versioning¶
Production configs need repeatable checks.
Ask:
- Are there eval prompts for shared agents?
- Do evals cover permission boundaries?
- Do evals cover secrets or MCP risks?
- Are agents, commands, and skills versioned in git?
- Is there a review process for changing permissions?
Finding examples:
Finding: No eval covers the `security-reviewer` refusing to edit files.
Risk: A future prompt change could make the reviewer act as a fixer.
Recommendation: Add a permission-boundary eval before sharing the agent.
Finding: Prompt changes are not reviewed.
Risk: A teammate could loosen permissions or increase cost without visibility.
Recommendation: Require code review for shared `.opencode/` changes.
Step 7: Rank Findings¶
Use this severity guide:
| Severity | Meaning | Example |
|---|---|---|
| Critical | Could expose secrets, modify production systems, or run untrusted code with broad access. | Production database MCP with write access. |
| High | Could cause repo damage, data exposure, or major cost without approval. | Shared agent with bash: allow and vague deploy prompt. |
| Medium | Risky but limited by scope or requires another mistake. | webfetch: allow with no source guidance. |
| Low | Hygiene issue that weakens maintainability. | No owner listed for a shared skill. |
Prefer fewer, clearer findings over a long list of minor complaints.
Step 8: Write the Report¶
Use this template:
# Security Review Report
Reviewer:
Date:
Config reviewed:
## Executive Summary
<3-5 sentences. Is this ready to share? What is the main risk?>
## Scope
Reviewed:
- <files/folders>
Not reviewed:
- <anything excluded>
## Findings
### 1. <Severity>: <Short finding title>
Evidence:
<File or config area. Do not paste secrets.>
Risk:
<Why this matters.>
Recommendation:
<Specific fix.>
## Positive Observation
<One thing the peer did well.>
## Open Questions
- <Question 1>
- <Question 2>
Your report should be direct, specific, and respectful.
Step 9: Share and Discuss¶
Walk your peer through the report.
Use this order:
- Start with the positive observation.
- Explain the highest-risk finding first.
- Tie each recommendation to the agent's intended job.
- Ask whether any assumption is wrong.
- Agree on one fix they will make first.
Security review is collaborative. The goal is a safer config, not a winning argument.
Submission Checklist¶
Submit:
- Your completed one-page report.
- The permission inventory table.
- At least three findings or a clear statement that you found no material issues.
- One positive observation.
- One recommended next fix.
You are done when your peer knows exactly what to change and why.