Lab 1.2: Your First Prompts¶
Goal¶
Learn to write effective prompts by experimenting with a free chat LLM. You'll discover what makes a prompt work well and what makes one confusing. This skill transfers directly to working with OpenCode — better prompts = better agent behavior.
Estimated time: 30–45 minutes
What You'll Do¶
- Write 5 prompts to a chat LLM, ranging from vague to specific.
- Rate each response using a simple rubric.
- Reflect on patterns.
Setup¶
Choose any free chat LLM:

- ChatGPT (free version, openai.com)
- Claude (free version, claude.ai)
- Gemini (google.com/gemini)
- Or any other AI chat tool
Open it in a browser and get ready to type. No signup required for many.
The 5 Prompts¶
Prompt 1: Vague (Baseline)¶
Write this exactly:
```
How do I write code?
```
Send it. Wait for the response.
Example response (you'll get something different):
"Writing code involves using a programming language to give instructions to a computer. You'll start by learning the basics of syntax, then practice writing simple programs."
Rate it using the rubric below, then move to Prompt 2.
Prompt 2: Slightly Better (Add Context)¶
Write this:
```
I'm a beginner. How do I write a function in Python that checks if a number is even?
```
Send it.
Example response:
"In Python, you can use the modulo operator (%) to check if a number is even. Here's an example:
A number is even if the remainder of dividing it by 2 is 0."def is_even(n): return n % 2 == 0
Rate it.
Prompt 3: Specific (Add Format Request)¶
Write this:
```
Write a Python function called `is_even` that returns True if a number is even, False otherwise.
Include a docstring explaining what the function does.
Show two examples of calling the function.
```
Send it.
Example response:
```python
def is_even(n):
    """
    Check if a number is even.

    Args:
        n: An integer

    Returns:
        True if n is even, False otherwise
    """
    return n % 2 == 0

# Examples
print(is_even(4))  # Output: True
print(is_even(7))  # Output: False
```
Rate it.
Prompt 4: Hyper-Specific (Constraints)¶
Write this:
```
Write a Python function called `is_even` that:
- Returns True if the input is an even number, False otherwise
- Includes a docstring with description, Args, and Returns sections
- Handles the edge case where input is 0 (should return True)
- Handles the edge case where input is negative (e.g., -4 should return True)
- Includes two doctest examples that show the function in action
```
Send it.
Example response:
```python
def is_even(n):
    """
    Check if a number is even.

    Args:
        n: An integer (positive, negative, or zero)

    Returns:
        True if n is even, False otherwise

    >>> is_even(4)
    True
    >>> is_even(-3)
    False
    """
    return n % 2 == 0
```
Rate it.
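Because the docstring includes doctest examples, you can check them yourself. Here is a minimal sketch, assuming you saved the response to a file named `is_even.py` (a filename chosen here for illustration):

```python
# Run the examples embedded in the docstrings of is_even.py.
# verbose=True prints each example as it is checked.
import doctest
import is_even  # assumes the function above was saved as is_even.py

doctest.testmod(is_even, verbose=True)
```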
Prompt 5: Add Failure Case (See What Happens)¶
Write this:
```
Write a Python function that checks if a number is even. Make it intentionally wrong,
and tell me you're making it intentionally wrong, so I can learn what a buggy function looks like.
```
Send it.
Example response (will vary):
```python
def is_even(n):
    # INTENTIONALLY WRONG: always returns True
    return True
```

"This function will say every number is even, which is wrong. The correct logic should check `n % 2 == 0`."
Rate it.
Rubric: Rate Each Response¶
For each prompt, rate the response on these dimensions. Give each a score of 1–5:
Clarity (Does the response make sense?)¶
- 1 — Confusing, contradictory, or off-topic
- 3 — Clear enough, but has some fuzzy parts
- 5 — Crystal clear, easy to understand
Usefulness (Can you actually use this?)¶
- 1 — Not useful at all; can't apply it
- 3 — Somewhat useful; you'd need to adapt it
- 5 — Immediately useful; you could copy-paste and run
Completeness (Does it answer the whole question?)¶
- 1 — Missing major parts of what you asked for
- 3 — Answers most of the question but misses some details
- 5 — Fully complete; every part of your request is addressed
Code Quality (If code was included, is it good?)¶
- 1 — Wrong, buggy, or follows bad practices
- 3 — Works, but could be better (unclear variable names, missing comments)
- 5 — Well-written, follows best practices, has comments
Overall Score¶
Average the four scores above. You should end up with something between 1.0 and 5.0.
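For example, hypothetical scores of 4, 5, 3, and 4 would average to (4 + 5 + 3 + 4) / 4 = 4.0.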
Scoring Table¶
Fill this in as you go:
| Prompt | Clarity | Usefulness | Completeness | Code Quality | Average |
|---|---|---|---|---|---|
| 1 (Vague) | / | / | / | / | / |
| 2 (Context) | / | / | / | / | / |
| 3 (Specific) | / | / | / | / | / |
| 4 (Hyper-specific) | / | / | / | / | / |
| 5 (Intentional bug) | / | / | / | / | / |
Analysis: What Did You Discover?¶
Answer these questions in writing (a few sentences each):
- Trend: Did your average score improve as you moved from Prompt 1 to Prompt 4? (Probably yes.) Why do you think that is?
- Vagueness: In Prompt 1, what assumption did the LLM make about what you wanted? How did Prompt 2 fix that?
- Constraints: In Prompt 4, you asked for edge cases (zero, negative numbers). Did the LLM remember and handle them? What does that tell you?
- Failure case: In Prompt 5, the LLM explained that it was intentionally writing buggy code and why. Why is it important for an LLM to be honest when it's going off-script?
- Your takeaway: In one sentence, what's the most important thing you learned about writing prompts?
Patterns: What Makes a Good Prompt?¶
Pattern 1: Specificity Beats Vagueness¶
Bad: "How do I code?" Good: "Write a Python function that checks if a number is even."
Specific prompts get specific answers. Vague prompts get generic answers.
Pattern 2: Add Context¶
Bad: "Write a function." Good: "I'm a beginner. Write a Python function..."
The LLM uses context to pitch the response at the right level.
Pattern 3: Say What You Want¶
Bad: "Write code." Good: "Write code that includes a docstring, two examples, and handles negative numbers."
Explicit requests (format, structure, edge cases) produce better results.
Pattern 4: Verify and Iterate¶
If the first response isn't what you wanted, follow up:
You: "That's close, but the function should also validate that the input is an integer. Can you fix it?"
The LLM will read your feedback and improve.
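A revised response might look something like this sketch (the exact structure and wording will vary by model):

```python
def is_even(n):
    """
    Check if a number is even.

    Args:
        n: An integer

    Returns:
        True if n is even, False otherwise

    Raises:
        TypeError: If n is not an integer.
    """
    if not isinstance(n, int):
        raise TypeError("n must be an integer")
    return n % 2 == 0
```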
Pattern 5: Understand the Limits¶
Even a great prompt can't make the LLM do something it fundamentally can't do. For example:
Bad: "Look up the current stock price of Apple using the internet." (The LLM can't access the internet without tools.)
Good: "Write Python code that would fetch the current stock price of Apple using a financial API like Alpha Vantage." (The LLM can write the code; you run it.)
Key Takeaway¶
A good prompt is like a good specification. If you tell a human developer "fix the bug" with no more details, they'll be lost. But if you say "Fix the login button: it's not accepting valid emails with a + sign," they can get to work.
LLMs are the same. The more specific, the better.
Reflection Questions¶
Think about these before moving to the reflection file:
- When you write a prompt to an LLM, what are you actually doing? (You're asking a statistical model to predict the most likely next tokens. Does that change how you think about prompts?)
- If an LLM gave you wrong code confidently, is it lying? Or something else?
- In Week 2, you'll prompt OpenCode (an agent) to write real code in your repo. What have you learned this week that will help you give OpenCode better instructions?
- What kinds of prompts would you not send to an LLM, and why?
- The instructions you write for your team members and the prompts you write to an LLM have something in common. What is it?
What's Next¶
Move to reflection.md to synthesize everything you've learned this week, then prepare for Week 2, where you'll install OpenCode and run your first agent end-to-end.