# Week 1 Reflection

## Overview
This week, you've been introduced to three big ideas:
- What an LLM is — a statistical text predictor, not a reasoning engine or search engine.
- What "agentic" means — an LLM plus tools plus a feedback loop that lets it read outcomes and adjust.
- The foundation skills — terminal navigation and git version control.
These ideas will anchor everything we do in the next nine weeks. Before moving to Week 2, take 20 minutes to reflect on these questions. There are no right answers — these are for you to build your own mental model.
## Reflection Prompts

### 1. Mental Model of an LLM
An LLM predicts the next token based on what came before. Given that core fact, answer:
Why do you think an LLM sometimes "hallucinates" — confidently says something false?
(Hint: Think about what happens if the training data had a misconception, or if the next-token prediction is just statistically likely but incorrect. No need to be technical.)
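To make that core fact concrete, here is a toy sketch (a word-frequency table, nothing like a real neural LLM, and the training sentence is made up for illustration) showing how "predict the most statistically likely next word" can produce confident output with no notion of truth:

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees these words.
training_text = "the cat sat on the mat the cat ran"
words = training_text.split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — the most frequent follower, true or not
```

Notice that the predictor always answers with the statistically likeliest word and never checks whether that answer is correct; a misconception repeated often in the training data would be "predicted" just as confidently.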
### 2. What LLMs Are Good At (and Not)
This week, we listed things LLMs excel at: explaining code, generating boilerplate, brainstorming, code review, documentation. Think about your own work (school, hobby projects, future job).
Name two tasks you do regularly that you think an LLM could help with, and two that you think it couldn't. For each pair, explain your reasoning in one sentence.
Example: "An LLM could help me write a function signature because it's boilerplate. It couldn't write a novel because novels require emotional coherence over thousands of tokens, which LLMs struggle with."
### 3. The Agent Loop in Your Life
You read → think → act → observe → adjust every day. You test code, see it fail, read the error, and fix it. That loop is how you learn.
The agent loop is the same, but the LLM does the "think" part. What's one advantage of the agent doing the loop vs. you doing it manually?
(Hint: Think about speed, consistency, or how many iterations you can do.)
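The act → observe → adjust cycle can be sketched in a few lines. This is a deliberately simplified stand-in (the `run_check` "test" and the adjustment rule are invented for illustration, not how OpenCode actually works), but it shows why a machine-driven loop can grind through many iterations quickly:

```python
def run_check(value):
    """Stand-in for a test suite: passes only once the value reaches 3."""
    return value >= 3

def agent_loop(max_iterations=10):
    value = 0
    for attempt in range(1, max_iterations + 1):
        if run_check(value):   # act and observe the outcome
            return attempt     # stop as soon as the check passes
        value += 1             # adjust based on what was observed
    return None                # give up after too many iterations

print(agent_loop())  # converges after 4 fast, identical, tireless iterations
```

A human doing this loop by hand gets bored, skips steps, or misreads an error; the loop above runs every iteration the same way, at machine speed.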
### 4. Specificity in Prompts
You probably noticed a trend in Lab 1.2: vague prompts got generic answers; specific prompts got better answers. This works for humans too.
Think of a time you asked someone for help (a teacher, a coworker, a friend) and the answer was useless. What would you have done differently — how would you have asked more specifically?
(This is not a test. Just think about clarity.)
### 5. Trust and Verification
You learned that LLMs can hallucinate. When you use OpenCode starting Week 2, it will write code, and you'll see it. You don't have to trust it — you can read the code, run tests, and reject changes if they're wrong.
Why is it important that OpenCode is a tool you supervise, not a tool that runs on its own?
(Think about responsibility, learning, and safety.)
### 6. Terminal and Git as Muscle Memory
This week, you spent time in the terminal and used `git status`, `git add`, and `git commit`. These commands will become automatic.
Why do you think developers prefer the command line for version control over a GUI? (There's no single answer — think about speed, precision, scripting, or anything else that comes to mind.)
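If you want to replay the week's three commands end to end, here is a minimal, self-contained run in a throwaway repository (the file name and commit message are just examples):

```shell
set -e
repo=$(mktemp -d)            # a scratch directory, safe to delete afterward
cd "$repo"
git init -q
git config user.email "student@example.com"   # required in a fresh environment
git config user.name "Student"

echo "hello" > notes.txt
git status --short           # shows "?? notes.txt" — the file is untracked
git add notes.txt
git commit -q -m "Add notes"
git log --oneline            # the history now contains one commit
```

One answer to the question above is hiding in this very block: because it is plain text, the whole workflow can be scripted, repeated, and shared, which no sequence of GUI clicks can be.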
### 7. Looking Ahead
In Week 2, you'll install OpenCode and run your first agentic task. You'll prompt it to do something in your code. You've now learned what an LLM is, what "agentic" means, and how to write specific prompts.
What's one thing from Week 1 you think will be most useful when you sit down with OpenCode in Week 2?
## How to Use This Reflection
- Write out your answers (in a notebook, a text editor, or even a voice memo). Don't just think through them.
- Be honest. If you're confused, say so. If you disagree with something from the lesson, that's fine too.
- Save your answers. In Week 10 (the capstone), you'll look back and see how your thinking has evolved.
## Common Threads
As you answer these prompts, you might notice three themes:
- Specificity matters — Vague requests (to humans or LLMs) get vague answers. Specific requests get specific answers.
- Feedback loops rule — LLMs improve when they see outcomes (your feedback or test results), and you improve the same way (error messages, code reviews).
- Skepticism is healthy — You shouldn't blindly trust an LLM (it hallucinates) or blindly trust a tool (it can break things). Verifying and reading the output are part of responsible development.
## What Comes Next
Week 2: Installing & First Run
You'll install OpenCode, set up a model provider (Anthropic or OpenAI), and run your first agent command. You'll also learn the difference between plan mode (safe to explore) and build mode (can edit files).
Before then:
- Review the lesson and slides if anything feels fuzzy.
- Complete both labs if you haven't already.
- Keep your answers to these reflection questions — you'll revisit them later.