Week 1: What Is This World?¶
Welcome to OpenCode! Over the next 10 weeks, you'll learn to work with AI-powered agents — software that can read your code, think about it, and make changes — all from your terminal. This week, we're building your mental model of what that even means.
What Is an LLM?¶
You've probably heard the term "AI" or "ChatGPT," but let's be precise about what we're actually working with.
A Large Language Model (or LLM) is a machine learning system trained on vast amounts of text. Here's the crucial bit: an LLM does one thing — it predicts the next token (think: word or small piece of text) based on everything that came before. It's playing a very sophisticated guessing game.
When you ask ChatGPT a question, the model reads what you typed and generates an answer one token at a time, always predicting "what should come next?" If it predicts well, the answer is coherent and useful. If it predicts poorly, you get nonsense.
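To make "predicting the next token" concrete, here is a toy sketch. This is not a real LLM — a real model learns billions of parameters — but the decoding loop at the bottom works the same way: pick the most likely next token, append it, repeat.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny
# training text, then always predict the most common successor.
training_text = "the cat sat on the mat and the cat slept"

successors = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word seen in training, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text one token at a time, like an LLM's decoding loop.
token = "the"
output = [token]
for _ in range(4):
    token = predict_next(token)
    if token is None:
        break
    output.append(token)

print(" ".join(output))  # prints "the cat sat on the"
```

Notice that the toy model can only echo patterns it saw in training. That is also why a real LLM can "hallucinate": it produces a plausible-looking continuation whether or not the facts behind it are true.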
What an LLM Is NOT¶
Three things that trip up beginners:
- Not a search engine. An LLM doesn't look things up on the internet (unless you give it tools to do so, which we'll cover later). It generates text based on what it learned during training. Sometimes it will sound very confident about something that's completely false. We call this a hallucination.
- Not always right. LLMs are statistical models. They're usually helpful, but they make mistakes. A model whose training data ends in 2024 won't know about events in 2025. It might misremember an API signature or give you code that looks good but has a subtle bug.
- Not thinking like you do. LLMs don't "understand" in the human sense. They don't reason through problems step-by-step like a programmer does. They predict patterns. This matters because sometimes you'll ask an LLM something that seems simple to you, and it will struggle — and vice versa.
Why We Still Use Them¶
Despite those limits, LLMs are extraordinary at:
- Explaining code
- Generating boilerplate
- Brainstorming solutions
- Reviewing code for style or obvious bugs
- Writing documentation
The key is to use them wisely. We're going to teach you when to lean on an LLM and when to stay skeptical.
What Does "Agentic" Mean?¶
Now here's where it gets interesting. A chat interface — like the one you use in a browser to talk to ChatGPT — only produces text. You type something, the model generates a reply, and that's it. If you want it to do something (like write a file), the model can describe how to do it, but it can't actually do it.
An agentic system is different. An agent is an LLM plus a set of tools plus a loop that lets the model read the results of its actions and decide what to do next.
Here's the skeleton:
- Read — The agent reads your request and can inspect files, run commands, or call APIs.
- Think — The LLM processes what it read and decides what to do.
- Act — The agent executes code, writes files, or calls an API based on the LLM's decision.
- Observe — The agent shows the results back to the LLM.
- Loop — The LLM reads the results, thinks about what comes next, and acts again.
This loop repeats until the task is done or the agent hits a permission boundary.
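The loop above can be sketched in a few lines of Python. The `llm()` and `run_tool()` functions here are stand-ins (assumptions for illustration, not a real model API or toolset): a real agent would call a model provider and execute real commands.

```python
def llm(transcript):
    """Stand-in for a model call: decide the next action from everything read so far."""
    if "tests passed" in transcript[-1]:
        return {"action": "done"}
    return {"action": "run_tests"}

def run_tool(action):
    """Stand-in for tool execution (writing files, running commands, ...)."""
    return "tests passed"

def agent(request, max_steps=5):
    transcript = [request]              # Read: everything the model has seen
    for _ in range(max_steps):          # Loop
        decision = llm(transcript)      # Think
        if decision["action"] == "done":
            return transcript
        result = run_tool(decision)     # Act
        transcript.append(result)       # Observe: feed results back to the model
    return transcript                   # hit the step limit, a safety boundary

print(agent("Add login validation"))
```

The important design choice is the `transcript`: the model never acts blindly, because the result of every action is appended and read on the next pass through the loop.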
Chat vs. Agent: A Concrete Example¶
In a chat:
- You: "Write me a function that reverses a string in Python."
- ChatGPT: [generates code] "Here's a function that reverses a string."
- You have to copy-paste the code into a file yourself. You have to run it. You have to verify it works.
In an agent (like OpenCode):
- You: "Write me a function that reverses a string in Python."
- Agent: [reads your project structure] "I'll create a new file."
- Agent: [writes the file] "Done. Let me run the tests to confirm."
- Agent: [runs tests] "Tests pass! Your new function is at utils.py line 42."
- You see the file. You run the code yourself. The agent did the mechanical work.
The agent is autonomous — it can see the outcome of its actions and adjust. If the tests fail, the agent can read the error and fix the code without you asking. That's the power of the loop.
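For reference, the function in that example is tiny. One common way to write it in Python uses slice syntax with a step of -1:

```python
def reverse_string(s: str) -> str:
    """Return the string reversed, using Python's slice syntax."""
    return s[::-1]

print(reverse_string("OpenCode"))  # prints "edoCnepO"
```

The point of the demo isn't the code itself — it's who does the copying, running, and verifying around it.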
The Agent Loop Diagram¶
Picture this flow:
┌─────────────────────────────────────────────────────┐
│ User Request: "Add login validation to auth.js" │
└────────────────┬────────────────────────────────────┘
│
▼
┌────────────────┐
│ AGENT READS │
│ • auth.js │
│ • tests/ │
│ • package.json│
└────────┬───────┘
│
▼
┌────────────────────────────────┐
│ LLM THINKS │
│ "I need to: │
│ 1. Add validation function │
│ 2. Write tests │
│ 3. Run tests to verify" │
└────────┬───────────────────────┘
│
▼
┌────────────────────────────┐
│ AGENT ACTS │
│ • Writes validation.js │
│ • Writes tests/auth.test │
│ • Runs: npm test │
└────────┬───────────────────┘
│
▼
┌──────────────────────────────┐
│ AGENT OBSERVES │
│ • Tests output: 3 fail │
│ • Error: "email not valid" │
└────────┬─────────────────────┘
│
▼
┌─────────────────────────────────┐
│ LLM THINKS AGAIN │
│ "Tests failed. I need to │
│ fix the regex in my code" │
└────────┬────────────────────────┘
│
▼
┌────────────────────────────┐
│ AGENT ACTS (again) │
│ • Fixes validation.js │
│ • Runs: npm test │
└────────┬───────────────────┘
│
▼
┌──────────────────────────────┐
│ AGENT OBSERVES │
│ Tests pass! ✓ │
└──────────────────────────────┘
The agent keeps looping until it either succeeds, hits an error it can't fix, or you intervene. This is why agentic systems are so much more powerful than static code generation — they can see what they built and fix it.
Terminal Basics (5 Minutes of Power)¶
Before you can work with OpenCode, you need to be comfortable in the terminal. Don't worry — you only need five commands.
What is the terminal?¶
The terminal (also called the command line, shell, or console) is a text interface to your computer. Instead of clicking folders and files, you type commands. It's faster for developers and necessary for working with tools like git and OpenCode.
On macOS, open "Terminal.app" (in /Applications/Utilities/). On Windows, we'll install WSL (Windows Subsystem for Linux) in Week 2.
Five Commands¶
1. pwd — "where am I?"
$ pwd
/Users/santhosh/projects/myapp
2. ls — "what's here?"
$ ls
README.md
src/
tests/
package.json
Use ls -la to see hidden files and more detail.
3. cd — "go to a folder"
$ cd src
$ pwd
/Users/santhosh/projects/myapp/src
cd .. goes up one level. cd ~ goes to your home folder. cd / goes to the root.
4. cat — "show me this file"
$ cat package.json
{
  "name": "myapp",
  "version": "1.0.0"
}
5. echo — "print something"
$ echo "Hello, World!"
Hello, World!
That's it. You can do 80% of what you need with these five.
Git Basics (10 Minutes of Version Control)¶
Git is how you save and track changes to code. OpenCode works with git — it creates commits, branches, and pushes to GitHub. You don't need to be a git expert, but you need the fundamentals.
What is Git?¶
Git is a version control system. It's a way to save snapshots of your code over time, see who changed what, and collaborate with others. Every project (including the one we'll use in Week 2) lives in a git repository (or repo — a folder with a .git/ subfolder that tracks changes).
Three Essential Concepts¶
1. Clone — "copy a repo to my computer"
$ git clone https://github.com/opencode/example.git
$ cd example
$ ls
src/ tests/ README.md ...
2. Status — "what have I changed?"
$ git status
On branch main
Changes not staged for commit:
  modified:   src/app.js
3. Commit — "save a snapshot"
$ git add src/app.js
$ git commit -m "Fix: handle null user in login"
[main abc123] Fix: handle null user in login
1 file changed, 4 insertions(+)
git add marks files you want to save. git commit -m "message" saves a snapshot with a description. The -m flag lets you type the message right there.
(Don't worry about branches, merges, or rebasing yet. We'll cover those in Week 3.)
Demo: Chat vs. Agent¶
This is a critical moment. Let's make the contrast concrete.
Part 1: Chat LLM
1. Open any chat LLM (ChatGPT, Claude, Gemini — free versions are fine).
2. Ask it: "Write a Node.js function that takes an array of numbers and returns the sum."
3. It generates code. You read it. If it's right, you copy it. If it's wrong, you ask it to fix it. Maybe three cycles before it's right.
4. You manually paste it into a file. You manually run it. You manually test it.
Part 2: Agentic System
You'll do this in Week 2 with OpenCode, but for now, imagine:
- You ask OpenCode: "Add a sum function to my math utils."
- OpenCode reads your existing code, your tests, your style.
- It writes the function in the right place.
- It runs your tests automatically.
- If tests fail, it reads the error and fixes the code.
- You see the result. It's done.
The difference: the agent does the work; you supervise. The chat just generates words; you do the work.
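Either way, the function being produced is simple. The demo asks for Node.js, but here is the equivalent in Python (the language of this week's earlier example) so you can see what a correct answer looks like:

```python
def sum_numbers(numbers):
    """Return the sum of a list of numbers."""
    total = 0
    for n in numbers:
        total += n
    return total

print(sum_numbers([1, 2, 3, 4]))  # prints 10
```

When the code is this small, either workflow produces it quickly. The agent's advantage shows up in the surrounding steps: placing the code, running tests, and reacting to failures.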
Recap¶
- LLMs are statistical text predictors. They're powerful but can hallucinate and make mistakes.
- Agentic systems are LLMs plus tools plus a loop that lets the model read outcomes and adjust.
- The agent loop is: read → think → act → observe → repeat.
- Terminal basics — pwd, ls, cd, cat, echo — let you navigate and inspect your code.
- Git basics — clone, status, commit — let you save and track changes.
What's Next (Week 2)¶
Next week, we'll install OpenCode itself, set up your first model provider (Anthropic or OpenAI), and run your first agentic task end-to-end. We'll also talk about the difference between a "plan-mode" agent (safe to explore) and a "build-mode" agent (can edit files).
For now, complete Lab 1.1 (set up your terminal and git) and Lab 1.2 (learn what makes a good prompt by experimenting).