Day 2: First interactive session

Lesson 2 · 60 min

Lesson note: Day 2 — First interactive session (inside a real repository)

Aligned with the ClaudeForge “Lesson 1.2” naming in setup.sh and the generated README.md.

What this lesson is about

You move from isolated prompts to working inside a real codebase: navigating files, loading only the context you need, making a focused change, and proving it with tests and scripts. The “interactive session” is the claude CLI session in sample-repo, using patterns like /add, /status, and accepting a diff, while tests and verify_session.sh confirm the outcome.

Prerequisites

  • Git 2.40+

  • Python 3.11+ (3.12 OK)

  • Claude Code CLI: npm install -g @anthropic-ai/claude-code, then confirm with claude --version

  • For lesson guards: keep ANTHROPIC_API_KEY unset (enforced by setup.sh / verify_session.sh); use the SKIP_* overrides only if you know why you need them.
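A quick way to check the prerequisites above before starting; this is a sketch, and the version expectations are those listed in this section:

```shell
# Check that the required tools are on PATH (versions: Git 2.40+,
# Python 3.11+/3.12, Claude Code CLI installed via npm).
for tool in git python3 claude; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
  fi
done

# The lesson guards expect ANTHROPIC_API_KEY to be unset.
if [ -z "${ANTHROPIC_API_KEY:-}" ]; then
  echo "OK: ANTHROPIC_API_KEY is unset"
else
  echo "WARNING: ANTHROPIC_API_KEY is set; lesson guards expect it unset"
fi
```

If any tool is reported missing, install it before running setup.sh.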

Learning objectives

Component architecture

Lesson 1.2 system diagram, described: on the developer machine (WSL / macOS / Linux), setup.sh generates the project next to the script: lesson-1-first-session/ containing start.sh, stop.sh, Makefile, and cleanup.sh; dashboard/ (index.html, context_inspector.sh); and sample-repo/ (Python package, tests, scripts, .venv). At runtime, a metrics HTTP server (src/server.py) listens on 127.0.0.1:8765, serving GET /api/metrics and the dashboard HTML at GET /. The claude CLI runs an interactive session inside sample-repo, the browser shows the dashboard UI, and pytest runs tests/unit/ (optionally inside the local .venv).
  • Work in a git-backed sample-repo with a normal src/ layout and pytest.

  • Learn context discipline: use the context inspector to see which paths to load and rough token cost before a session.

  • Run a verification loop: install deps → run tests → (optional) interactive claude session → verify_session.sh.

  • Understand lesson environment rules for early modules: ANTHROPIC_API_KEY unset (CLI / Pro login flow), claude CLI available.
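The verification loop named in the objectives above can be sketched as the following session; the exact paths, notably the location of verify_session.sh relative to sample-repo, are assumptions based on the generated layout:

```shell
# Verification loop: install deps -> run tests -> (optional) claude
# session -> verify. Paths assume the layout generated by setup.sh.
cd lesson-1-first-session/sample-repo

python3 -m venv .venv          # local venv, works on PEP 668-style systems
. .venv/bin/activate
pip install -e .               # editable install of the package

pytest tests/unit/             # baseline: tests should be green

# Optional interactive step:
# claude    # /add src/ and tests/, make a focused change, accept the diff

../verify_session.sh           # re-runs pytest plus the lesson checks
```

Running the tests before the claude session gives you a known-green baseline, so any failure afterward is attributable to the change you accepted.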

Technical objectives (what the scaffold provides)

  • Reproducible layout via setup.sh: everything generated under lesson-1-first-session/ next to the script (not dependent on arbitrary cwd).

  • Editable install + venv (sample-repo/.venv) so installs work on PEP 668–style systems.

  • Runnable harness: stdlib metrics HTTP server (src/server.py), dashboard (dashboard/index.html), start.sh / stop.sh, demo_metrics.sh to exercise counters.

  • Makefile shortcuts: make test, make demo, etc.
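The Makefile shortcuts wrap the underlying commands; only make test and make demo are documented here, so the expansions in the comments below are assumptions about the generated Makefile:

```shell
# Makefile shortcuts (expansions are assumptions, not verified targets):
make test   # roughly: pytest tests/unit/ inside sample-repo/.venv
make demo   # roughly: ./start.sh followed by ./sample-repo/scripts/demo_metrics.sh
```

Using the make targets keeps the commands identical across machines, regardless of whether the venv is activated in your current shell.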

Success criteria (as documented in the project)

Flowchart

Data & control flow, described: demo_metrics.sh curls POST /api/demo/* against server.py, which bumps in-memory counters; index.html in the browser polls GET /api/metrics every 1 s via fetch() for a JSON snapshot. pytest imports src.* and exercises UserAuthService in-process (pure Python, no HTTP), asserting pass/fail. The claude CLI /add's workspace files (CLAUDE.md, src/, tests/); edits are applied on disk, and verify_session.sh validates them with pytest plus read/write checks. The metrics path is separate from both the domain tests and the interactive editing path.
  • get_user_by_email present in UserAuthService (and tests green).

  • make test / pytest tests/unit/ passes.

  • With ./start.sh and ./sample-repo/scripts/demo_metrics.sh, dashboard metrics show non-zero demo-related counters.

  • For Modules 1–7 style checks: echo $ANTHROPIC_API_KEY prints nothing when using lesson verification (unless you intentionally skip the guards).
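One way to walk through the success criteria above end to end; this is a sketch that assumes the documented script locations and the metrics endpoint on 127.0.0.1:8765:

```shell
# Exercise the success criteria: tests green, demo counters non-zero,
# API-key guard holding. Assumes you are in lesson-1-first-session/.
./start.sh                                 # bring up the metrics server
./sample-repo/scripts/demo_metrics.sh      # bump the demo counters

# The snapshot should now show non-zero demo-related counters.
curl -s http://127.0.0.1:8765/api/metrics

make test                                  # equivalently: pytest tests/unit/

# Guard check: empty output means the key is unset, as the lesson expects.
[ -z "${ANTHROPIC_API_KEY:-}" ] && echo "guard OK: ANTHROPIC_API_KEY unset"

./stop.sh                                  # shut the server down again
```

If the counters stay at zero, confirm that start.sh is running and that demo_metrics.sh targeted the same port.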

Out of scope (by design)

  • Production auth (the password hashing is illustrative SHA-256; code comments note that bcrypt should be used in production).

  • Heavy frameworks; metrics server is stdlib-only for the lesson harness.

Need help?