![[Pasted image 20260105094434.png|500]]

# Python Daily Practice - AI-Adaptive Learning System

## What I Built

I built a Python practice system that adapts to how I learn in real time. It's powered by terminal AI (Claude Code) with direct read/write access to my Obsidian vault via MCP (Model Context Protocol). The result? An AI coach that lives inside my Personal Knowledge Management system, generates daily coding problems, tracks my progress across a network of Zettelkasten notes, and adjusts the curriculum based on what I struggle with.

---

## Why I Built This

I recently finished Google's Python certification but still felt miles away from being job-ready. I'm a project-based learner; I need a real problem to solve, not abstract exercises. But here's the catch: I wanted to practice Python *every day*, not just when I had a side project going.

Traditional coding challenge sites (LeetCode, etc.) are great for interview prep, but they don't adapt to *how I learn*. If I'm struggling with nested dictionaries, they won't adjust future problems to reinforce that concept. They're static. I needed something dynamic, so I built it.

---

## Terminal AI + Obsidian MCP

### The Terminal AI ↔ Obsidian Connection

I use **Claude Code** (terminal-based AI) with the **Obsidian MCP server**. MCP (Model Context Protocol) gives the AI direct read/write access to, and semantic understanding of, my vault—it's not just chatting with me, it's *part of my knowledge system*.

**What this enables:**

![[Pasted image 20260105093523.png|500]]

1. **AI reads my project notes to understand context**
    - I say `"daily problem please"` → AI reads the main project note → sees I'm on Day 5, Phase 1 → generates a problem tailored to the curriculum
2. **AI writes to my vault automatically**
    - Creates the daily problem note in `zettelkasten/zNotes/`
    - Generates a Python starter file with tests in `python_practice/` (a sketch of one appears at the end of this section)
    - Updates the phase note with completion status
    - Logs performance metrics in the main project note
    - All of this happens in one command

![[Pasted image 20260105093653.png|500]]

3. **AI tracks my learning patterns across sessions**
    - During solving, I ask questions like "Why is nested dict access different?"
    - AI logs this as a help request in the daily problem note
    - After I submit my solution, AI analyzes the chat history + my code
    - If I struggled with a concept, it *modifies the curriculum immediately*—adds a remedial problem, reorders topics, or injects extra practice

**The magic:** The AI isn't just answering questions—it's *maintaining a living document system* that evolves with my progress. My notes are the AI's memory, and the AI is the automaton that keeps them organized.

### How My PKM System Makes This Possible

![[Pasted image 20260105093313.png|500]]

My knowledge management setup uses Zettelkasten with satellite notes. Here's how it works for this project:

**Three-tier hierarchy:**

```
Python Daily Practice (main project note)
├── 6 Phase Notes (curriculum chapters)
│   └── 60 Daily Problem Notes
└── Python files in python_practice/
```

Why this structure? If I linked all 60 daily problems directly to the main note, it would be a mess. Phase notes act as "chapters," making the curriculum scannable. The AI knows this structure (it's documented in the agent instructions), so it maintains the hierarchy automatically.
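To make the bottom tier concrete, here's a minimal sketch of what one of those generated starter files looks like in spirit. The problem, function name, and tests are my own illustration (and I've filled in the solution body, which the real file leaves as a TODO), so treat it as an example of the format rather than actual system output.

```python
"""Day 2 - nested dict access (illustrative sketch, not the actual generated file)."""


def get_city(user: dict, default: str = "unknown") -> str:
    """Return the user's city from a nested profile dict.

    In the real starter file this body is a TODO for me to fill in;
    the chained .get() calls below are one idiomatic solution.
    """
    return user.get("profile", {}).get("address", {}).get("city", default)


# --- tests: the generated file ends with a small self-check like this ---
if __name__ == "__main__":
    alice = {"profile": {"address": {"city": "Lisbon"}}}
    bob = {"profile": {}}  # missing nested keys

    assert get_city(alice) == "Lisbon"
    assert get_city(bob) == "unknown"  # the .get() chain avoids a KeyError
    assert get_city({}) == "unknown"
    print("All tests passed")
```

It also happens to show the chained `.get()` idiom that comes up in the help-request examples later in this post.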
**Zettelkasten linking strategy:**

- Main project → Phase notes (overview)
- Phase notes → Daily problems (specific)
- Daily problems → Back to phase + main (bidirectional)

This creates a **knowledge graph**, not a linear tutorial. Six months from now, when I'm building an API and forget how async context managers work, I can search my vault for "async context manager" and jump straight to Day 31's problem. The note is atomic, self-contained, and referenceable.

---

## How It Actually Works - The Agent Prompt

### Starting a Problem

**Me:** `"daily problem please"`

**AI (via MCP):**

1. Reads `GTD/Current Projects/Python Daily Practice.md` → sees I'm on Day 2
2. Reads `Python Daily Practice Phase 1 - Foundations.md` → knows the topic is dict manipulation
3. Creates `Python Daily Practice Day 2 - [Problem Title].md` in `zettelkasten/zNotes/`
4. Generates `python_practice/day_02_problem.py` with TODO sections and tests
5. Updates the phase note: marks Day 2 as active
6. Updates the main note: "Today's Problem" section
7. Replies with a clickable file link: `[python_practice/day_02_problem.py](python_practice/day_02_problem.py)`

I click the link → the file opens in my IDE (Antigravity in Claude Code) → I start solving.

### Solving with AI Help

Here's where it gets interesting. I can ask questions *while solving*:

- "What's the difference between `.get()` and bracket notation?"
- "Why do I need to chain `.get()` for nested dicts?"

The AI answers, but critically, it's **tracking these help requests**. They're logged in the daily problem note under "Help Requested During Solving."

Why does this matter? Because after I submit my solution, the AI analyzes:

- **What I asked about** (chat history)
- **How I solved it** (code quality)
- **Whether I struggled** (Did the tests pass? Did I need multiple attempts?)

This data feeds the adaptive curriculum.

### Submitting & Adapting

**Me:** `"I'm done"`

**AI:**

1. Reads my solution from `day_02_problem.py`
2. Provides feedback (correctness, Pythonic style, edge cases)
3. Asks for a difficulty rating (too easy / just right / too hard)
4. Shows a production-quality reference solution
5. **Curriculum re-evaluation happens here:**
    - Analyzes chat history: "User asked 2 questions about nested dicts"
    - Analyzes solution: "Used chained `.get()` correctly, minor style issues"
    - Compares to expectations: "Mastered key concept on first try"
    - Decision: "No curriculum change needed—pacing is correct"
    - Logs this in the "Feedback & Adjustment" section

If I had struggled? The AI would modify the phase note to add a remedial problem, or skip an easier upcoming problem. **The curriculum is a living document.**

### Progress Tracking

![[Pasted image 20260105094017.png|500]]

![[Pasted image 20260105094131.png|500]]

All of this is logged:

- **Streak tracking:** 1 day → 2 days → ... (visible in the main note)
- **Completed problems:** Each day gets a summary (difficulty, concepts, help requests)
- **Concepts mastered:** Running checklist (✅ nested dict access, ✅ type hints, etc.)
- **Performance metrics:** Difficulty ratings over time, help request trends
- **Learning insights:** "User asks conceptual 'why?' questions—builds mental models well"

---

## Why This Approach Works

### 1. Terminal AI as Workflow Automation

The AI isn't just a chatbot—it's following **agent instructions** I wrote in the project note. When I say "daily problem please," it executes a protocol:

- Check current progress → Generate problem → Update 3 notes → Provide link

This is workflow automation, but flexible: if I say "make Day 3 about async instead," the AI adjusts. It's structured *and* conversational.
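That protocol lives as natural-language agent instructions in the project note, not as a script, but if I had to sketch it as code it would look roughly like this. Every helper below (`read_note`, `write_note`, `generate`, the `problem` fields) is a placeholder of mine, not an MCP API; the only things taken from the real system are the step order and the vault paths.

```python
from pathlib import Path

# Vault locations from the walkthrough above; everything else here is a placeholder.
MAIN_NOTE = Path("GTD/Current Projects/Python Daily Practice.md")
NOTES_DIR = Path("zettelkasten/zNotes")
CODE_DIR = Path("python_practice")


def daily_problem(read_note, write_note, generate):
    """Sketch of the 'daily problem please' protocol.

    read_note, write_note, and generate stand in for the MCP tool calls and
    model reasoning the agent actually performs; only the sequence is real.
    """
    progress = read_note(MAIN_NOTE)               # steps 1-2: where am I in the curriculum?
    problem = generate(progress)                  # e.g. Day 2, dict manipulation

    note_path = NOTES_DIR / f"Python Daily Practice Day {problem.day} - {problem.title}.md"
    code_path = CODE_DIR / f"day_{problem.day:02d}_problem.py"

    write_note(note_path, problem.note_body)      # step 3: daily problem note
    write_note(code_path, problem.starter_code)   # step 4: starter file with tests
    write_note(problem.phase_note, f"Day {problem.day}: active")      # step 5: phase note
    write_note(MAIN_NOTE, f"Today's Problem → [[{note_path.stem}]]")  # step 6: main note
    return code_path                              # step 7: the link I click to start solving
```

Keeping the protocol in prose rather than in a script like this is exactly what makes it overridable mid-conversation.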
If I say "make Day 3 about async instead," the AI adjusts. It's structured *and* conversational. ### 2. MCP Tools = AI with Persistence Without MCP, the AI would forget everything between sessions. With MCP, my notes *are* the AI's memory. It reads the project note every time, sees where I left off, and picks up seamlessly. **Example:** I start Day 5 a week from now. The AI reads: - Current progress: Day 4 completed - Recent performance: Struggled with regex on Day 3 - Curriculum adjustment: Added extra regex problem for Day 6 It doesn't need me to explain context—the notes *are* the context. ### 3. Zettelkasten = Scalable Knowledge Graph By Day 60, I'll have: - 1 main project note - 6 phase notes - 60 daily problem notes - 60 Python solution files If these were just files in a folder, they'd be useless six months later. But with Zettelkasten linking: - I can search "composition pattern" → Jump to Day 23 - I can view Phase 3's note → See all 10 data processing problems at a glance - I can check main note → See my full learning trajectory (concepts mastered, struggles, streaks) The system is designed for **future retrieval**, not just current learning.