
AI-built courses with the AI built in.
Built for educators running project-based courses — coding, art, writing, debate, anywhere students make real work.
A walkthrough.
Five moments from one week of a course. Setup → lesson → learning → feedback → what goes home. AI is in every step, but teachers sign off on what students and parents actually see. Switch examples to see the same setup running a totally different subject.
A teacher describes the course they want to teach.
What used to take a senior teacher two weekends now takes an afternoon.
The teacher runs the class. The AI Copilot is in the sidebar.
- Recap last week · 5 min
- Show 3 agent examples in action · 10 min
- Discuss: when should an agent stop? · 15 min · AI cue: Open the question. Don't define 'stop condition' yourself — let kids surface what makes an agent run forever.
- Pair: write your agent's 5-step loop · 20 min
- Share + critique · 10 min
AI surfaces patterns and prompts. The teacher reads the room and decides.
A student opens a concept. The AI Tutor is right there in the material.
What's the agent loop?
The thing that makes an agent an agent. It plans, then it acts using a tool, then it looks at what happened, then it decides what's next. Over and over — until the goal is met, or it stops itself.
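That loop can be sketched in a few lines of toy Python. Purely illustrative — the course's no-code starter is not this code, and `plan`, `act`, and the tool names here are invented for the sketch:

```python
def plan(goal, history):
    # Toy planner: take one more step until `goal` steps are done,
    # then return None to say "the goal is met — stop".
    return None if len(history) >= goal else f"step {len(history) + 1}"

def act(action, tools):
    # Toy tool call: pretend the first tool carried out the action.
    return f"{tools[0]} did {action}"

def run_agent(goal, tools, max_steps=10):
    """Plan, act with a tool, observe, decide what's next: until done or capped."""
    history = []
    for _ in range(max_steps):            # hard cap: a safety stop rule
        action = plan(goal, history)      # plan the next action
        if action is None:                # the agent decides the goal is met
            break
        observation = act(action, tools)  # act, then look at what happened
        history.append((action, observation))
    return history

print(len(run_agent(3, ["timer"])))  # stops itself after 3 steps
```

Note the two different stop rules: the agent stopping itself when the goal is met, and the hard cap that keeps it from running forever.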
AI tutors patiently — Socratically. Teachers set the rules and watch the conversations.
AI drafts a piece of feedback. The teacher reviews and publishes.
Every session, AI auto-drafts a feedback note from what the student actually did in class — graded against the teacher's rubric. The teacher reviews, edits, and signs off before anything reaches a parent.
Drafted from this session's student work + your rubric.
Teacher edits anything — wording, depth, tone.
Sent under the teacher's name. No auto-send.
Every piece of feedback and every parent report goes through a teacher before it reaches anyone. There's no auto-send button — that's the whole point.
A project page for the student. A report for the parent.
Sophia Chen
Sophia's agent kept getting stuck this week — it would plan a study session, then wait forever for her to confirm. She figured out it was missing a stop rule, wrote one in, broke it differently, and fixed it again. Debugging an agent isn't writing code. It's reasoning about how a thing behaves over time. She's learning to think in loops.
- The agent loop: plan → act → observe → reflect
- Tool calls and what to do when a tool returns nothing
- The hardest agent question: when to stop
- Live showcase · Apr 21 (in class)
- Record the final demo and write a one-line reflection
For the student. A live URL the student keeps. Each project they finish lives here. They can share the link, add the next one on top, and revisit it years later.
For the parent. AI-drafted weekly summary plus one handwritten sentence from the teacher. Sent every Friday.
— Mockup · names illustrative —
Two things students and parents actually save. Both have the teacher's name on them.
A course we'd build first.
4-Week AI Agent Project. Kids design, build, and demo a working agent of their own — one that plans, uses tools, and takes actions over time. With an AI Tutor coaching them through every step. This is the design we're starting from. Putting it out here while it's still movable, in case you'd run it differently.
- For: Ages 11–13 · groups of 8–16
- Duration: 4 weeks · 60-minute sessions
- Final work: A working AI agent — student-designed, demoed live
- Skills built: Agent design · planning loops · tool use · safe-by-default boundaries · testing & iteration
- Tools: No-code agent starter · pre-wired tools (calendar · timer · search · quiz) · AI Tutor · in-browser playground
- What you get: A full Teach · Learn · Practice · Evaluate package — lesson plans, learning materials, task cards, rubrics, parent reports, and teacher prep checklists
Meet the agents
60 min · in-class
- A 60-min session: kids watch 3 working agents run live — a study planner, a weekend organizer, a household helper. Teacher names the parts as they happen: "that's a tool. that's a plan. that's why it stopped."
- Teacher prep: 3 pre-built example agents, a one-page "how to narrate the trace" cheatsheet, and 3 "poke at it" prompts kids can try in the last 10 min.
- AI Learning Companion: a chat-enabled walkthrough of the 3 agents kids just watched. AI asks "what surprised you?" before introducing any vocabulary.
- 5 AI-adaptive concept cards introduced in plain language: a goal, a plan, a tool, an observation, when to stop. Each card uses the examples kids saw — no abstract definitions.
- Open 3 pre-built agents in the in-browser playground. For each, answer one sentence: "what's its goal? what tools does it use? when does it stop?"
- Change one thing in each agent — a word in its instructions, a tool, the stop rule — and watch what happens. AI Tutor reacts: "interesting choice. what changed?"
- Pick 1 of 20 starter ideas (or invent one) — the agent you'd want to build in Week 3. Just the goal; no design yet.
- No grading this week. A 10-min class discussion: "what's something an agent did this week that surprised you?"
- Teacher note (1 line) on each student's chosen Week 3 idea — one encouragement + one thing to think about.
Student leaves with: A short writeup: 3 agents taken apart, what surprised them, and the agent they want to build.
Try the parts
60 min · in-class
- A 60-min session broken into 3 modules — instructions (15 min), tools (15 min), stop conditions (15 min). Each ends with a 5-min "try it" on a shared agent.
- Teacher Copilot tip: "If a student stalls on module 2, skip ahead to module 3 — once they feel a loop stop, the rest clicks."
- AI Learning Companion goes deeper on each part: how instructions shape behavior, why tool calls fail, what makes a stop condition robust.
- 6 AI-adaptive concept cards with worked examples: instruction prompt, tool call, observation, retry, stop condition, refusal — each adapted to the student's chosen idea from Week 1.
- Rewrite a vague instruction prompt to be specific — AI Tutor pushes: "what does 'helpful' mean here?"
- Add one tool to a pre-built agent and watch its behavior shift — predict the change before you run it.
- Deliberately make an agent loop forever, then fix it. AI Tutor explains why your fix worked.
- 3 quick checkpoints (one per module): did the student notice what changed? did they fix it themselves?
- AI drafts a one-line feedback per student per module; teacher reviews, picks one to send.
Student leaves with: Three small wins (one per part) and confidence to design their own loop in Week 3.
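The "make an agent loop forever, then fix it" exercise comes down to a missing stop rule. A toy sketch of the bug and the fix — invented names, not the course's playground:

```python
def confirm_received():
    # Toy stand-in for a user who never clicks "confirm".
    return False

def wait_for_confirm(max_waits=None):
    """Without a stop rule (max_waits=None) this waits forever;
    with one, the agent gives up gracefully instead."""
    waits = 0
    while not confirm_received():       # the bug: nothing else ends this loop
        waits += 1
        if max_waits is not None and waits >= max_waits:
            return "gave up after waiting"  # the fix: a stop condition
    return "confirmed"

print(wait_for_confirm(max_waits=5))  # the stop rule kicks in
```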
Build your own
60 min · in-class
- A 60-min build session. Teacher walks the room, unblocking students one-on-one.
- Teacher Copilot side-panel surfaces patterns: "3 students haven't picked a stop condition yet — nudge them." "This student's instructions are still vague — ask what 'help' means in their case."
- Quick-reference cards from Week 2 are one click away inside the build tool.
- AI Tutor answers "why did mine just do this?" with a trace pointer and a Socratic follow-up — never the fix outright.
- Write your agent's instructions, pick 2–3 tools from the menu, and sketch the 5-step loop.
- Wire it into the starter template — no code; AI-debugged scaffold runs in browser.
- Run 3 test goals (easy, medium, weird edge case) and log every place the agent gets stuck or oversteps.
- AI rehearsal partner plays an impatient, off-topic, or contradictory user — stress-test before showcase. Fix one round of issues.
- Test rubric: 5 dimensions — completion, tool use, safety, recovery from failure, stop quality.
- AI drafts a one-paragraph feedback note per student citing their specific traces; teacher reviews, edits, and sends.
Student leaves with: A working agent + a written list of the one thing they fixed and why.
Polish & showcase
60 min · in-class
- A 60-min live showcase: each student gives their agent one real goal, live, while the class watches it plan and act.
- Teacher prep: room setup, screen sharing, recording, plus an AI-generated run-of-show with timing per student.
- AI Learning Companion on demoing an agent: narrate the plan, point at the trace, handle "but what if it does X?" questions.
- 3 AI-adaptive concept cards: demo narrative, trace storytelling, fielding hard questions.
- Polish one weakness flagged in Week 3 — usually a tighter stop condition or a new refusal.
- Run a 3-minute live demo: one real goal, agent works, narrate the trace.
- Watch your own demo once; write a one-line reflection — AI helps you articulate what you noticed.
- Receive an AI-generated portfolio page (trace highlights + tool list + persona one-liner) — teacher reviews before publishing.
- Final rubric (same 5 dimensions, now scored against the live demo).
- Auto-generated portfolio page per student — teacher reviews before publishing.
- Parent report draft — teacher reviews, adds one handwritten sentence about this student specifically, signs, and sends.
Student leaves with: A working agent, a portfolio page they're proud of, and a parent report sent home.
4-Week Public Speaking Project
A non-STEM sample for ages 9–11 — kids design and deliver a 3-minute persuasive speech. Same Teach · Learn · Practice · Evaluate structure, completely different subject. If the framework really adapts to anything, the second sample should prove it.
Both are being designed. Which one would you want us to ship first? Tell us →
What's in the box.
Seven modules, grouped by Teach · Learn · Practice · Evaluate. Below is roughly the order we'd build them in. We're figuring this out in the open, so if you'd swap the order around, tell us.
Curriculum Builder
The starting point. Tell it a learning goal — say, "a 4-week public speaking course for ages 9–11" — and it generates a full AI-native course package: lesson plans, AI Learning Companions, AI-coached task cards, rubrics, parent report templates.
Teacher Copilot
The companion that turns a senior teacher's instincts into everyone's instincts. Prep chat before class, AI playing a 10-year-old so you can rehearse hard questions, and a quiet in-class sidebar for quick answers mid-lesson.
AI Tutor
The student-facing AI assistant that lives inside every learning material — handbooks, concept cards, task cards. Socratic by default: it asks better questions, doesn't give answers. Teachers set the boundaries.
Student Workspace
Where students submit work — code, screenshots, video, writing, recordings. Organized by project, ready for review.
Feedback Engine
First-pass feedback drafts grounded in your rubric — for students, for teacher review notes, for parent summaries. Plus class-level analysis: upload 20 submissions, get a draft summary of where the class is stuck.
Parent Report & Portfolio
A shareable record of what a student actually built. Every report includes a sentence you wrote yourself, and your name at the bottom.
Institution Console
Classes, teachers, students, your branding, your domain. Minimal at first. Deeper as your school grows.
One rule runs through all of them: AI drafts, teachers decide what students actually see.
We're designing this with educators, not for them.
Flintwise is still being designed. We're putting the draft in front of people who actually teach — before code, before pilots, before anything is locked in. Two ways to get involved:
Join the educators' group
A WhatsApp group of teachers and educators we're designing Flintwise with. Drop a question, a gripe, an idea — or just listen along. The earliest voices shape what we build first.
Join the WhatsApp group