PRE-LAUNCH · BUILDING IN THE OPEN
Flintwise

AI-built courses with the AI built in.

Built for educators running project-based courses — coding, art, writing, debate, anywhere students make real work.

01 · What it looks like

A walkthrough.

Five moments from one week of a course. Setup → lesson → learning → feedback → what goes home. AI is in every step, but teachers sign off on what students and parents actually see. Switch examples to see the same setup running a totally different subject.

Switch example
01
The setup

A teacher describes the course they want to teach.

Click generate
Course brief
Teacher → AI
Watch a teacher describe a course →
Press generate to begin
Generated course package
AI → Teacher
Output appears here after generate.
Teacher describes
AI generates package
Teacher reviews + publishes

What used to take a senior teacher two weekends now takes an afternoon.

02
The lesson

The teacher runs the class. The AI Copilot is in the sidebar.

Live · in class · Week 2 / Step 3
Teacher view
Today's plan
Week 2
  1. Recap last week · 5 min
  2. Show 3 agent examples in action · 10 min
  3. Discuss: when should an agent stop? · 15 min
    AI cue · Open the question. Don't define 'stop condition' yourself — let kids surface what makes an agent run forever.
  4. Pair: write your agent's 5-step loop · 20 min
  5. Share + critique · 10 min
Teacher Copilot · live
live
3 kids stuck on what 'loop' means. Quick fix?
Try the recipe warmup, 2 min. Ask each: describe how you'd plan dinner. Same shape as an agent loop.
Going with that. Thx.
Ask quietly during class…
Teacher leads
AI sits beside
Teacher decides

AI surfaces patterns and prompts. The teacher reads the room and decides.

03
The learning

A student opens a concept. The AI Tutor is right there in the material.

Student view · Week 1 · concept 4 of 6
Student view
Concept · the agent loop
4 of 6

What's the agent loop?

The thing that makes an agent an agent. It plans, then it acts using a tool, then it looks at what happened, then it decides what's next. Over and over — until the goal is met, or it stops itself.

Plan
Act
Observe
Ask the AI Tutor anything about this card.
AI Tutor
Socratic
I don't get the loop.
Let's try. I'll be your agent. Give me a goal.
remind me to drink water.
Good. What's the first thing I need to figure out — before I can act?
Socratic mode · no direct answers · teacher-set boundaries
Ask the tutor anything…
Student asks
AI tutors Socratically
Teacher sets rules

AI tutors patiently — Socratically. Teachers set the rules and watch the conversations.
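For readers who want the concept card's loop in concrete form: the plan, act, observe, stop cycle can be sketched in a few lines of Python. This is a toy illustration only, not anything Flintwise ships; students work in a no-code playground, and every name below is invented for this sketch.

```python
# Toy sketch of the agent loop: plan, act, observe, decide whether to stop.
# All names here are invented for illustration; the course uses a no-code builder.

def run_agent(goal_count, tool, max_steps=10):
    """Collect `goal_count` observations using `tool`, stopping when the
    goal is met or the step budget runs out (so it can never loop forever)."""
    observations = []
    for _ in range(max_steps):                # stop rule 1: a step budget
        next_input = len(observations)        # Plan: decide what to ask for next
        item = tool(next_input)               # Act: call the one wired tool
        observations.append(item)             # Observe: record what happened
        if len(observations) >= goal_count:   # Decide: stop when the goal is met
            break
    return observations

# A pretend tool that returns square numbers.
print(run_agent(goal_count=3, tool=lambda i: i * i))  # → [0, 1, 4]
```

Both stop rules matter: the goal check ends the loop on success, and the step budget guarantees it ends even when the goal is never met.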

04
The handoff

AI drafts a piece of feedback. The teacher reviews and publishes.

After every class · auto-drafted

Every session, AI auto-drafts a feedback note from what the student actually did in class — graded against the teacher's rubric. The teacher reviews, edits, and signs off before anything reaches a parent.

AI
AI Draft
Teacher
Teacher Review
Parent
Published
AI Draft
Draft · awaiting review

Drafted from this session's student work + your rubric.

Teacher edits anything — wording, depth, tone.

Sent under the teacher's name. No auto-send.

Every piece of feedback and every parent report goes through a teacher before it reaches anyone. There's no auto-send button — that's the whole point.

05
The proof

A project page for the student. A report for the parent.

flintwise.io/sophia-c/atlas
Project page
Published Apr 14 · always live · Flintwise

Sophia Chen

Project · Week 4

Atlas — my study planner agent

Plans · schedules · quizzes · 4 tools wired · knows when to stop
biology test friday
Got it. I'll plan 5 sessions, 25 min each. First one tonight at 7pm OK?
yes
calendar.add(7pm) · ok
Added to your calendar. I'll quiz you at 7:25 — say "stop" anytime.
Excerpt from Atlas · planning a study week
What this is. A page each student keeps. Every project lives here on its own URL — they can share the link, add the next project on top, and revisit it when they're older.
Reviewed & published by Ms. Linh Tran
Flintwise · Weekly Report

Sophia Chen

Build your own · Week 3 of 4

Sophia's agent kept getting stuck this week — it would plan a study session, then wait forever for her to confirm. She figured out it was missing a stop rule, wrote one in, broke it differently, and fixed it again. Debugging an agent isn't writing code. It's reasoning about how a thing behaves over time. She's learning to think in loops.

What she built this week
Atlas · v3 · 3 study plans run end-to-end
Covered this week
  • The agent loop: plan → act → observe → reflect
  • Tool calls and what to do when a tool returns nothing
  • The hardest agent question: when to stop
Next week
  • Live showcase · Apr 21 (in class)
  • Record the final demo and write a one-line reflection
You're welcome to come watch the showcase Friday.
Reviewed by Ms. Linh Tran

For the student. A live URL the student keeps. Each project they finish lives here. They can share the link, add the next one on top, and revisit it years later.

For the parent. AI-drafted weekly summary plus one handwritten sentence from the teacher. Sent every Friday.

Mockup · names illustrative

Two things students and parents actually save. Both have the teacher's name on them.

02 · Sample course

A course we'd build first.

4-Week AI Agent Project. Kids design, build, and demo a working agent of their own — one that plans, uses tools, and takes actions over time. With an AI Tutor coaching them through every step. This is the design we're starting from. Putting it out here while it's still movable, in case you'd run it differently.

Spec sheet
Course overview
For
Ages 11–13 · groups of 8–16
Duration
4 weeks · 60-minute sessions
Final work
A working AI agent — student-designed, demoed live
Skills built
Agent design · planning loops · tool use · safe-by-default boundaries · testing & iteration
Tools
No-code agent starter · pre-wired tools (calendar · timer · search · quiz) · AI Tutor · in-browser playground
What you get
A full Teach · Learn · Practice · Evaluate package — lesson plans, learning materials, task cards, rubrics, parent reports, and teacher prep checklists
Week 1

Meet the agents

60 min · in-class
TEACH
Teacher leads · AI narrates 3 live examples
  • A 60-min session: kids watch 3 working agents run live — a study planner, a weekend organizer, a household helper. Teacher names the parts as they happen: "that's a tool. that's a plan. that's why it stopped."
  • Teacher prep: 3 pre-built example agents, a one-page "how to narrate the trace" cheatsheet, and 3 "poke at it" prompts kids can try in the last 10 min.
LEARN
Student explores · AI Tutor narrates what kids saw
  • AI Learning Companion: a chat-enabled walkthrough of the 3 agents kids just watched. AI asks "what surprised you?" before introducing any vocabulary.
  • 5 AI-adaptive concept cards introduced in plain language: a goal, a plan, a tool, an observation, when to stop. Each card uses the examples kids saw — no abstract definitions.
PRACTICE
Student pokes at it · AI Tutor reacts to changes
  • Open 3 pre-built agents in the in-browser playground. For each, answer one sentence: "what's its goal? what tools does it use? when does it stop?"
  • Change one thing in each agent — a word in its instructions, a tool, the stop rule — and watch what happens. AI Tutor reacts: "interesting choice. what changed?"
  • Pick 1 of 20 starter ideas (or invent one) — the agent you'd want to build in Week 3. Just the goal; no design yet.
EVALUATE
Teacher listens · AI captures notes from discussion
  • No grading this week. A 10-min class discussion: "what's something an agent did this week that surprised you?"
  • Teacher note (1 line) on each student's chosen Week 3 idea — one encouragement + one thing to think about.

Student leaves with: A short writeup: 3 agents taken apart, what surprised them, and the agent they want to build.

Week 2

Try the parts

60 min · in-class
TEACH
Teacher leads 3 mini-modules · AI Copilot tip in each module
  • A 60-min session broken into 3 modules — instructions (15 min), tools (15 min), stop conditions (15 min). Each ends with a 5-min "try it" on a shared agent.
  • Teacher Copilot tip: "If a student stalls on module 2, skip ahead to module 3 — once they feel a loop stop, the rest clicks."
LEARN
Student studies one part at a time · AI Tutor explains, Socratic style
  • AI Learning Companion goes deeper on each part: how instructions shape behavior, why tool calls fail, what makes a stop condition robust.
  • 6 AI-adaptive concept cards with worked examples: instruction prompt, tool call, observation, retry, stop condition, refusal — each adapted to the student's chosen idea from Week 1.
PRACTICE
Student does 3 small exercises · AI presses on each weak spot
  • Rewrite a vague instruction prompt to be specific — AI Tutor pushes: "what does 'helpful' mean here?"
  • Add one tool to a pre-built agent and watch its behavior shift — predict the change before you run it.
  • Deliberately make an agent loop forever, then fix it. AI Tutor explains why your fix worked.
EVALUATE
Teacher reviews mini-checkpoints · AI drafts a one-line note per student per module
  • 3 quick checkpoints (one per module): did the student notice what changed? did they fix it themselves?
  • AI drafts a one-line feedback note per student per module; the teacher reviews and picks one to send.

Student leaves with: Three small wins (one per part) and confidence to design their own loop in Week 3.
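The third Practice exercise above, deliberately making an agent loop forever and then fixing it, can be pictured in code, though the course itself runs in a no-code playground. Everything below is a hypothetical sketch with invented names, shown only to make the exercise concrete:

```python
# Hypothetical sketch of the "make an agent loop forever, then fix it" exercise.
# The real course uses a no-code playground; these names are invented.

def broken_agent(confirmed):
    """No stop condition: if confirmation never comes, this runs forever."""
    while True:
        if confirmed():
            return "scheduled"

def fixed_agent(confirmed, max_tries=3):
    """Same loop with a stop rule: give up after a bounded number of tries."""
    for _ in range(max_tries):   # the fix: a bounded retry count
        if confirmed():
            return "scheduled"
    return "gave up, asked for help"

print(fixed_agent(lambda: False))  # → gave up, asked for help
```

The shape of the bug is the same one Sophia's weekly report describes later: an agent that waits forever for a confirmation that may never arrive, until a stop rule is written in.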

Week 3

Build your own

60 min · in-class
TEACH
Teacher walks the room · AI Copilot surfaces who needs a nudge
  • A 60-min build session. Teacher walks the room, unblocking students one-on-one.
  • Teacher Copilot side-panel surfaces patterns: "3 students haven't picked a stop condition yet — nudge them." "This student's instructions are still vague — ask what 'help' means in their case."
LEARN
Student references on demand · AI Tutor answers debugging questions
  • Quick-reference cards from Week 2 are one click away inside the build tool.
  • AI Tutor answers "why did mine just do this?" with a trace pointer and a Socratic follow-up — never a fix straight out.
PRACTICE
Student designs, builds, tests · AI rehearsal partner plays a messy user
  • Write your agent's instructions, pick 2–3 tools from the menu, and sketch the 5-step loop.
  • Wire it into the starter template — no code; AI-debugged scaffold runs in browser.
  • Run 3 test goals (easy, medium, weird edge case) and log every place the agent gets stuck or oversteps.
  • AI rehearsal partner plays an impatient, off-topic, or contradictory user — stress-test before showcase. Fix one round of issues.
EVALUATE
Teacher reviews + sends · AI drafts feedback citing real traces
  • Test rubric: 5 dimensions — completion, tool use, safety, recovery from failure, stop quality.
  • AI drafts a one-paragraph feedback note per student citing their specific traces; teacher reviews, edits, and sends.

Student leaves with: A working agent + a written list of the one thing they fixed and why.

Week 4

Polish & showcase

60 min · in-class
TEACH
Teacher runs the showcase · AI prep checklist + run-of-show
  • A 60-min live showcase: each student gives their agent one real goal, live, while the class watches it plan and act.
  • Teacher prep: room setup, screen sharing, recording, plus an AI-generated run-of-show with timing per student.
LEARN
Student reflects · AI Tutor coaches the demo narrative
  • AI Learning Companion on demoing an agent: narrate the plan, point at the trace, handle "but what if it does X?" questions.
  • 3 AI-adaptive concept cards: demo narrative, trace storytelling, fielding hard questions.
PRACTICE
Student polishes + delivers · AI generates the portfolio; teacher publishes
  • Polish one weakness flagged in Week 3 — usually a tighter stop condition or a new refusal.
  • Run a 3-minute live demo: one real goal, agent works, narrate the trace.
  • Watch your own demo once; write a one-line reflection — AI helps articulate what you noticed.
  • Receive an AI-generated portfolio page (trace highlights + tool list + persona one-liner) — teacher reviews before publishing.
EVALUATE
Teacher signs + sends · AI drafts rubric + parent report; teacher writes one line
  • Final rubric (same 5 dimensions, now scored against the live demo).
  • Auto-generated portfolio page per student — teacher reviews before publishing.
  • Parent report draft — teacher reviews, adds one handwritten sentence about this student specifically, signs, and sends.

Student leaves with: A working agent, a portfolio page they're proud of, and a parent report sent home.

◇ Drawing board
Also on our drawing board

4-Week Public Speaking Project

A non-STEM sample for ages 9–11 — kids design and deliver a 3-minute persuasive speech. Same Teach · Learn · Practice · Evaluate structure, completely different subject. If the framework really adapts to anything, the second sample should prove it.

Both are being designed. Which one would you want us to ship first? Tell us

03 · In the box

What's in the box.

Seven modules, grouped by Teach · Learn · Practice · Evaluate. Below is roughly the order we'd build them in. We're figuring this out in the open, so if you'd swap the order around, tell us.

TEACH
Building first

Curriculum Builder

The starting point. Tell it a learning goal — say, "a 4-week public speaking course for ages 9–11" — and it generates a full AI-native course package: lesson plans, AI Learning Companions, AI-coached task cards, rubrics, parent report templates.

AI drafts the whole package. You review, refine, and publish under your school's brand.
Building next

Teacher Copilot

The companion that turns a senior teacher's instincts into everyone's instincts. Prep chat before class, AI playing a 10-year-old so you can rehearse hard questions, and a quiet in-class sidebar for quick answers mid-lesson.

AI surfaces patterns and prompts. You read the room and decide.
LEARN + PRACTICE
Building next

AI Tutor

The student-facing AI assistant that lives inside every learning material — handbooks, concept cards, task cards. Socratic by default: it asks better questions, doesn't give answers. Teachers set the boundaries.

AI tutors patiently. You set the rules and watch the conversations.
Building later

Student Workspace

Where students submit work — code, screenshots, video, writing, recordings. Organized by project, ready for review.

AI structures and tags submissions. You shape the project and set the bar.
EVALUATE
Building later

Feedback Engine

First-pass feedback drafts grounded in your rubric — for students, for teacher review notes, for parent summaries. Plus class-level analysis: upload 20 submissions, get a draft summary of where the class is stuck.

AI drafts feedback in seconds. You edit, approve, and send.
Building later

Parent Report & Portfolio

A shareable record of what a student actually built. Every report includes a sentence you wrote yourself, and your name at the bottom.

AI organizes evidence into a beautiful page. You write the one sentence parents remember.
INFRASTRUCTURE
Building later

Institution Console

Classes, teachers, students, your branding, your domain. Minimal at first. Deeper as your school grows.

Your admins set it up once. Everyone else just uses it.
◇ One rule

One rule runs through all of them: AI drafts, teachers decide what students actually see.

◇ Join the conversation

We're designing this with educators, not for them.

Flintwise is still being designed. We're putting the draft in front of people who actually teach — before code, before pilots, before anything is locked in. Two ways to get involved:

Async group

Join the educators' group

A WhatsApp group of teachers and educators we're designing Flintwise with. Drop a question, a gripe, an idea — or just listen along. The earliest voices shape what we build first.

Join the WhatsApp group