EDodo

Article 2 of 9 in a series on pedagogy fundamentals in the AI age.

You have a student who didn't get fractions last week. Let's call her Maya.

You sit down on Sunday night. You open a chatbot. You type:

"Generate a lesson on fractions for a 6th grader who is struggling."

Out comes a beautiful lesson. Three learning objectives. A pizza hook. An anchor chart. A worksheet. A quiz.

You print it. You run it Monday.

Maya still doesn't get fractions.

So you tweak the prompt. Differentiated. Scaffolded. Engaging. You get worse lessons. Pretty soon you're producing the educational equivalent of professional, polished garbage. Lessons that look finished but don't teach.

This is vibe-teaching — trusting the prompt instead of looking at the lesson. I didn't believe in vibe-teaching even before AI showed up.

Every failure mode I've watched in the AI classroom is described in Design for How People Learn by Julie Dirksen. Twelve years ago. With cartoons.


The lie inside every AI-in-education product

Almost every AI-in-education product on the market hides one assumption inside its pitch:

AI can figure out what your student needs.

Trust the AI. It will personalize. It will adapt. It will diagnose.

It can't. Here is why:

The AI's default assumption is that every learner has a knowledge gap that more information will fix.

The AI was trained on the internet. The internet is a giant pile of information. So when the AI sees a student who isn't performing, it does the only thing the internet ever does. It throws more information at them.

When a learner isn't performing, it's almost never just one gap. It's one of five. And they each need a completely different fix.

Dirksen's Five Gaps

Knowledge — they don't know it.

Skill — they know it, but they can't do it yet.

Motivation — they could, but they won't.

Habit — they want to, but they keep slipping back.

Environment — the room or the system is fighting them.

Only one of those is solved by more information. The other four — skill, motivation, habit, environment — get worse if you pour information at them.

Maya, who has a motivation gap — Maya, who has decided fractions are not for her, that math is not for her, that school is increasingly not for her — Maya is being treated by the AI as if her problem is that she has not yet been told what a numerator is.

The cost of that mismatch isn't paid by the AI. It's paid by Maya.


What Dirksen actually says

On knowing vs. doing.

"The end of the journey isn't just knowing more, it's doing more."

— Julie Dirksen, Design for How People Learn

The AI gives you knowledge transfer. It doesn't give you a journey. No student in the history of school has ever changed their behavior because someone told them three new facts on a Tuesday.

On the diagnostic that catches every "tell-and-test" lesson.

"Is it reasonable to think that someone can be proficient without practice?"

Apply it to fractions. To writing a paragraph. To reading a graph. To giving peer feedback. The answer is almost always no — which means almost everything you teach is a skill, and skills require practice, and AI by default produces information, not practice.

On the motivation gap.

"If somebody knows what to do, but chooses not to do it, that's a motivation gap."

"If you've ever heard a learner say the words 'I know, but…' then you are probably not dealing with a knowledge gap, but rather a motivational one."

When the kid says "I know, but…" — close the AI. The lesson isn't the problem. The reason is the problem.

On expertise blindness.

"Respect Your Learners, For They Are Not You."

You can't remember not understanding the topic. The AI inherits that blindness from your prompt — because the prompt was written from inside your closet. The kid's closet is empty.

On the elephant and the rider.

Dirksen borrows from Jonathan Haidt. Two systems share your brain. The rider is rational: "I should pay attention." The elephant is emotional: "I'm tired, what's on Instagram, this is boring."

"The elephant WANTS, but the rider restrains that wanting. When the elephant and the rider are in serious conflict, guess who usually wins?"

The elephant. Always.

AI-generated lessons, by default, talk to the rider. "Today's learning objective is…" The rider nods. The elephant has not even shown up to class.

On retention.

"Repetition and practice are necessary to successfully retain most learning for the long term."

The AI will happily generate a new lesson on a new topic every day. None reinforces the previous one. By Friday, the previous lesson is gone.


What you do on Monday

1. Verb-First Lesson Design. Before you open any AI, finish this sentence in writing:

"By Friday, my students will be able to ___________ in ___________."

The first blank is a verb the student can be observed doing. The second is a real situation. Apply Dirksen's test: Is it reasonable they could do this without practice? If no, you're designing a skill, not a knowledge dump.

2. Five-Gap Diagnosis. Before generating anything for a struggling student, answer in writing: which of the five gaps is in the way? Then tell the AI.

"My diagnosis of this student is that the primary gap is [GAP], not a knowledge gap. Do not generate explanations, more examples, or extra reading. Instead, design [for motivation: a real reason / for skill: a practice activity / for habit: a tracking ritual / for environment: a redesign of the work surface]. Treat the student as someone who already knows enough."

That single sentence turns the AI from a content firehose into a co-designer.

3. Build Their Shelves First. Before you ask the AI to teach a topic, prompt it to teach the structure. Shelves before content.

"Before you teach this topic, design the mental shelves first. Give me a high-level organizer the learner can hold in their head; 3–5 categories that all subsequent content must fit into; and one everyday metaphor from the life of a [AGE/CONTEXT] learner. Do not generate any content yet. Just the shelves."

The AI is very good at building shelves. It almost never volunteers them, because nothing in its training rewarded slowing down to structure before generating. You have to ask.

4. Bait the Elephant First. Before any lesson is final, ask: what is the elephant going to feel in the first 90 seconds?

Rider-talk to cut: "Today's learning objective is…" "First, we will define…"

Elephant-bait to use: a first-person scenario where the kid has a problem to solve; a surprising fact that contradicts what they already believe; a choice between two genuinely hard options; a mystery the lesson will solve.

"Design the opening 90 seconds of this lesson for the emotional brain, not the rational brain. Use a story, a mystery, a contradiction, a first-person scenario, or an emotional stake — never a learning objective or a definition. The student's first reaction should be 'I want to know what happens next.'"

5. The Reinforcement Loop. Open last week's lesson. Identify the 2–3 ideas that must survive. Design 60 seconds of retrieval — not re-teaching, retrieval — on those ideas. Embed it inside this week's lesson, not as a separate "review." Three weeks later, do it again.

"Before generating today's new lesson, design 60 seconds of retrieval practice on these prior concepts: [LIST]. The retrieval must require the student to produce the answer from memory, not recognize it. Embed inside today's lesson — do not present as 'review.' The student should not realize they are reviewing."

The AI starts every conversation from zero. It has no memory of last week. You have to design the reinforcement loop. The AI cannot.


The strategic layer the AI cannot reach

"A successful learning experience doesn't just involve a learner knowing more — it's about them being able to do more with that knowledge."

— Julie Dirksen, Design for How People Learn

More doing. Not more knowing.

The AI is brilliant at producing more knowing. The AI is structurally incapable of producing more doing — because more doing requires knowing which gap is in the way, what skill is being practiced, how the practice resembles real life, what the elephant cares about, what was taught last week, and what closet has not yet been built in this child's mind.

Every one of those is strategic work. None of it is artifact production.

Once that strategic work is done — diagnose the gap, name the skill, match the practice, bait the elephant, schedule the retrieval — the AI is genuinely useful. It can write the worksheet, generate the role-play script, draft the parent email, differentiate the reading level.

The strategic layer is yours. The artisan layer is the AI's. Stop confusing the two.


The slogan

Your AI does not know your kids. That gap is your moat.

The AI's default assumption is that every learner has a knowledge gap. It's wrong. Most of the time, the gap is something else — skill, motivation, habit, environment — and pouring information at those gaps is exactly the wrong move.

Your job, the most strategic job in the building, is the diagnosis.

Diagnosis is pedagogy. Pedagogy is the moat.


If this is landing, the EDodo flagship — AI-Powered Learning Design — is built on these fundamentals. Eight weeks of project-based building, peer review, real artifacts. Educators who care deeply about pedagogy quietly find each other there.

If you've been running diagnosis, designing for the elephant, and scheduling retrieval well in your own classroom — and you speak fluent AI on top of it — I'd like to hear from you. We're building a faculty.


Source: Dirksen, J. (2015). Design for How People Learn (2nd ed.). New Riders. All quotes verbatim.