EDodo

Article 5 of 9 in a series on pedagogy fundamentals in the AI age.

The student who treats AI like a calculator becomes a calculator. The student who treats AI like a co-thinker becomes a better thinker. And the only person in the room who can teach them which one to be — is you.

Not "AI is good." Not "AI is bad." Something more specific: there is a way of working with AI that makes a person sharper, and a way that makes a person duller. The difference is taught. The difference is modeled. The difference is pedagogy.

The book that names the four moves of the sharper way is Co-Intelligence: Living and Working with AI by Ethan Mollick. The most useful, most practical book on AI for working professionals I've read. Four of its ideas are now the foundation of every AI-classroom policy I help build.

I'll argue today that his four rules are the new pedagogy.


The myth that needs to die

The story going around: AI is so good now that the teacher is the next telephone operator. We will all be obsolete by 2027.

I think it's exactly backwards. The most useful sentence Mollick writes is about a famous study of consultants:

"The AI-powered consultants were faster, better, and more creative than their peers. They also did about twice as much work as a human working alone."

Sounds like consultants are doomed. Read the next bit.

"Human consultants got the problem right 84 percent of the time without AI help, but when consultants used the AI, they got it right only 60 to 70 percent of the time."

Same group. Same AI. Slightly different task. They performed worse with the AI than without it.

Because the task fell on what Mollick calls the Jagged Frontier — a region where the AI looks confident, sounds confident, and is just confidently wrong. The consultants — busy, smart, well-paid — didn't catch it.

AI amplifies whatever judgment the human brings to it.

A student with no judgment, using AI, produces confident garbage. A student with strong judgment produces work two grade levels above where they were before.

The job of the teacher just got more important. Because the only thing standing between a kid and confident garbage is the judgment we are paid to install.

The AI is not the intelligence in the room. The pairing is. And the pairing is taught.


Mollick's four rules — verbatim

Rule 1.

"Always invite AI to the table, even if, in the end, you ultimately decide you don't want it there."

The word "invite" does enormous work. He's not saying you must accept AI. He's saying engage with it — turn it on, try it on the task, see what it does, see where it breaks, then judge whether it belongs.

The AI is at the table whether you invited it or not. The kids brought it. The question isn't whether AI is in your classroom. The question is whether you are in the conversation about it.

When you ban it, you don't remove it. You only remove yourself.

Rule 2.

"Be the human in the loop."

Not "the human at the end of the loop." Not "the human approving the loop." In the loop. Hands on. Brain on. Pushing back. Disagreeing. Editing.

"When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it, which can hurt human learning, skill development, and performance."

Co-intelligence isn't "AI does the work and I check it." Co-intelligence is "I do the work, AI pushes me, I push back, we end up somewhere I could not have gotten on my own."

Take the human's contribution out and you don't have co-intelligence. You have a human-shaped rubber stamp.

Rule 3.

"Treat AI like a person, but tell it what kind of person it is."

The rule most educators skip, because giving AI a "role" feels like playacting. It isn't. It's prompt design — and prompt design is one straight line away from the lesson-design skills you already have.

"Write a paragraph about the French Revolution" gets you a beige, lifeless paragraph.

"You are a sixth-grade history teacher with a class of curious but easily distracted students who love conflict and stories about kids their own age. Write a 200-word paragraph on the French Revolution. Open with a single 12-year-old's eyewitness moment in 1789. Don't mention the word 'revolution' until the third sentence. End on an unanswered question."

Same AI. Same minute. Different output. The AI was always capable of it. You hadn't told it who to be.

Rule 4.

"Assume this is the worst AI you will ever use."

Whatever it can do today, it will do better in three months. Whatever it cannot do today, it might do in three months. Whatever you teach about its limits today is expiring software.

"I have found it quite hard to make predictions about what AI can and can't do: a year ago I was certain AI would be poor at most kinds of visual creative tasks, and then AI got good at them."

If one of the most-read writers on AI gets it wrong within a year, your "AI cannot grade essays" assertion from March is almost certainly wrong now too.


The Jagged Frontier

"Tasks that were easy for an AI can be hard for a human, and vice versa. Where AI works best, and where it fails, can be hard to know in advance."

The frontier of AI competence is jagged — peaks where it is brilliant beside valleys where it is laughably wrong, the peaks and valleys a single inch apart. The same AI, in the same minute, can write a working JavaScript tic-tac-toe game and get the next move on a tic-tac-toe board visibly wrong.

"Demonstrations of the abilities of LLMs can seem more impressive than they actually are because they are so good at producing answers that sound correct, at providing the illusion of understanding."

The illusion of understanding. The AI doesn't produce correct answers. It produces answers that sound correct. Most of the time those overlap. Sometimes they don't.

When they don't, you are the only line of defense.

"In order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the effectiveness of our work, we need subject matter expertise."

Only the expert can spot the AI's quiet failures. Only the kid who knows some history can catch the AI inventing a battle. Only the kid who can do the math can catch the AI's confidently wrong answer. The expert is the only working sensor on the Jagged Frontier.

Our job — building expertise — is more vital than it has ever been.


What you do on Monday

1. Open-Door Policy. Before you decide whether AI is allowed on an assignment:

  • Try the assignment yourself with AI for 15 minutes.
  • Decide what AI use you want (forbidden, disclosed-and-allowed, encouraged), with a reason you can give a student.
  • Tell them in writing.

Never write "no AI" without trying it first. The minimum unit of an honest AI policy is a teacher who has done the assignment.

2. Hands-On-Output. Whenever AI produces something for you, do three things before you ship it:

  • Cut something. At least one paragraph. AI overwrites by default. Cutting forces you to read.
  • Add something. A piece of your knowledge of the kids — the running joke, the misconception you know they'll fall into.
  • Disagree with one thing. Find one claim, framing, or example you wouldn't have made yourself, and replace it.

If you can't do all three, you haven't used the AI. You've been used by it.

3. PAC — Persona-Audience-Constraint. Three sentences. Every prompt.

  • Persona — who is the AI being?
  • Audience — who is the AI talking to? Prior knowledge, common misconceptions, language level.
  • Constraint — what must the output do or not do?

The default-empty-prompt era is over.
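The PAC structure is mechanical enough to template. Here is a minimal sketch in Python — the function name and field names are illustrative, not from Mollick's book — showing how the three sentences assemble into a reusable prompt in front of any task:

```python
# Sketch of the PAC (Persona-Audience-Constraint) template as a reusable
# prompt builder. Names here are illustrative, not an established API.

def pac_prompt(persona: str, audience: str, constraint: str, task: str) -> str:
    """Assemble a three-sentence PAC preamble ahead of the actual task."""
    return (
        f"You are {persona}. "
        f"You are writing for {audience}. "
        f"{constraint} "
        f"Task: {task}"
    )

prompt = pac_prompt(
    persona="a sixth-grade history teacher",
    audience=("curious but easily distracted students who love conflict "
              "and stories about kids their own age"),
    constraint=("Open with an eyewitness moment in 1789, keep it under "
                "200 words, and end on an unanswered question."),
    task="Write a paragraph on the French Revolution.",
)
print(prompt)
```

Fill the three fields before the task every time, and the empty prompt disappears by construction.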

4. Frontier Probe. Force the AI into a verification ritual:

"Answer the following question. After your answer: (a) flag any sentences where you are likely to be hallucinating or guessing, (b) for each factual claim, cite the kind of source where it could be verified, (c) name one expert who would push back on your answer and what they would say."

The AI's tone changes. It starts hedging. That hedging is information — the AI showing you the Jagged Frontier from the inside.
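If you run this ritual often, the probe can live as a fixed suffix appended to any question before it goes to the model. A minimal sketch, with illustrative names:

```python
# Sketch: attach the Frontier Probe verification ritual to any question.
# The constant and function names are illustrative, not an established API.

VERIFICATION_SUFFIX = (
    "\n\nAfter your answer: "
    "(a) flag any sentences where you are likely to be hallucinating or guessing, "
    "(b) for each factual claim, cite the kind of source where it could be verified, "
    "(c) name one expert who would push back on your answer and what they would say."
)

def frontier_probe(question: str) -> str:
    """Wrap a question in the verification ritual before sending it to a model."""
    return question + VERIFICATION_SUFFIX

print(frontier_probe("What caused the fall of the Western Roman Empire?"))
```

The point of the wrapper is consistency: the hedging only becomes a habit-forming signal if every question gets probed the same way.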

5. Glass-Box Practice. Once a week, in front of your students, use AI visibly. Let them see your prompt. Let them see the bad first draft — show the slop. Let them see the edits — talk through what you cut, kept, and changed, and why. Five minutes. The kids will learn more about AI from those five minutes than from any policy document.

You don't get to demand of them what you won't show them.


The Secret Cyborg in the staff room

"50% of workers admitted to using AI at work without approval or disclosure."

Half. Of working adults. Without telling anyone.

Mollick calls them Secret Cyborgs. Drafting emails, summarizing meetings, writing reports, preparing lessons — and not telling anyone, because they're afraid of looking lazy or violating an unwritten rule.

This is happening in your staff room right now.

The cost: good practices never spread. Bad practices never get caught.

The fix is visibility. Visibility starts with whoever has the most permission to be honest first. Try this in your next team meeting, exactly as written:

"I have been using AI this week to [specific task]. The prompt I used was [specific prompt]. The output was [honest assessment]. Here is what I changed and why."

Then sit down. Watch what happens. Someone else will speak. The dam breaks at the first honest sentence.


The slogan

Invest in the partnership every day.

Vivek (after Kent Beck), Co-Intelligence

Not "invest in AI." Not "resist AI." Not "use AI." Invest in the partnership.

Mollick's parting line:

"AI is a co-intelligence, not a mind of its own. Humans are far from obsolete, at least for now."

The whole book turns on the prefix. Co-intelligence. The intelligence in the room is not in the machine, and it is not only in the human. It is in the pairing. The pairing is something a person learns.

The person who teaches that learning — who shows a fourteen-year-old how to argue with a chatbot, edit its drafts, doubt its facts, fold its strengths into their thinking — is one of the most important people in that kid's life right now.

That person is you.


If this is landing, the EDodo flagship — AI-Powered Learning Design — is the cohort version of all this. Eight weeks of project-based building, peer review, real artifacts.

If you're one of those Secret Cyborgs — quietly mastering this in your own classroom — consider stepping out of the shadow and teaching with us. The world needs educators who can hold the line on pedagogy and speak fluent AI, with both hands, with their kids in the room.

There are not many of you yet.


Source: Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Portfolio. All quotes verbatim.