Article 6 of 9 in a series on pedagogy fundamentals in the AI age.
When you read a piece of student work that is technically fine and feels like it was written by no one, what has gone missing has a name.
The name is agency.
Every AI design choice in your classroom is also an agency choice. Agency does not warn you when it leaves a lesson.
The decision about who does the thinking in a lesson is made every single time a student opens a chatbot. Most of the time, we are not the ones making it.
The lessons that look most successful from the outside are sometimes the lessons that are most quietly hollowing out the kid in front of you. The work is finished. The grades are fine. The kids seem confident.
Something has gone missing that none of us are tracking.
The book that gives us the vocabulary to name it — and therefore design against it — is Artificial Intelligence and Human Agency in Education, edited by Adarkwah and colleagues. Without the vocabulary, we can only feel vaguely uneasy. With it, we can build a diagnostic.
Why "it's just a calculator" is the most dangerous lie
Every faculty meeting for three years now: "It's just a tool. We adjusted to calculators. We'll adjust to this."
It's wrong, and it's the lie that costs the most.
A calculator does arithmetic. It does not draft your argument. It does not decide what question to ask next. It does not produce a finished essay that sounds plausibly like you. A calculator extends one cognitive sub-routine and leaves everything around it untouched. Your thinking still has to assemble the problem, choose the operation, interpret the result, and decide what it means.
Generative AI is not that. Generative AI does the whole sentence. Or the whole paragraph. Or the whole plan. Or the whole reflection. It substitutes for the entire act of cognition.
From the book
AI can substitute for critical reasoning and thereby reduce student agency to take control of their own learning.
— Artificial Intelligence and Human Agency in Education
The word the researchers chose, deliberately, is substitute. Not augment. Not support. Substitute.
Agency is a muscle. Critical reasoning is a muscle. Self-regulation is a muscle. Use-it-or-lose-it. There is no warning light on the dashboard of a sixteen-year-old that says "agency depleted, pull over."
Bringing AI into a lesson is not a neutral technical choice. It is a pedagogical decision about who gets to do the thinking. If we don't make that decision on purpose, the default — every single time — is that the AI does it and the kid doesn't.
A vocabulary for what is being lost
The book's gift is naming. Once a thing is named, it can be defended.
Disguised completion.
"Students may become overly reliant on these technologies for guidance, feedback, and decision-making. This could potentially undermine students' ability to engage in independent problem-solving, critical analysis, and self-directed learning."
Three things on that list — guidance, feedback, decision-making — used to be the student's job to grow into. AI now offers them frictionlessly, on demand. The muscle is not being refused. It is no longer being exercised.
Self-regulated learning (SRL).
"Self-Regulated Learning is a critical skill set that allows learners to plan, monitor, and reflect on their progress."
Plan. Monitor. Reflect. In an unexamined AI lesson, all three migrate from the student to the model. The AI plans the structure. The AI monitors progress through real-time feedback. The AI reflects in a tidy summary at the end.
The cycle still runs. The student is just no longer the one running it.
Authenticity.
"Since AI can create almost original and natural-sounding text, there is a risk that individuals may misuse it by opting to utilize AI-generated content without proper authorization. This practice undermines issues of copyright, correctness, accuracy, ownership, and authenticity."
Four of those concerns are technical. The fifth — authenticity — is the pedagogical heart of the matter. Authenticity is what we lose when the student stops being the author of the sentence on the page.
The empty artifact.
"Students' assignments may be completed devoid of any actual intellectual effort as expected of learners and work produced may not necessarily reflect their academic achievement or levels of understanding, as traditionally viewed."
A coroner's report. The work was produced. Intellectual effort was not. The work no longer reflects the student. Our entire feedback loop — read the work, infer the kid's understanding, respond — has been severed at the joint.
What you do on Monday
Six skills. Each one a deliberate defense of agency.
1. The Pre-Prompt Compass. Before any student types into an AI tool, they hand-write — on paper, no screen — three lines:
- What I am trying to do. (Their actual goal, in their own words.)
- Why this matters to me. (A connection to a real situation.)
- What I think the answer might be, before AI helps me. (A first guess. Especially a rough one.)
Only after those three lines exist can they open the chatbot. Those three lines come back stapled to the final piece of work. If they're missing, the work is not accepted.
"Before you give me any help on this task, ask me three questions: What am I trying to do? Why does it matter to me? What is my first guess at the answer? Do not produce any content until I have answered all three in my own words."
The kid is the author of the question, not a passenger.
2. AI-Off Reps. Designate at least one task per week as AI-Off. Not "AI-restricted." Not "AI-light." Off. No tabs, no phones, no tools. Structure it around the SRL cycle in miniature: plan (three lines on how you'll approach the task), act (fifteen minutes of work without external help), reflect (three lines on what worked and what didn't).
The AI-Off Rep isn't a punishment. It's a gym. The whole purpose is to keep the SRL muscle innervated.
3. The Voice Print. At the start of every unit, each student produces a 200-word AI-free Voice Print — writing on a topic they care about, hand-written or typed without tools, in one sitting. Save it. Reference it.
When AI is allowed, students compare drafts to their Voice Print: Where can I hear myself? Where does the writing sound like the model? What is one sentence I want to rewrite in my own voice, even if it gets technically worse?
Make the third one mandatory. The willingness to make a sentence "technically worse" in order to make it yours is the moment ownership returns.
"You are an editor for a student writer with a distinct voice. I will share their Voice Print first. Then a draft. Your only job is to highlight passages that do NOT sound like the Voice Print, and ask questions that help the student rewrite those passages in their own voice. You may not rewrite anything yourself."
4. The System 2 Tax. For every AI-permitted task, build in a non-skippable, AI-off cognitive step the student must complete before the AI is allowed to help.
Examples:

- Write the three hardest sub-questions inside this problem before asking AI.
- Produce a wrong-but-honest answer in your own words.
- Predict what the AI will say, and write it down.
- After the AI gives you an answer, write the strongest counter-argument without AI.
"I am working on a task. I will share my own first attempt — including where I got stuck and what I think the hardest sub-questions are. Your job is not to give me the answer. Your job is to interrogate my attempt: tell me where my reasoning is weakest, what I haven't considered, and what question I should ask myself next. Do not produce a finished solution."
The student does the thinking. The AI raises the cost of doing it badly.
5. The Goal Receipt. Adaptive AI tutors are seductive — they "personalize the path." Watch what that means: the AI plans, the AI monitors, the AI reflects. The student is the object of the SRL cycle, not the subject of it.
Before any AI-driven adaptive session, the student writes a Goal Receipt: what they want to be able to do, how they will know they got there, where they expect this to be hard. At the end, they revisit. Did I hit my goal? Was the platform's path the same as mine?
The platform can adapt all it wants. The kid owns the destination.
6. Reflection on the Hard Drive. Reflection is the easiest thing for AI to fake. The model has read ten thousand reflections. It can produce the form. It cannot produce the function.
Structure reflection prompts around things only the student can know:
- What is one moment in this unit when you were genuinely confused, and what was the confusion specifically about?
- What is one thing you said out loud, or wrote down, that turned out to be wrong? How did you find out?
- What is one feedback comment from me, or a peer, that genuinely changed how you saw the work?
- What would you do differently next time, in concrete terms — not "study more" but a specific behavior?
The AI doesn't know which moment. It doesn't know what they said. It doesn't know your comments.
Specifics defeat the model. Specifics are the proof of presence.
The slogan
Design for agency every day.
Not because agency is a nice-to-have. Because agency is the entire developmental project of education, and AI is the first technology in human history that can substitute for it without anyone noticing.
Calculators couldn't. Search engines couldn't. AI can.
The most pedagogically dangerous AI lesson isn't the bad one. It is the one that quietly does all the cognition for the student.
AI is the most plausible-looking substitute for thinking that has ever entered a classroom. It is fluent, fast, patient, and in many situations more competent than the person it is helping. Precisely because it is all of those things, it is uniquely capable of disguising the absence of the kid behind the work.
Your job — the job you cannot delegate — is to make sure the kid stays in the work. The planning happens in their head. The monitoring happens in their head. The reflection happens in their head.
When the AI is finally turned off — and it will be turned off, in a high-stakes moment you cannot predict — the kid must still be able to plan, monitor, and reflect on their own.
The researchers themselves admit how little we know:

"The existing studies largely focus on generalized mental health outcomes... but there is a notable lack of in-depth examination of how AI-driven technologies influence cognitive processes like memory retention and critical thinking, especially in contexts where individuals are highly dependent on these technologies."
Don't wait for the longitudinal data. Design for agency now. The cost of getting this wrong is paid in students who, at twenty-five, find that they cannot think hard things through.
If you've been building agency-protecting practices in your own classroom, consider teaching with us at EDodo. The world is short on educators who can hold the line on human agency and speak fluent AI.
There aren't many of you yet.
Source: Adarkwah, M. A., et al. (eds.) (2024). Artificial Intelligence and Human Agency in Education: Volume One. All quotes verbatim.
