Published January 25, 2026

How to Elevate Learning (Not Just 'Catch Cheating') in the Age of AI

AI didn't "break" assessment.

It exposed which parts of our grading were never measuring thinking in the first place.

If a student can generate a polished essay in minutes, the real question isn't "How do we catch them?"

It's this:

Are you assessing a learner's understanding…

Or their ability to access tools?

Many schools have tried two quick fixes: ban AI or detect AI.

Both approaches are fragile.

Detectors will keep getting weaker as models improve.

And bans don't prepare students for real life, where every profession is being rebuilt around AI tools.

A more durable move is to redesign assessment so AI use is assumed, surfaced, and governed.

That's the shift our team made:

We stopped trying to make learning AI-proof.

We started making it AI-amplified.


The pattern is familiar (and that's good news)

Education has been here before.

When calculators entered classrooms, the same fears surfaced: students would "become dependent," lose number sense, and widen equity gaps.

Over time, the debate didn't end with "no calculators ever."

It ended with better task design, clearer learning goals, and smarter assessment conditions.

AI is the same kind of moment.

Not a reason to panic.

A reason to upgrade.


The risk is real: cognitive offloading

There's a specific failure mode teachers are already seeing.

Students don't just use AI to support thinking.

They hand the thinking over.

The OECD warns that unstructured AI use can lead to "cognitive offloading," where learners become dependent on tools instead of developing core skills.

That's why the right response isn't "more surveillance."

It's better design.

Design that makes the human work more visible.

More valuable.

And more measurable.


Three guardrails that make AI use assessable

Before you redesign tasks, set a baseline that applies across departments.

Simple.

Repeatable.

Enforceable.

We use three guardrails:

  1. Students disclose AI use (what, where, why)
  2. Students verify factual claims using acceptable sources
  3. Students own the decisions, reasoning, and final work

This aligns with UNESCO's call for human-centred, institution-level guidance for generative AI in education and research.

Once these norms exist, you can stop wasting energy on guessing.

And start grading what matters.


What to stop grading (because AI can do it too well)

This is the uncomfortable part.

But it's also freeing.

You can place less weight on:

  • "Clean writing" as proof of learning
  • Summary as a standalone skill
  • First-draft output as the primary artifact
  • Pure recall and easily searchable tasks

These skills still matter.

They're just no longer sufficient evidence of understanding.


What to start grading (because humans must own it)

If you want assessment to survive the AI era, anchor it in human ownership.

Grade things that AI cannot own for the student:

  • Reasoning and tradeoffs
  • Evidence evaluation
  • Transfer to new contexts
  • Argument under questioning
  • Reflection, revision, and metacognition
  • Ethical and responsible use

You'll notice something.

None of those depend on whether a paragraph "sounds like ChatGPT."

They depend on whether the student can think.


Ten AI-amplified assessment formats you can run this term

If you only adopt two, start with Oral Defense and Prompt Journal.

They shift culture fast with minimal disruption.

1) Oral Defense (Micro‑Viva)

Students may draft with AI.

But they must defend the work live.

Short.

Structured.

Scalable.

A typical package:

  • Final artifact
  • A one-page "defense sheet" (claim, evidence, limits, what changed after critique)
  • A 3–8 minute defense + Q&A

A simple prompt that works across subjects:

"You wrote this conclusion. Walk me through how you got there. What alternatives did you consider? What would change your mind?"

2) Fishbowl Seminar + Evidence Tickets

Students can use AI to prepare perspectives and questions.

But every contribution must be anchored in the text or data via "evidence tickets."

If you want a quick visual refresher on the routine, Edutopia's 60-second demonstration is a solid reference.

3) Debate + AI Opposition Research Log

Have students use AI to generate the strongest opposing case.

That removes straw-manning.

Then they build rebuttals and debate live.

You assess argument quality, rebuttal agility, and critical thinking.

4) Revision is the assessment

Stop treating the first draft as the main evidence.

Allow AI-supported Draft 1.

Grade the revision quality and the reasoning behind changes.

One workable weighting model:

  • Final draft quality: 30%
  • Change log with reasoning: 40%
  • Reflection on revision choices: 30%
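
To make the math concrete, here's a hypothetical example (the scores are invented, not from any real rubric): a student who earns 85 on the final draft, 70 on the change log, and 90 on the reflection would receive 0.30 × 85 + 0.40 × 70 + 0.30 × 90 = 80.5. Notice that strong revision reasoning carries more weight than polish.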

5) AI Error Audit

Give students an AI explanation likely to contain mistakes.

Or let them generate one.

Then ask them to identify, correct, and verify errors.

This makes verification a skill, not a policing tool.

A clean rubric anchor here is the Paul‑Elder Universal Intellectual Standards: clarity, accuracy, precision, relevance, depth, breadth, logic, significance, fairness.

6) Prompt Journal (Process Portfolio)

This is your antidote to cognitive offloading.

Students submit a weekly log:

  • Prompt used
  • AI output (summarized)
  • What they accepted and why
  • What they rejected and why
  • What they learned

You're not grading "AI use."

You're grading judgment.

7) Roleplay Simulation

AI plays a stakeholder.

Students respond live and justify decisions.

Examples:

  • An upset parent conference
  • A patient interview
  • A dissatisfied customer recovery scenario

8) Micro‑Teach + Peer Cold Questions

Students teach a concept in five minutes.

Then field cold questions.

You assess clarity, depth, and misconception handling.

9) Constraints‑First Design Brief

Let AI brainstorm.

But force students to optimize under real constraints: budget, timeline, accessibility, safety, privacy.

Tradeoffs become visible.

10) Evidence Ladder

Students rank sources strong/medium/weak.

They justify the rankings.

Then they state a final position with epistemic humility.


The rollout that prevents staff overload

Don't flip everything at once.

Pilot.

Expand.

Scale.

A practical phased approach:

  • Phase 1 (2–3 weeks): Oral Defense + Prompt Journal in one course or grade
  • Phase 2 (next unit): Add Fishbowl or Debate
  • Phase 3: Share exemplars, rubrics, and student reflections across departments

That sequence matters.

It reduces initiative fatigue.

And it creates proof inside your own context.


Getting started

Pick one assessment you currently run.

Now answer three questions:

  1. What part of this task could AI complete with minimal understanding?
  2. Where will you see the student's reasoning, not just their writing?
  3. What small "defense" or "process artifact" can you add without redesigning the whole unit?

Then choose one upgrade:

  • Add a 5-minute micro‑viva
  • Add a one‑page change log
  • Add a prompt journal entry

Keep the rest the same.

Start small.

But make the thinking visible.