
Event 8: Assess Performance

How to design assessments that verify mastery, align with objectives, and provide meaningful data about learning.

From Practice to Proof

Practice (Event 6) and feedback (Event 7) support learning. Assessment determines whether learning has occurred.

The distinction is important. During practice, learners have access to supports—hints, guidance, collaboration, reference materials. Assessment removes these supports. Learners must demonstrate competence independently, showing that knowledge and skills have been internalized rather than merely borrowed from external aids.

The cognitive process is again retrieval—pulling information from long-term memory and using it to perform a task. But unlike practice retrieval, assessment retrieval occurs without scaffolding. This is the final test of whether encoding was successful.

Assessment serves two purposes:

  1. For the learner: Confirmation of mastery, identification of remaining gaps
  2. For the instructor/designer: Data on instructional effectiveness—did the instruction achieve its objectives?

Assessment vs. Practice: Key Differences

Understanding the distinction clarifies design decisions:

| Practice (Event 6) | Assessment (Event 8) |
| --- | --- |
| Supports available | No supports |
| Low or no stakes | Stakes attached (grade, certification, pass/fail) |
| Feedback immediate | Feedback after assessment |
| Multiple attempts encouraged | Usually single attempt |
| Learning activity | Measurement activity |

Both involve performance, but assessment measures independent capability.


Alignment: The Non-Negotiable Principle

The single most important principle of assessment design: alignment to objectives.

If Event 2 stated that learners would be able to "analyze customer complaints and recommend solutions," the assessment must require analysis and recommendation—not recognition or recall.

Misalignment between objectives and assessment is one of the most common instructional failures:

  • Misaligned: Objective is "Apply the scientific method." Assessment asks "List the steps of the scientific method."
  • Aligned: Objective is "Apply the scientific method." Assessment presents a novel problem and requires learners to design an investigation.

This misalignment undermines the entire instructional system. Learners prepare for the wrong thing. Instructors get inaccurate data. Achievement appears higher or lower than it actually is.

Design Backwards

The most reliable way to ensure alignment is to design assessments at the same time as objectives—before developing instruction. Ask: "What would learners need to do to prove they achieved this objective?" Design that assessment first, then design instruction that prepares learners for it.


Types of Assessment

Different learning outcomes require different assessment approaches:

Knowledge Tests

  • Multiple-choice questions
  • True/false items
  • Fill-in-the-blank
  • Short answer

Appropriate for: Factual recall, concept recognition, basic comprehension

Limitations: Cannot measure complex application, analysis, or synthesis

Performance Assessments

  • Simulations
  • Demonstrations
  • Real-world task completion
  • Observed practice

Appropriate for: Procedural skills, physical tasks, complex decision-making

Strengths: High authenticity; measures what learners can actually do

Project-Based Assessments

  • Written reports
  • Presentations
  • Designed artifacts
  • Created products

Appropriate for: Synthesis, creativity, complex problem-solving, communication

Strengths: Demonstrates deep understanding and ability to produce

Scenario-Based Assessments

  • Case studies requiring analysis and recommendation
  • Branching scenarios with consequences
  • Problem-solving situations

Appropriate for: Application of knowledge in realistic contexts

Strengths: Tests transfer and judgment, not just recall

Authentic Assessment

Assessing performance on the actual real-world task in its real context:

  • Observing a mechanic perform a repair
  • Evaluating a nurse's patient interaction
  • Watching a teacher deliver a lesson

Strengths: Ultimate validity—if they can do it in the real environment, they've learned it

Limitations: Often impractical, may not be possible until after training


Designing Quality Assessments

Coverage

Assessment should sample the full range of objectives, not just the easiest-to-measure ones. If some objectives are harder to assess, that's not a reason to skip them—it's a reason to invest more design effort.
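One lightweight way to check coverage is an assessment blueprint that maps each objective to the items or tasks meant to assess it. The sketch below, in Python, shows the idea; the objective and item names are hypothetical.

```python
# Assessment blueprint: map each objective to the items or tasks that assess it.
# Any objective with an empty list is not yet covered by the assessment.
# Objective and item names are hypothetical examples.
blueprint = {
    "Analyze customer complaints and recommend solutions": ["case_study_1"],
    "Apply the scientific method": ["design_an_investigation_task"],
    "Select and inspect the correct ladder for a task": [],  # not yet assessed
}

for objective, items in blueprint.items():
    if not items:
        print(f"Coverage gap: no assessment item for '{objective}'")
```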

Validity

Does the assessment measure what it claims to measure? A reading comprehension test that requires extensive background knowledge may actually be measuring prior knowledge, not reading skill.

Reliability

Would the same learner get the same score if assessed again (assuming no new learning)? Subjective assessments need clear rubrics to improve reliability.
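For subjectively scored assessments, one simple reliability check is to have two assessors score the same set of performances against the same rubric and measure how often they agree. The sketch below computes exact percent agreement; the scores are invented for illustration, and more formal statistics (such as Cohen's kappa) exist but are not required here.

```python
# Exact percent agreement between two raters scoring the same six
# performances on a shared rubric. Scores are illustrative only.
rater_a = [3, 2, 4, 3, 1, 4]
rater_b = [3, 2, 3, 3, 1, 4]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"Exact agreement: {agreement:.0%}")  # 83% for these scores
```

Low agreement suggests the rubric's criteria or performance levels need sharper definitions before the assessment is used for real stakes.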

Fairness

Does the assessment give all learners an equal opportunity to demonstrate competence? Are there factors unrelated to the learning that might advantage or disadvantage some learners?

Appropriate Difficulty

Assessment should be challenging enough to differentiate levels of competence but not so difficult that even strong learners fail. The goal is measurement, not discouragement.


Pre- and Post-Assessment

Administering the same or parallel assessment before and after instruction provides powerful data:

  • Learning gain: Direct measure of what was learned
  • Starting point: Reveals what learners already knew (important for adjusting instruction)
  • Effectiveness data: Shows which objectives were achieved and which need instructional improvement

Pre-assessment should be identical or parallel in form and difficulty to post-assessment. Different formats make comparison invalid.
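As an illustration of the data a matched pre/post design can yield, the sketch below computes each learner's raw gain and normalized gain (improvement as a share of the available headroom, one common way to compare learners who started at different points). The scores are hypothetical percentages.

```python
# Raw and normalized learning gain from parallel pre/post assessments.
# Scores are hypothetical percentages out of 100.
learners = {
    "learner_01": {"pre": 40, "post": 85},
    "learner_02": {"pre": 70, "post": 90},
    "learner_03": {"pre": 55, "post": 60},
}

for name, s in learners.items():
    raw_gain = s["post"] - s["pre"]
    # Normalized gain: points gained divided by points that could be gained.
    headroom = 100 - s["pre"]
    norm_gain = raw_gain / headroom if headroom else 0.0
    print(f"{name}: raw gain {raw_gain:+d} points, normalized gain {norm_gain:.2f}")
```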


Context-Specific Examples

Corporate Training (Safety)

After a course on ladder safety:

  • Not a written test (though that might be part of it)
  • A practical assessment where employees must correctly select, inspect, position, and climb a ladder for a specific task
  • Observed by a qualified assessor using a checklist of critical safety behaviors
  • Pass/fail determination based on safety-critical items
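A minimal sketch of how such an observation checklist might be scored, assuming the rule that missing any safety-critical behavior fails the assessment; the behavior names are illustrative.

```python
# Observer checklist for the ladder-safety assessment. Pass/fail is
# determined by the safety-critical items; names and the "any critical
# miss fails" rule are assumptions for illustration.
checklist = [
    {"behavior": "Selects the correct ladder for the task", "critical": True,  "observed": True},
    {"behavior": "Inspects the ladder before use",          "critical": True,  "observed": True},
    {"behavior": "Positions the ladder at a safe angle",    "critical": True,  "observed": False},
    {"behavior": "Maintains three points of contact",       "critical": True,  "observed": True},
    {"behavior": "Stores the ladder correctly afterwards",  "critical": False, "observed": False},
]

passed = all(item["observed"] for item in checklist if item["critical"])
print("PASS" if passed else "FAIL")  # FAIL: a critical behavior was missed
```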

K-12 Technology (2nd Grade)

After learning to use Kidspiration software:

  • A project: "Create a diagram that compares and contrasts farm animals and zoo animals"
  • Assessed against a rubric (see the sketch after this list): Did they use the correct tool features? Is the content accurate? Is the organization logical?
  • Not a quiz about what buttons do what—an actual use of the skill
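One way the rubric in this example might be represented and scored: the three criteria come from the example above, while the 0-3 point scale and the sample scores are assumptions.

```python
# Rubric for the Kidspiration compare-and-contrast diagram. Criteria come
# from the example; the 0-3 scale and one student's scores are assumptions.
criteria = [
    "Uses the correct tool features",
    "Content is accurate",
    "Organization is logical",
]

scores = {
    "Uses the correct tool features": 3,
    "Content is accurate": 2,
    "Organization is logical": 3,
}

total = sum(scores[c] for c in criteria)
print(f"Rubric score: {total}/{len(criteria) * 3}")  # Rubric score: 8/9
```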

eLearning (Software Training)

At the end of a CRM tutorial:

  • A scored post-test with multiple-choice questions testing knowledge
  • Plus: A final simulation task where learners must complete a real workflow without hints
  • Score of 80% or better required for completion certificate
  • Combines knowledge testing with performance assessment
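A sketch of how this combined completion rule might be implemented: the 80% threshold comes from the example, while treating the simulation as a required pass/fail component alongside the quiz is an assumption.

```python
# Completion check for the CRM tutorial: a scored multiple-choice post-test
# plus a no-hints simulation task. The 80% threshold is from the example;
# requiring the simulation to be completed is an assumed rule.
PASS_THRESHOLD = 0.80

def earns_certificate(quiz_correct: int, quiz_total: int,
                      simulation_completed: bool) -> bool:
    quiz_score = quiz_correct / quiz_total
    return quiz_score >= PASS_THRESHOLD and simulation_completed

print(earns_certificate(quiz_correct=17, quiz_total=20, simulation_completed=True))   # True  (85%)
print(earns_certificate(quiz_correct=15, quiz_total=20, simulation_completed=True))   # False (75%)
```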

Higher Education (Nursing)

Final assessment for a clinical skills course:

  • Written exam: Multiple-choice and short-answer questions testing theoretical understanding
  • Plus: Simulation-based practical where students demonstrate clinical procedures on a manikin
  • Both components required for passing
  • Rubrics for practical assessment ensure consistency across evaluators

Professional Development (Teaching)

After a workshop on questioning techniques:

  • Not: A quiz on types of questions
  • Instead: Video recording of participant teaching a mini-lesson
  • Peer and expert review against criteria from the workshop
  • Self-reflection on implementation

When Assessment Reveals Problems

Assessment data is diagnostic. When learners fail to demonstrate mastery:

Most learners succeeded but some didn't:

  • Individual support or remediation may be needed
  • Check for specific gaps or misconceptions
  • Consider whether the assessment measures learning or something else (test anxiety, language barriers)

Many learners failed:

  • The instruction may need revision
  • Check alignment between objectives, instruction, and assessment
  • Was practice sufficient? Was feedback effective?
  • Were prerequisites in place?

Everyone succeeded easily:

  • Assessment may have been too easy
  • May not differentiate levels of competence
  • Consider whether learners have deeper mastery than the assessment was able to detect

Formative vs. Summative Assessment

A final distinction:

Formative Assessment

  • During learning
  • No or low stakes
  • Purpose is to guide learning and instruction
  • Examples: Practice quizzes, exit tickets, learning checks

Summative Assessment

  • After learning
  • Higher stakes
  • Purpose is to certify mastery
  • Examples: Final exams, certification tests, performance evaluations

Event 8 typically refers to summative assessment, but the principles apply to both. Formative assessments should also be aligned to objectives and well-designed.


Key Takeaways

  • Assessment measures independent capability without supports—the proof that learning occurred.
  • The cognitive process is retrieval under test conditions.
  • Alignment to objectives is non-negotiable. Assess what you said you'd teach; teach what you're going to assess.
  • Different learning outcomes require different assessment types: knowledge tests, performance assessments, projects, scenarios.
  • Design assessments at the same time as objectives—before developing instruction.
  • Pre- and post-assessment provides powerful data on learning gain and instructional effectiveness.
  • Assessment results are diagnostic—they tell you about learners and about your instruction.