By Irina Ketkin

How to Use Kirkpatrick's 4 Levels of Evaluation

The Kirkpatrick Model of evaluation has been the gold standard for decades. It is an indispensable framework for trainers, facilitators and educators across industries. If you’re just stepping into the world of L&D, this guide is a must-have.


Let’s demystify the four fundamental levels of evaluation – Reaction, Learning, Behavior and Results – with practical explanations, examples, and tips on collecting and analyzing data.






What is the Kirkpatrick Model?


The Kirkpatrick Model has been the standard for learning evaluation for decades. But don’t let its simplicity fool you – it packs a punch. There are four levels at which you evaluate learning:

  1. Level 1, "Reaction", is all about first impressions: did the learners enjoy the training?

  2. Level 2, "Learning": did they actually learn something?

  3. Level 3, "Behavior": like a detective, you're looking for evidence of changed behavior back at the workplace.

  4. Level 4, "Results", is the grand finale: did the training make a significant impact on the business?


Brief History and Evolution of the Model


Picture Donald Kirkpatrick as the Albert Einstein of the L&D world. Way back in the 1950s, he had an "aha!" moment and devised this four-level framework. Initially used by the U.S. Army, it soon caught on like wildfire and has been the go-to standard for learning evaluation ever since. Yep, it's the "Beatles" of training evaluation models—timeless and still rocking!


Why Use the Kirkpatrick Model?


Wondering why you should jump on the Kirkpatrick bandwagon? Imagine building a house without a blueprint; you wouldn't know where to start or what to focus on. The Kirkpatrick Model is your blueprint for building a bulletproof learning program. It helps you focus on what's crucial at each stage, from learner satisfaction all the way up to business impact.


The Value of Learning Measurement and Evaluation


Measuring and evaluating training is like being a detective — you're collecting clues to solve the mystery of "Is this training actually working?" The Kirkpatrick Model gives you the tools to turn those clues into a compelling story, full of twists and turns, that ends with a satisfying conclusion: impactful learning that benefits everyone. Evaluating learning and training helps you know what works, what doesn’t, and how to allocate resources for maximum impact.


Level 1: Reaction


What is it?


Level 1 evaluates the initial reactions learners have to your learning event (be that a training, workshop or an online course). It is like the "trailer" for a blockbuster movie – it gives you a quick glimpse of what the audience thinks, but it's not the whole story.


How to collect it?


The most common way is to utilize the so-called “happy sheets”. A simple post-course survey will do the trick – be that on a piece of paper or an online questionnaire. For the latter, you can use paid services like SurveyMonkey or the free Google Forms.


Example Questions


  • How satisfied are you with the training? (Not at all satisfied – Completely satisfied)

  • How relevant was the content to your job? (Not at all relevant – Completely relevant)

  • How likely are you to recommend this session to your colleagues? (Not at all likely – Extremely likely)

  • How can we improve this session in the future?


How to analyze it?


There are two types of data you’d normally collect at this stage – quantitative and qualitative. Quantitative data captures numerical values (9 out of 10 people would recommend the session), while qualitative data captures free text (3 people recommended increasing the length of the session).


No need for a Sherlock Holmes magnifying glass here. Simple statistical methods, like calculating the mean (the average score, where you add up all the numerical values of the responses and then divide them by the number of responses) and mode (the value that occurs most often), can give you a clear snapshot of the general sentiment. And don't overlook the qualitative feedback; it's often where the real gems lie.
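If you want to see what that looks like in practice, here is a minimal Python sketch (all the scores, comments and keywords below are made up for illustration) that turns a batch of happy-sheet responses into a mean, a mode and a rough theme count:

```python
from statistics import mean, mode

# Hypothetical Level 1 responses: overall satisfaction rated 1-5.
satisfaction_scores = [5, 4, 5, 3, 4, 5, 2, 4, 5, 5]

# Free-text answers to "How can we improve this session in the future?"
comments = [
    "More time for practice exercises, please.",
    "Great facilitator, but the room was too warm.",
    "Increase the length of the session.",
]

print(f"Average satisfaction (mean): {mean(satisfaction_scores):.1f} / 5")
print(f"Most common rating (mode):   {mode(satisfaction_scores)}")

# A quick scan for recurring themes in the qualitative feedback.
for keyword in ["time", "length", "pace", "room", "examples"]:
    hits = sum(keyword in comment.lower() for comment in comments)
    if hits:
        print(f"'{keyword}' came up in {hits} comment(s)")
```

The keyword scan is deliberately crude – it just flags words worth reading the comments for; it doesn't replace actually reading them.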


Level 2: Learning


What is it?

At Level 2 we need to figure out what learning actually took place. We want to know if the learners have acquired the skills, knowledge, and attitudes the session aimed to instill. That is, if they attended a Conflict Resolution webinar, do they now know how to resolve conflicts?


Level 2 is all about making sure the training isn't just a flashy show but offers substantial learning value. Think of it as ensuring your learners leave the 'classroom theater' with a toolkit, not just a bag of popcorn.


How to collect it?

There is a multitude of ways to collect Learning data: quizzes, interviews, role-playing exercises, interactive eLearning modules, and so on.


Example Questions

  • Can you list the three main components of our new software?

  • How would you handle a disgruntled customer based on what you learned?

  • Which of these strategies would best optimize our workflow?


How to analyze it?

At this stage, you need to be more methodical. You can look at the quiz results to determine which questions were most and least effective. Or use pre- and post-assessment comparisons to measure the learning changes and (hopefully) gains.
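To make the pre/post idea concrete, here is a rough Python sketch, with invented learners and scores, showing how you might compare assessment results before and after the session:

```python
# Hypothetical pre- and post-training quiz scores (out of 10) for the same learners.
pre_scores  = {"Ana": 4, "Ben": 6, "Chloe": 5, "Dev": 7}
post_scores = {"Ana": 8, "Ben": 7, "Chloe": 9, "Dev": 7}

gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

average_gain = sum(gains.values()) / len(gains)
improved = [name for name, gain in gains.items() if gain > 0]

print(f"Average gain: {average_gain:+.1f} points")
print(f"Learners who improved: {', '.join(improved)} ({len(improved)} of {len(gains)})")

# Flag learners whose scores did not move - a prompt to review either
# the assessment items or how that content was taught.
flat = [name for name, gain in gains.items() if gain <= 0]
if flat:
    print(f"No measurable gain: {', '.join(flat)}")
```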


Level 3: Behavior


What is it?

It’s not enough to know your learners have gained new skills and knowledge. At Level 3 you need to understand whether their behavior has changed over the long term. Does the new learning translate into real-world effectiveness? Did your educational efforts arrive at a meaningful destination?


How to collect it?

Measuring behavioral change can be challenging, to say the least. But not impossible! You can think of this as playing paparazzi: you discreetly ‘snap photos’ of performance to catch learners in the act – positive or otherwise. The methods you can use for this are observations, performance reviews, feedback from peers and managers (collected through surveys or interviews), and so on.


Example Questions

  • What changes have you observed in productivity/performance since completing the training?

  • How successfully is the team using the new communication methods introduced in the training?

  • How many safety incidents have you reported since the new protocol was learned?


How to analyze it?

To measure behavioral change, you need to correlate performance metrics with specific training components to see what’s influencing what. This is where the challenge lies – many factors affect performance, and you need to build a credible link between the learning that took place and the change in performance. For this, you can look at both quantitative data (like sales figures or customer NPS) and qualitative data (like peer reviews or manager feedback) for a more holistic picture.
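One way to start building that connection, sketched below in Python with made-up numbers, is to compare the change in a key metric for people who attended the training against a comparison group who didn’t:

```python
from statistics import mean

# Hypothetical monthly sales figures (in units) before and after the training,
# for a group that attended the workshop and a comparison group that did not.
trained   = {"before": [14, 12, 15, 13], "after": [18, 17, 19, 16]}
untrained = {"before": [13, 14, 12, 15], "after": [14, 13, 15, 14]}

def lift(group):
    """Average change in the metric from before to after the training window."""
    return mean(group["after"]) - mean(group["before"])

print(f"Trained group lift:   {lift(trained):+.1f} units")
print(f"Untrained group lift: {lift(untrained):+.1f} units")

# If the trained group's lift clearly exceeds the comparison group's,
# the behavior change is more plausibly linked to the training - though
# other factors (seasonality, new tooling, management changes) still need
# to be ruled out before claiming cause and effect.
```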


Level 4: Results


What is it?

There is a reason Level 4 evaluation is at the top of the pyramid. At this stage, we need to evaluate the ultimate impact of the training program on organizational goals and bottom-line metrics – think of the likes of performance, revenue, and retention. It's like seeing if the training not only led the horse to water but also made a meaningful difference in the overall health of the herd. So, Level 4 is where the whole journey comes together, proving that your training is not just a 'good to have' but a meaningful contributor to organizational success.


How to collect it?

To collect data at this level, you'll need to go beyond quick snapshots and invest in a long-exposure capture of key performance indicators (KPIs). This could involve an in-depth review of quarterly financial reports, customer satisfaction surveys, or other relevant data that align with the intended outcomes of the training.


Example Questions

  • How has the training impacted revenue?

  • What impact did the leadership program have on employee engagement?

  • What changes to customer satisfaction can be attributed to the training?


How to analyze it?

The analysis phase at Level 4 is where you'll put on your detective hat. You're not merely looking for a thumbs-up or thumbs-down; you're piecing together the narrative of how training has—or hasn't—propelled the organization forward. This involves scrutinizing the data meticulously and perhaps employing more complex statistical methods to draw substantive conclusions about the training's ROI.
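If you do go down the ROI route, the arithmetic itself is the easy bit. Here is a back-of-the-envelope Python sketch with invented figures; the real work is isolating and putting a credible monetary value on the benefits before you ever plug numbers into the formula:

```python
# Hypothetical annual figures for a single training program.
program_cost     = 40_000   # design, delivery, learner time, tools
monetary_benefit = 70_000   # estimated value of reduced errors and churn
                            # that can credibly be attributed to the training

roi_percent = (monetary_benefit - program_cost) / program_cost * 100
benefit_cost_ratio = monetary_benefit / program_cost

print(f"ROI: {roi_percent:.0f}%")                          # 75% in this made-up scenario
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f} : 1")  # 1.75 : 1
```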


Want to learn more about measuring the Return on Investment (ROI) for learning? Check out our Quick Guide to Measuring the ROI of Learning.


Conclusion


And that’s all you need to know about Kirkpatrick’s Learning Evaluation Model. By now, you should have a well-rounded grasp of how to apply this framework to measure the effectiveness of your learning programs. For those new to the field of Learning and Development, this is not just theoretical knowledge — this is your actionable blueprint. So, dive in and start implementing the Kirkpatrick Model in your learning projects.


We'd love to hear how it goes! Please share your experiences, challenges, or questions in the comments section below. After all, the journey of learning and development is one best traveled together. Happy evaluating!

