My MBA Journey

Record of my personal journey completing an MBA

OLAD Week 6 – Measuring Success


Introduction to measuring program success

The 2022 LinkedIn Learning Report[1] states that 53% of professionals in the OLAD space now agree that learning and development has a place at C-suite level, up from 24% prior to Covid. However, this figure reflects only the beliefs of OLAD professionals; it would be interesting to see figures on the beliefs of those actually in the C-suite.

Delahaye and Choy (2018)[2] consider that evaluation, the fourth stage of OLAD, is the most controversial and seems to raise a number of questions. Evaluation and measurement are always desirable, but there can be questions around accuracy and methodology. Nor do they come without cost, so the value of an evaluation should be balanced against its cost. Poor evaluation methods will undoubtedly produce poor information of little, if any, value.

Many organisations approach OLAD from the wrong direction. Instead of setting the strategic objectives of the organisation and then pursuing learning and development that serves those objectives, they embark on the programs first. Such a process is potentially doomed to failure for this reason. The learning and development model should be approached from the objectives level, with programs then developed to serve the achievement of those goals and drive behavioural change. Throughout this subject, we have consistently reiterated that organisational learning and development decisions must be underpinned by organisational value and drive genuine behavioural change (Cohen 2022; Phillips & Phillips 2012)[3][4].

This module will focus on the evaluation of program success. Evaluation is central to OLAD, and the most widely accepted method is Kirkpatrick’s Four-Level Training Evaluation. This model considers training on four levels: the learner’s experience, the level of knowledge retention, changes in behaviour, and the impact on the organisation. Another model is Phillips’s Five-Level Model, which builds on Kirkpatrick’s. There are several other models for the evaluation of training, but the module focuses on these two.

Delahaye and Choy (2018)[2] consider evaluation a means to measure and report the efficacy and efficiency of a learning and development program: the efficacy of a program is its quality, and its efficiency lies in its cost. This explanation defines evaluation in the context of this module.

It is also important to distinguish between assessment and evaluation. These words are often used interchangeably, but need distinction for the purpose of this discussion. Evaluation is used to judge the value of the learnings, while assessment is an umbrella term for collecting information on the activities of learners, interpreting it, and describing what has been achieved. Assessment is also used for mapping purposes against the HLOs (Delahaye & Choy 2018)[2]. As a result, learning and assessment are inextricably linked to evaluation through the comparison of assessment methods against learning outcomes.

Figure 1: Linkages in Learning

Linkages in Learning
Source: Biggs & Tang (2007) from Delahaye & Choy (2018)

As mentioned, there are other models for evaluation of learning and development including Kaufman’s Five Levels of Evaluation (Deller 2021)[5] and Anderson’s Model of Learning Evaluation (Deller 2021)[6].

Kirkpatrick’s Four-Level Training Evaluation Approach

This is the most widely used and accepted method of learning and development evaluation (Kirkpatrick 1983).

Figure 2: Kirkpatrick’s Four Level Training Evaluation Model

Kirkpatrick's Four Level Training Evaluation Model
Source: AIHR (n.d.)[7]

The first two levels consider the learner’s experience of the training and the level of knowledge retained as a result. The next two levels are about impact: first on the learner and their behaviour changes, and then on the organisation and how its performance has fared in light of the training.

As mentioned in the introduction, it may appear appropriate to consider OLAD at the first level, but it is actually more appropriate to start at Level 4 and work through the other levels to ensure alignment (Petrone 2022)[8]. After all, it is the strategic directives of the organisation that need to be met first and foremost and the training must align with those.

LinkedIn Learning Course

The LinkedIn Learning course ‘Practical success metrics in your training program’ (41 minutes) by Ajay Pangarkar explains the four levels of Kirkpatrick’s model. Completing this course provided some excellent insights into the model. The most significant takeaway was that evaluation is often about feeding the ego of the organisation and the trainer/s, when it should be focused on the outcomes achieved by the learners. Reframing many of the questions can achieve this and provide valuable information for the organisation and trainer/s as well.

Figure 3: Practical Success Metrics in Your Training Program

Practical Success Metrics in Your Training Program
Source: Pangarkar 2019, Practical success metrics in your training program, LinkedIn Learning video, viewed 2 April 2023,

Other Notes Around Kirkpatrick’s Model

  1. The Kirkpatrick Model should align with the HRDNI
  2. Evaluation from learners at Level 1 provides shallow feedback and needs to be considered in concert with the other levels.
  3. It is worth having learners complete pre- and post-training assessments to identify gaps that existed and have been addressed. The notes also suggest that these assessments can be overstated because individuals are biased and can experience a “Halo Effect”. While this is plausible, some learners may also experience “Imposter Syndrome” and underestimate their abilities.
  4. At level 3, the original performance gap should be revisited and effects considered. What behaviour changes are evident and how has this been measured?
  5. Delahaye and Choy (2018)[9] suggest that Performance Appraisals be utilised to evaluate the more complex areas of the HLO. In particular, this will relate to the meta-abilities area.
  6. Level 4 should consider the performance of the organisation pre- and post-training. Kirkpatrick and Kirkpatrick (2006)[10] consider this the most difficult of all levels to evaluate, as it can be hard to find evidence that identifies impact at Level 4. It may be useful to consider this when designing the training in the first instance and agree on the methodology to be employed.
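The pre- and post-training comparison in point 3 above can be sketched in a few lines of Python. This is purely illustrative: the learner names, scores, and pass mark are invented for the example, not taken from the module.

```python
# Illustrative sketch only: names, scores and the pass mark are hypothetical.
pre_scores = {"alice": 45, "ben": 60, "carla": 72}
post_scores = {"alice": 80, "ben": 65, "carla": 90}
pass_mark = 70  # assumed competency threshold

results = {}
for learner, pre in pre_scores.items():
    post = post_scores[learner]
    # A gap counts as "closed" when the learner moves from below the
    # pass mark before training to at or above it afterwards.
    results[learner] = {"gain": post - pre, "gap_closed": pre < pass_mark <= post}

print(results)
```

Note that a raw score gain (carla’s 18 points) is not the same thing as a closed performance gap, which is why the comparison should be anchored to the original gap identified in the HRDNI rather than to improvement alone.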

Phillips’s Five-Level Evaluation Model

Phillips argued that the Kirkpatrick model did not provide information on the ROI to the organisation, nor did it address the continuous improvement aspects of programs. Consequently, Phillips added a fifth level and published the model.

Level 1 – Reaction

Considers the learner’s reaction to and satisfaction with the training, and how they plan to use what they have learned.

Level 2 – Learning

Measures what changes have occurred in the learner’s knowledge, skills and attitudes. This level of evaluation also seeks to identify the depth of knowledge acquired and retained, and how confident the learner feels about performing better with the new knowledge.

Level 3 – Application

Level 3 still looks at change in behaviour but builds on Kirkpatrick’s work. Phillips not only looks at what change has occurred, but also seeks to identify why it has occurred and what other initiatives might be invoked to improve it even further.

Level 4 – Impact

Considers the impact the L&D initiative has had on the organisation overall. Many factors come into this, however, which is why it is considered complex to evaluate. Any impact measured must be looked at in context with any external or internal influences on the organisation unrelated to the training initiative.

Level 5 – ROI

Here Phillips departs from Kirkpatrick in considering the financial benefits of the training initiative. ROI can be difficult to measure in relation to training because sometimes the changes and benefits are immediate, while at other times they are medium to long term; leadership training is a good example. The ROI Institute (2014) suggests that it is not always feasible to conduct a review for every program.
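The Level 5 calculation is commonly expressed as net program benefits over program costs, as a percentage. A minimal sketch of that arithmetic, using hypothetical dollar figures:

```python
def roi_percent(program_benefits: float, program_costs: float) -> float:
    """ROI as a percentage: net benefits (benefits minus costs) over costs."""
    return (program_benefits - program_costs) / program_costs * 100

# Hypothetical example: a program costing $50,000 judged to have
# produced $120,000 in measurable benefits.
print(roi_percent(120_000, 50_000))  # 140.0
```

The hard part, as the section above notes, is not the formula but isolating and monetising the benefits attributable to the training rather than to other influences on the organisation.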

The V Model

The image below comes from the ROI Institute of Canada. It is based on the Phillips model and demonstrates the process from needs and gap analysis through to delivery and evaluation of a program.

Figure 4: The V Model of Program Evaluation

Figure 4: The V Model of Program Evaluation
Source: Modified by AIB from ROI Institute of Canada (n.d.)

The video below also discusses the Phillips Model of Evaluation and demonstrates the value of being able to identify the ROI on learning and development programs.

NOTE – Failing to identify the ROI on a program can leave it vulnerable to being cut because it isn’t speaking the language that finance people understand. Demonstrating ROI helps secure a program’s viability.

Source: ROI Institute of Canada

Using Learner Analytics to Support Evaluation

Learning Management Systems (LMS) can often provide an overwhelming level of analytical data. It is important to consider the analytics in terms of their value and how they reflect real outcomes of training initiatives.

The LinkedIn Learning course “Applying Analytics to Your Learning” was recommended in the module notes. The course provided useful insights into many of the analytics that can be applied to learning programs.
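As a simple illustration of turning raw LMS data into headline metrics, the sketch below computes a completion rate and an average score. The record format, field names, and figures are all invented for the example; real LMS exports will differ.

```python
# Hypothetical LMS export rows; field names are assumed for illustration.
records = [
    {"learner": "alice", "completed": True, "score": 82},
    {"learner": "ben", "completed": False, "score": None},
    {"learner": "carla", "completed": True, "score": 91},
]

completed = [r for r in records if r["completed"]]
completion_rate = len(completed) / len(records)
average_score = sum(r["score"] for r in completed) / len(completed)

print(f"completion rate: {completion_rate:.0%}, average score: {average_score}")
```

Metrics like these are easy to produce in bulk, which is precisely the point above: they describe activity (Levels 1–2 territory), not behaviour change or organisational impact, and so need to be weighed against the outcomes the training was meant to deliver.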


Figure 5: Applying Analytics to Your Learning Program

Figure 5: Applying Analytics to Your Learning Program
Source: LinkedIn Learning Certificate


This module has provided some extremely useful insights into OLAD from a holistic perspective. It was most useful to identify the linkage between OLAD and program design: start with the ROI and the alignment of the learning program or project with the strategic objectives of the organisation, then work backwards. The Phillips Model from the ROI Institute of Canada above proved most useful as a design and implementation strategy.

  1. LinkedIn Learning 2022, 2022 Workplace Learning Report: The transformation of L&D: Learning leads the way through the Great Reshuffle, viewed 1 August 2022.
  2. Delahaye, B & Choy, S 2018, Human resource development: Learning for innovation and productivity, 5th edn, Mirabel Publishing, Prahran.
  3. Cohen, C 2022, Time to flip the script on evaluation, 28 February, viewed 2 August 2022.
  4. Phillips, JJ & Phillips, PP 2012, Proving the value of HR: How and why to measure ROI, 2nd edn, Society for Human Resource Management, Alexandria, VA.
  5. Deller, J 2021, Kaufman’s Model of Learning Evaluation: Key Concepts and Tutorial, viewed 3 April 2023.
  6. Deller, J 2021, Anderson Model of Learning Evaluation: The Comprehensive Guide, viewed 3 April 2023.
  7. AIHR 2021, ‘A Practical Guide to Training Evaluation’, AIHR, 1 June, viewed 16 March 2023.
  8. Petrone, P 2022, The best way to use the Kirkpatrick model for evaluating L&D impact, 18 March, viewed 5 August 2022.
  9. Delahaye, B & Choy, S 2018, Human resource development: Learning for innovation and productivity, 5th edn, Mirabel Publishing, Prahran.
  10. Kirkpatrick, DL & Kirkpatrick, JD 2006, Evaluating training programs: The four levels, 3rd edn, Berrett-Koehler, Oakland, CA.

Ric Raftis
