The measurement challenge is a common one in the training industry. While we know that L&D plays a key role in helping organizations reach their goals, learning leaders have long struggled to prove the business impact of training.

To learn more and uncover the secrets behind effective training measurement and evaluation, we spoke with Asha Pandey, founder and chief learning strategist at EI Design.

Listen to this episode, sponsored by EI Design, to learn about:

  • Common oversights learning leaders make when measuring the impact of training.
  • How to identify which key performance indicators (KPIs) to measure.
  • Popular measurement models and methods.


The transcript for this episode follows:

Speaker 1:

Welcome to the Business of Learning, the learning leader’s podcast from Training Industry.

Sarah Gallo:

Hello, and welcome to the Business of Learning. I’m Sarah Gallo, an associate editor here at Training Industry.

Taryn Oesch DeLong:

And I’m Taryn Oesch DeLong, managing editor. This episode of the Business of Learning is sponsored by EI Design.

Ad:

EI Design has nearly two decades’ experience helping customers futureproof their training investment. From strategy to development, delivery to measurement, EI Design creates engaging learning environments that have a demonstrable impact on learners and the business. Its services include change management and upskilling trainers for virtual delivery, immersive learning strategies to engage remote learners, and a learning and performance ecosystem that drives continuous learning, performance gain and behavior change. Learn more at eidesign.net.

Sarah Gallo:

The measurement challenge is a common one in the training industry. While we know that L&D plays a role in helping organizations reach their goals, many learning leaders have long struggled to prove the business impact of training. To learn more and uncover the secrets behind effective training measurement and evaluation, we’re speaking with Asha Pandey, founder and chief learning strategist at EI Design. Asha, welcome to the podcast.

Asha Pandey:

Hi, it’s a pleasure to be here on the podcast today.

Taryn Oesch DeLong:

Alright, to start off, Asha, what makes it so challenging to measure training’s impact on business goals?

Asha Pandey:

Yeah, I think that is the important question. Organizations spend an enormous amount of time, energy and money every year on creating and delivering training, yet determining its impact tends to be rather elusive. Everybody acknowledges it’s important, but it doesn’t get done. There are several inherent challenges in this exercise, and I’m going to summarize five of the challenges that we’ve seen with our customers. The first is that during the training needs analysis phase, the focus is predominantly on training’s ability to meet the learning outcomes, and at this stage the view for evaluating the training’s impact is very limited: it is based on learner reactions, registration or completion rates, a little bit of learning efficacy and limited application. The second challenge is that measuring the change in thinking or behavior is quite often deemed too difficult and may not even be attempted. The third, which I think is a very critical bit, is that the business KPIs are not identified upfront. The fourth is that even in situations where the business KPIs do get identified, organizations are constrained by the fact that there is no framework through which they can get the desired analytics to determine whether the training delivered the required impact. And the fifth, which applies in a much smaller percentage of cases, is that even where the L&D teams do have the big picture, the complete perspective of what should be done, they may not have the resources, a combination of teams, tools, frameworks and so on, to collate the data, analyze it and draw actionable insights. Because of this, while everybody acknowledges the need to measure the impact of training, the exercise either doesn’t get attempted or takes just too much time.
So these are some of the challenges that we see, which are fairly inherent to the exercise, but it doesn’t mean that there is no answer.

Sarah Gallo:

Yeah, for sure. I think those are some very real challenges that a lot of learning leaders have either already faced or will face at some point in their careers. Alright, so Asha, with those challenges in mind, what are some common oversights learning leaders may make when measuring the impact of training?

Asha Pandey:

Yeah, so I think one of the things we’ve noticed is that if you measure something with a rather limited set of cues, you’re not going to get the big picture, or the more appropriate picture. What do I mean by this? Remember when I talked about the fact that during the training needs analysis, the focus is predominantly on making sure that the learning outcomes are met? At best, organizations are looking at what are nowadays called the “L&D metrics”: the basic learner reactions, the feedback on the training or the trainer, the number of registrations and the corresponding completion rates. To some degree, the assessment scores are able to measure training effectiveness. But what’s really missing is the more significant component, the “business metrics”: why did the organization make this specific training investment? There should have been a specific gain for the business, which means the training should have been in a position to influence or impact that business goal. So the oversight we definitely see is looking only at the L&D metrics. They are necessary, but they are just the starting point. If I were to draw the picture, the L&D metrics would be the core and the business metrics the concentric circle around that core. So while you start there, you need to make sure that the business metrics are also in place. To achieve this combination, the L&D teams and the business leaders need to collaborate during the early phases of the project, because only then can they decide how these two sets of metrics will be coupled, and only then will you indeed be able to arrive at the impact of the training. Let me pick an example to show what I’m talking about, and this will resonate with our listeners as well.
So let’s assume a situation where an organization is investing in a CRM tool for its sales and marketing teams, replacing a bunch of legacy tools. The L&D team is given the mandate to train about four or five levels in the organization, and it needs to be done in a definitive time of six months. The way the program would be crafted during the training needs analysis is that you’ve got these four or five personas, and each one has an expectation of a certain proficiency level that needs to be accomplished. As a result, the L&D team’s evaluation would also focus on determining the proficiency gain, which to a very large degree they would be able to assess through the scores at the end of the program. But what’s missing in this picture is the business metrics. For the business leaders, the question is: this investment happened; you’ve got these people who’ve been upskilled to this degree; what happens next? If we were to identify a particular KPI, what would resonate for a business leader is improved revenue, improved profitability, maybe more customers or higher customer satisfaction. Now, presume the sales rep in this exercise is able to save one hour a day as they transition from the older legacy tools to a more integrated tool. Saving one hour out of an eight-hour day means that rep can complete 12.5% additional work, whether that is customer outreach or another opportunity pursued at higher volume. This is the kind of indicator that impacts the business directly, and if you have this KPI, you’re going to be able to truly demonstrate the value of the training to the business. I hope that gives a perspective on the oversight and how the bigger picture needs to be taken care of.
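The arithmetic behind this CRM example can be sketched in a few lines. The team size and working days below are invented purely for illustration; only the one-hour saving and eight-hour day come from the example above.

```python
# Hypothetical illustration of the time-savings KPI from the CRM example.
HOURS_PER_DAY = 8
hours_saved_per_rep = 1  # assumed saving from moving to the integrated tool

# Fraction of the workday freed up for additional selling activity.
capacity_gain = hours_saved_per_rep / HOURS_PER_DAY
print(f"Additional capacity per rep: {capacity_gain:.1%}")  # 12.5%

# Scaled across an invented team of 40 reps over 250 working days:
team_hours_reclaimed = 40 * hours_saved_per_rep * 250
print(f"Hours reclaimed per year: {team_hours_reclaimed}")  # 10000
```

The point of the sketch is that a simple, quantified KPI like reclaimed selling hours can be rolled up to a figure a business leader recognizes, rather than stopping at completion rates.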

Taryn Oesch DeLong:

That’s a great example. Thank you. So we know that measuring the impact of training isn’t easy and you’ve broken that down for us. So when thinking about effective training measurement and evaluation, why is it so important that learning leaders identify those training KPIs before they launch a program?

Asha Pandey:

Absolutely. So what happens is that if you decide on the program’s design and development largely from the more traditional perspective, which is the core of achieving the learning outcomes, just as we saw in the previous example, you will tick those boxes but you’re still not touching the business KPIs. So it’s very important that you have a collaborative quantification of both the L&D and business metrics, and it must be done upfront, because otherwise you will not have the follow-up steps to measure the impact on the aspects the business is really seeking. And if you bring in KPIs after the design and development of the training program have been done, you’re likely to have at best a very unreliable outcome. So the key, Taryn, is moving the exercise up to the training needs analysis (TNA) phase: expanding the scope, having both the L&D and the business leaders identify the parameters, and then making sure these are factored into the evaluation framework as well. Maybe I can illustrate it with another example. Let’s say an insurance company wants to implement a training program to improve the accuracy of its estimates. From the L&D perspective, assume the mandate is that one hundred people need to be trained to this particular proficiency. The ticks for the L&D team are that they trained these people within the planned time, and they now have the reactions as well as the assessment scores. But you don’t have a KPI, because you didn’t identify one upfront, and this is where the gap will be from the perspective the business wishes to see. The KPI, if it were identified upfront, might have been: I want to be certain that of these one hundred people, 70 are operating at the right level.
The other 30 people need to move up in proficiency, so I should have a before-and-after view, attributable to the training, to demonstrate the value the business seeks. If you don’t identify the KPIs upfront, you won’t be able to measure them later, so it’s vital that organizations expand this training needs analysis phase. It needs strong collaboration between the learning and development team and the business leaders, and the quantification of the KPIs needs to happen. Both sets of metrics, the L&D metrics and the business metrics, need to be in place.
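The before-and-after proficiency KPI from the insurance example can be expressed as a minimal sketch. All names, proficiency levels and the target threshold below are invented for illustration; the idea is simply that the KPI is the movement of trainees into the target band, measured before and after the program.

```python
# Hypothetical before/after proficiency data (levels 1-5) for four trainees.
before = {"adjuster_a": 2, "adjuster_b": 3, "adjuster_c": 4, "adjuster_d": 2}
after = {"adjuster_a": 4, "adjuster_b": 4, "adjuster_c": 4, "adjuster_d": 3}
TARGET_LEVEL = 4  # assumed threshold for "operating at the right level"

# The KPI is the count of people at or above target, before vs. after training.
at_target_before = sum(1 for v in before.values() if v >= TARGET_LEVEL)
at_target_after = sum(1 for v in after.values() if v >= TARGET_LEVEL)

print(f"At target before training: {at_target_before}/{len(before)}")
print(f"At target after training:  {at_target_after}/{len(after)}")
```

Because the metric is defined before the program launches, the "before" snapshot exists to compare against, which is exactly what is lost when the KPI is bolted on afterward.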

Taryn Oesch DeLong:

Thank you for illustrating that for us. So going off of that, how can learning leaders make sure they’re identifying the right KPIs for their program?

Asha Pandey:

So let’s just take a step back here, Taryn. What we mean when we say a particular L&D program was successful is that it is aligning with, and helping the organization meet, the employee performance gain targets. But I think it’s even more significant that it is impacting the corporate strategy, or sometimes the tactical level, but definitely the corporate strategy. So as an extension, when you evaluate an L&D program against the correct set of KPIs, what you’re doing is ensuring that the program is going to support, yes, the employee performance gain, but also drive the corporate strategy or whatever is required at the tactical level. This bigger perspective is vital: from the range of KPIs that may be on the business leaders’ radar, you need to sift through and identify the most significant ones, and those need to be picked. There are some best practices organizations can use to assess which KPI is right. The first step we recommend is asking yourself: Is the training going to help you solve a specific business problem? Is the L&D program aligned to the corporate strategy? Have you looked for parameters that go beyond the number of hours of training and the headcount it was supposed to cover? Is the KPI you have identified indeed measurable? And has all of this been factored into the TNA phase, so that there is clarity from both the L&D and the business side on what will be measured, and a framework to obtain the data and do the evaluation? These are some of the practices that can help you pick the right KPI at the right time, at the foundational stage, and make sure it does justice to the overall mandate.

Sarah Gallo:

Those are some great tips to keep in mind, Asha. Well, we know it’s definitely important to measure those business metrics that you have mentioned there. Are there any other best practices you have for today’s L&D leaders?

Asha Pandey:

Yeah, definitely. But before that, there is another important bit I’ll touch upon about the KPIs. When we do the evaluation, you will find that the KPIs can be grouped very broadly into two categories. One group gives you an early indication of success; these are called the leading KPIs. Other KPIs, which necessarily have to be measured over a period of time, are called the lagging KPIs. So let me spend a little time on this before I tell you which best practices add value. Leading KPIs, as the name denotes, can be used in the early stages of a program, and they typically give you cues on whether knowledge retention has happened: Is there an improvement? Is application happening? But the crucial ones are the lagging KPIs, which are typically measured 30, 60, 90 or even 120 days after the program, often across multiple iterations, because this is where you see whether the gain in proficiency translated to the benefit the business was seeking. It takes that much time to determine whether it translated to a gain in sales, profitability, market share and so on. Some of the best practices we’ve seen, one of which connects back to a point we’ve already covered, are these: KPIs must be identified before the development of the training program begins. And while you may have identified the right KPI, do you have the inputs to validate that the data pertaining to that KPI is correct? Otherwise it’s garbage in, garbage out. So make sure the data for measuring the KPIs is available and reliable. It’s also important to remember that what can’t be measured can’t be improved, so from the range of KPIs you have, avoid data points that are subjective.
Two important things to remember: Sometimes at the beginning of the evaluation or measurement we’re likely to get results that are not quite in line with our expectations, and we shouldn’t ignore these negative results. A lot of times they are the building blocks toward eventual success. Failure, the way I look at it, is an option, and when we do honest reporting, we’re able to analyze why something didn’t work, and the earlier we know, the sooner remediation or reinforcement can be put into action. On the other hand, if the early indicators are great, that doesn’t guarantee your lagging indicators will be as good, so you still need to wait and measure the lagging KPIs as well.

Taryn Oesch DeLong:

Alright. So now that we’ve clarified lagging versus leading KPIs and the importance of clarifying those before a program, let’s shift gears a little bit. As we record this episode at the start of 2021, learning leaders are hard at work strategizing for the new year. How can measuring training impact position them for success this year?

Asha Pandey:

Absolutely. So the right KPIs can actually help you drive business results, and if you’re able to align the L&D programs to the key corporate strategies or initiatives, that’s where the success lies. Adopting a framework that allows you to do this will make sure, first, that your training is supporting the employee performance gain and, more specifically, that it is aligned to your corporate strategy. It will help you clearly demonstrate the gain for the learners, moving from the basics of learning to application, all the way up to behavioral change. The measurement also demonstrates clear and tangible value for the business. More specifically, it gives the L&D teams a clear perspective on which programs are working, because they are impacting the business KPIs, and which aren’t. You may then want to toggle the budgets and make sure you are maximizing your training spend on the programs that can indeed impact these KPIs. Overall, it gives the organization a competitive edge, and it is also reflected in a phenomenally better ROI on the training spend.

Sarah Gallo:

Those are some great points, Asha. Hopefully all L&D leaders will prioritize training measurement, not only this year, but throughout their careers in the field. While training measurement can be challenging, thankfully there are many models and methods for measuring the business impact of training that can help. Which of these have you found to be most effective?

Asha Pandey:

Yeah, you’re right. There’s a big list out there, so what I’m going to do is explain those that have been valuable in our experience, and of course, all of the models I’m going to talk about are extremely popular as well. I think the model that tops the list is the Kirkpatrick model. I get two kinds of reactions when I present on this topic: there’s the rolling of eyes, thinking it’s a dated old model, not quite there, or that it is too difficult, particularly when it goes to the level of behavioral change or the impact on business. But the fact of the matter is that Kirkpatrick continues to be a strong model used by organizations globally. At one level it gives a pulse of learner reaction and knowledge retention, and there’s room for determining behavioral change and the impact on business. It has also evolved through a related model that adds a fifth level, the Phillips model of ROI determination. People are already talking about a sixth level to the Kirkpatrick model, which is: after you have determined the impact of training, how do you sustain or maximize it? So I still believe there is a lot of value here, and in a bit I’ll show you how we have integrated this into the framework we use for training evaluation. The other model, which is my personal favorite, is the Learning Transfer Evaluation Model (LTEM). This is a much more current model, developed fairly recently. What I like about it is that it is more aligned to the way you would like this entire process to flow. It starts with the basics of attendance and activity, then learner perception (your reaction), knowledge, decision-making competence, then task competence, and the last two, which are really important: the transfer of learning and the effects of the transfer.
At one level you’ll notice quite a few things that come from the Kirkpatrick model, although they’re packaged a little differently. The other common model we see is Kaufman’s five levels of evaluation, which is very close to the same: input, acquisition, application, output and so on. There’s also an interesting model called the Success Case Method (SCM), which takes a remarkably different approach of comparing the best-performing programs against the worst, and the analytics from there show the improvement in business results. There’s also the CIPP model, the Context, Input, Process and Product evaluation model. Again, it has four stages and uses a very iterative approach. Now, looking at this range of options, our assessment, and my personal assessment, has been that there is no one model that will work for every organization in every context. For a model that works, I would put forward three important criteria: A, it should be easy enough to deploy; B, it should be flexible; and C, the organization should have room, as it evolves in this journey, for the model to grow alongside it. The model we have at EI Design has taken cues from many of the models I just spoke about, plus the learning we’ve had in serving our customers’ needs over the last two decades, to build a framework that organizations will find easy to adopt. When we began the podcast today, we talked about the challenges, and it appears to be an otherwise very daunting task; this is what we’ve simplified.
So I’ve talked about the fact that in the first stage you need to expand the scope of your training needs analysis and make sure there is collaboration between L&D and the business leaders, quantifying the metrics you’re going to use for evaluation, both the L&D metrics and the business metrics. The next step is actually very crucial, and not all the models talk about it: even the best training programs may not work for the learners simply because they don’t have the motivation. So what we’ve added here is room to validate that. Do we have the schema, the measures that can help us motivate the learners? We use a framework called Octalysis, and we’ve taken inspiration from it to understand the drivers or motivation factors, which should be part of our process starting with the training needs analysis phase. Then, again related to motivation, is the fact that unless learners see the value and the relevance of the content coming their way, they’re not going to be connected, they’re not going to complete it, they’re not going to assimilate it, and they’re not going to take the trouble to push the information from short-term to long-term memory. So build this facet into your overall training deployment strategy: communicate the relevance and the value of the training to the learners, and keep the focus on how it is going to impact the organization as well. Gauging the learner reaction comes next, and here you should go beyond just the smileys, the basic reaction. What we’ve started doing is integrating a fairly comprehensive survey into the course itself. It covers about 10 facets and allows us to get a sense of both the user experience and the learning experience. Remember when I talked about the leading and the lagging KPIs?
These are very good cues for your leading KPIs, and it’s a great practice to have a small focus group of eventual learners work with you during the development phase, between the prototype and the alpha stage itself; you can roll out this poll or survey and get that firsthand feedback. Then the steps are fairly easy: determining the right training format, whether you go with online, blended or VILT, and whether supporting interventions like coaching or mentoring may be necessary; and then the right immersive learning strategy that will help you meet the learning outcomes, plus the kind of engagement quotient you require from the learners. Next, expanding on the components we collected during the TNA phase, we identify two important things: the gain for the learner and how we intend to measure it, and the gain for the business and how exactly we want to measure it. The last, the 10th step, is basically closing the loop: whether we met the objectives we set out to accomplish or, if there was a gap, what more could be done to ensure remediation or reinforcement. So while I call this a 10-step model, it’s not really sequential, and at every stage we have room to feed back into a previous stage to make sure that the results are aligned.

Taryn Oesch DeLong:

Thanks, Asha. And we’re going to link to an e-book in the show notes that shares a little bit more about the model that you described.

Asha Pandey:

Absolutely, absolutely.

Taryn Oesch DeLong:

So to wrap things up today, do you have any final tips for our listeners on how they can maximize the business impact of their training programs?

Asha Pandey:

Yeah, so I’ll circle back through the things I’ve talked about. A lot of it is common sense, if you ask me. You need to start right, which means the right foundation needs to be in place. So during the TNA phase, make sure both sets of stakeholders are engaged, L&D as well as the business leaders, and that there is clarity on the L&D as well as the business metrics. Choose the right evaluation model. There is no one model that will handle all the combinations of needs you may have; choose the model that gives you the flexibility to customize it for your environment. The next bit is the more important piece: don’t be nervous if you don’t get the results on the first pass. Keep the spot checking going; do it periodically. Whatever incremental gain you see, or even a setback, is fine; it needs to be evaluated and the correction should happen. But the final and foremost point is that you need to look at the holistic picture, the learning and performance ecosystem, because to sustain and maximize the impact you need many facets in the learning strategy. You need measures that influence or change learning habits and increase learners’ motivation; learning within the workflow of the learners, which is on-demand learning, not necessarily within the LMS; and room for continuous learning, which is where pieces like curated learning and user-generated content come in. Adopting an ecosystem-based approach with components of both learning and performance support will help you ensure that the employee performance and behavioral transformation goals are met on one side, and that you’re truly aligned to the corporate strategic goals, or sometimes even tactical elements, that are necessary at the org level.
So I believe these are some of the tips that can definitely help organizations sustain the momentum and maximize the business impact of training.

Sarah Gallo:

Perfect. Well, Asha, thank you again for speaking with us today on The Business of Learning.

Asha Pandey:

My pleasure.

Taryn Oesch DeLong:

For more insights on measuring the impact of training, visit the show notes for this episode at trainingindustry.com/trainingindustrypodcast.

Sarah Gallo:

And if you enjoyed this episode, don’t forget to rate and review us on your favorite podcast app.

Taryn Oesch DeLong:

Thanks for listening. If you have feedback about this episode or would like to suggest a topic for a future program, email us at info@trainingindustry.com or use the Contact Us page at trainingindustry.com. Thanks for listening to the Training Industry podcast.