Consider the following scenario:
You’re in a meeting to show company stakeholders that your training initiative “worked.” Your participants completed training three months ago, and business metrics show a significant increase over the months post-training. You attribute this to your brilliant and engaging training, but just when you think everyone’s impressed and convinced that your training is having a big impact on the organization, one of the business leaders pipes up and asks, “How are you taking credit for those increases in performance? So many other things could have caused those increases: The market got better, everyone got bonuses and those same employees were also involved in two other training programs this year. With all these other environmental factors as possible influences, how can you possibly say that productivity and business performance improved because of your particular training?”
Whether this person is genuinely curious, consistently skeptical or just wants to make you sweat a little, there’s no denying that it’s a great question. How can you directly attribute any increase in employee and business performance to one particular training program or initiative? The answer is that you have to do some more rigorous measurement to “isolate” the impact of your training. Allow me to explain.
Isolating the impact of training means cutting away the “noise” of possible confounding factors and carving out the percentage of the overall business improvement that you can confidently take credit for and attribute directly to your training. We call it “isolating” because the process collects and uses specific data to parse out the effects of other influences, leaving your training as the sole driver of at least a portion of any overall business improvement. If you’re familiar with the four-level Kirkpatrick Model, the five-level model from Phillips or the six-level model from yours truly, this isolation always takes place at Level 4, and it’s an essential element of any credible return on investment (ROI) case study you present to your leaders and stakeholders.
There are three primary techniques you could use to do this isolation. I’m going to provide a very high-level summary of each approach, the core data you would need and some guidance on when to apply each one.
3 Approaches to Isolating Training
- The control group technique.
The control group (trained versus untrained) technique compares the improvement of the trained group to the improvement of an untrained control group over the same time period. Much like a medical drug trial, where the test group (trained) gets the drug and a twin sample (control) gets nothing, it’s critical that the two groups are similar on every variable except the one you’re testing (the training itself). These two similar, or “twin,” samples of employees should face the same market conditions, come from the same business functions, have the same tenure, and so on. By controlling for all the other variables that could influence their performance, you can confidently claim that any incremental improvement of the trained group above and beyond the control group in the post-training months is directly attributable to the training experience.
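To make the arithmetic concrete, here is a minimal sketch in Python. The metric, the numbers and the group sizes are hypothetical and only illustrate the calculation, assuming you can pull the same pre- and post-training metric for both groups:

```python
# Control group technique: compare the improvement of the trained group
# to a matched, untrained "twin" group over the same period.
# All figures are hypothetical monthly sales per rep.

trained_pre  = [100, 95, 110, 105]   # trained group, months before training
trained_post = [118, 112, 127, 121]  # same employees, months after training
control_pre  = [98, 102, 108, 104]   # matched control group, same pre-period
control_post = [103, 107, 112, 109]  # control group still improves a little (market, bonuses, etc.)

def pct_change(pre, post):
    """Average percentage improvement from the pre-period to the post-period."""
    pre_avg = sum(pre) / len(pre)
    post_avg = sum(post) / len(post)
    return (post_avg - pre_avg) / pre_avg

trained_lift = pct_change(trained_pre, trained_post)   # ~16.6% with these numbers
control_lift = pct_change(control_pre, control_post)   # ~4.6% with these numbers

# Only the lift above and beyond the control group is attributed to training.
isolated_impact = trained_lift - control_lift
print(f"Trained group lift:       {trained_lift:.1%}")
print(f"Control group lift:       {control_lift:.1%}")
print(f"Isolated training impact: {isolated_impact:.1%}")
```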
- The attribution technique.
The attribution technique applies when you see a lift in hard business metrics but don’t have a control group. Here you want to take credit for only a slice of the improvement from the pre- to post-training period, so you ask participants to estimate how much of a role the training played in their improvement over the months since training. You might ask, “Of everything that could have contributed to your improvement, how much would you attribute to the training?” This doesn’t sound as scientific as a control group, but with larger sample sizes making the attribution, you can substantially cut down the error of the estimate. Once you have an average percentage to attribute to training, you simply multiply it by the increase you find in the objective performance data. For instance, if sales increased by 15%, and your trainees on average said the training was responsible for half of that increase, you could confidently attribute a 7.5% increase in sales to your training. The key is that you’re not taking credit for the entire jump in business performance; you’re chopping it down and isolating the impact by having participants weigh the other factors and attribute only a percentage of their overall performance gains to the training.
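Here is a minimal sketch of that math in Python, reusing the 15% sales lift from the example above; the individual attribution percentages are hypothetical survey responses, not data from the article:

```python
# Attribution technique: take credit only for the share of the observed lift
# that participants attribute to the training.

observed_sales_lift = 0.15  # 15% pre- to post-training increase from the business data

# Hypothetical survey responses to: "How much of your improvement would you
# attribute to the training?" (as a fraction of the total improvement)
attribution_estimates = [0.60, 0.40, 0.50, 0.55, 0.45]

avg_attribution = sum(attribution_estimates) / len(attribution_estimates)  # 0.50 here

# Isolated impact = observed lift x average share attributed to training.
isolated_impact = observed_sales_lift * avg_attribution
print(f"Average attribution to training: {avg_attribution:.0%}")
print(f"Isolated sales impact:           {isolated_impact:.1%}")  # 7.5% with these numbers
```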
- The estimate technique.
The third technique is the estimate technique. We use it when we can’t get a control group and can’t get actual participant performance data from the business to use the attribution technique. In the absence of this hard data, you can still do some isolation by first asking participants whether, and by how much, they’ve improved a specific business metric over the past months because of the training, and then taking that rough estimate of performance improvement (which is typically inflated) and adjusting it down for error and increased confidence. To adjust the original Level 4 estimate down, go back to your Level 3 data and multiply the Level 4 estimate by the percentage of participants who reported significant behavior change at Level 3 because of the training. The logic behind the math: if a participant didn’t apply the training and improve their behaviors on the job at Level 3, we can’t be confident in their performance increase estimate at Level 4, so we assign them a 0% estimated increase at Level 4 and, in that way, drastically reduce any false inflation of business impact.
Consider this example:
Let’s say a group of call center reps estimated, on average, a 12% improvement in their customer satisfaction scores. From that same group of participants, only 32% reported applying the training and improving their behaviors on the job at Level 3. Multiply the original 12% by the 32% and you end up with an isolated estimate of roughly 3.8% improvement in performance as a result of the training experience.
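As a minimal sketch, here is that adjustment in Python using the call center numbers above (the variable names are just illustrative):

```python
# Estimate technique: adjust the participants' Level 4 estimate down using Level 3 data.

estimated_improvement = 0.12     # average self-estimated lift in customer satisfaction (Level 4)
behavior_change_rate = 0.32      # share of participants who applied the training on the job (Level 3)

# Anyone who showed no behavior change is effectively counted as a 0% improvement,
# which is exactly what multiplying by the Level 3 percentage accomplishes.
isolated_estimate = estimated_improvement * behavior_change_rate
print(f"Isolated improvement estimate: {isolated_estimate:.1%}")  # about 3.8%
```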
Here’s a quick decision tree to help you decide which technique is best for your impact case study:
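One way to read the decision logic, based on the three approaches above, is the short sketch below; the function name and inputs are illustrative, not part of any formal model:

```python
def choose_isolation_technique(has_control_group: bool, has_business_data: bool) -> str:
    """Pick an isolation technique based on the data available for the case study."""
    if has_control_group:
        # A matched, untrained "twin" group exists for the same time period.
        return "control group technique"
    if has_business_data:
        # No control group, but hard pre-/post-training business metrics are available.
        return "attribution technique"
    # Neither a control group nor hard business data: fall back on adjusted participant estimates.
    return "estimate technique"

print(choose_isolation_technique(has_control_group=False, has_business_data=True))
# -> attribution technique
```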
Look at the data that’s available, decide on any questions you need to add to your post-training assessments, and start carving out your training’s specific contribution to the bottom line.
Remember, the only reason stakeholders ever second-guess the real impact of training is that we don’t do the research to show them it “worked.” To be truly credible with our numbers and our ROI calculations, we need to “isolate” the results at Level 4. And the best part is that no matter which technique you use, your measurement approach will put you well ahead of the nearly 90% of organizations that never get past Level 3.
Ready to optimize the way you evaluate your training programs and prove the impact of your training initiatives? You can learn more from the author, Dr. Paul Leone, by participating in Training Industry’s Measuring the Impact of L&D Certificate program, or you can start proving your impact today by accessing the evaluation template below.