The International Organization for Standardization (ISO) recently released the first-ever standards for learning and development (L&D) metrics. The guidelines are an attempt to standardize a process that has long been a challenge for L&D professionals: measuring training effectiveness.

Training Industry research indicates that of all the core training management responsibilities, “assessing business and training performance” is the one on which learning professionals are most likely to rate themselves below average.

As for why it took so long to formalize a standard around L&D metrics, Dr. Paul Leone, founder and principal of MeasureUp Consulting and an instructor for Training Industry’s Measuring the Impact of L&D Certificate course, says, “There are so many nuances to a story of impact,” and every training function is unique in the types of training it offers and, as a result, in how it approaches measurement. Few companies have “gotten measurement right,” he explains.

The ISO’s guidelines offer a much-needed framework from which learning professionals can approach training measurement and evaluation. However, they are still just a starting point for approaching training measurement — and should be adapted according to individual business needs and stakeholder preferences.

Here, we’ll consider the guidelines in more detail and offer tips for adoption.

From Kirkpatrick to ISO

Learning leaders across disciplines have long used Kirkpatrick’s Four-Level Evaluation Model, initially developed in the late 1950s, to measure training effectiveness. In fact, The Kirkpatrick Model is the most recognized method of training evaluation; its four levels are Level 1 (Reaction), Level 2 (Learning), Level 3 (Behavior) and Level 4 (Results). Despite its popularity, measurement drops off past Level 2: While 69% of learning organizations track Level 1 metrics and 90% track Level 2 metrics, only 67% track Level 3 metrics and 53% track Level 4 metrics, according to recent Training Industry research.

The Kirkpatrick Model offers a basic framework for training evaluation, and that is also its greatest criticism: It is too simplistic a framework for modern measurement and evaluation.

Tom Whelan, Ph.D., Training Industry’s director of research, explains, “There’s no way you can take a whole learning and development system and actually boil it down to four basic categories.” In doing so, “it’s oversimplifying something that is never simple.”

That’s why, although the ISO’s guidelines leave room for improvement, they’re a step in the right direction in offering a more comprehensive framework for training measurement that can help “break people out of the Kirkpatrick mold,” Whelan says.

Ken Taylor, Training Industry’s chief executive officer, says the standards also offer learning professionals a “common language” around training measurement, which the L&D industry was lacking before the ISO’s release.

The standards break down L&D metrics into three distinct categories:

1. Efficiency metrics.

These can be defined as “quantity metrics” such as the number of courses offered, the number of learners taking a given course, costs, utilization rates and the percentage of employees “actively involved with learning,” according to the standards.

2. Effectiveness metrics.

These can be defined as “quality metrics” that answer the question, “How good was the program?” Effectiveness metrics parallel Kirkpatrick’s levels, plus Phillips’ Level 5 (ROI), in that they include learners’ reactions to the program (Level 1), the amount learned (Level 2), the degree of application on the job (Level 3) and ROI (Level 5).

3. Outcome metrics. 

Outcome metrics are tied to the organizational metric targeted by the learning (e.g., to increase sales or to reduce cybersecurity incidents).

Leone says that while most training professionals “do a good job measuring efficiency [metrics],” very few are measuring effectiveness and outcome metrics.

This makes sense, as efficiency metrics — such as the number of learners who took a course — are easier to measure than outcome metrics — such as whether safety training reduced work-related injuries — which would require learning professionals to isolate the impact of training. Although outcome metrics are more difficult to measure, they are the ones that executive stakeholders typically care about the most.
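
To make the three categories concrete, here is a minimal Python sketch for a hypothetical safety training program. Every name and number is an invented assumption for illustration, not something the ISO standards prescribe, and the outcome figure would still need the kind of impact isolation described above before it could be credited to the training.

```python
# Illustrative only: all names and numbers below are invented for a hypothetical
# safety training program; they are not prescribed by the ISO standards.

# Efficiency metrics ("quantity"): volume, cost and utilization.
learners_enrolled = 240
learners_completed = 216
program_cost = 36_000.00                      # total spend on the program (assumed)
completion_rate = learners_completed / learners_enrolled
cost_per_learner = program_cost / learners_completed

# Effectiveness metrics ("quality"): how much was learned and applied, plus ROI.
avg_pretest_score, avg_posttest_score = 62.0, 84.0   # assessment scores out of 100
score_gain = avg_posttest_score - avg_pretest_score
estimated_benefit = 90_000.00                 # assumed value of incidents avoided
roi_pct = (estimated_benefit - program_cost) / program_cost * 100   # Phillips-style ROI

# Outcome metric: the organizational target the training was meant to move.
injuries_before, injuries_after = 18, 11      # incidents per quarter (assumed)
injury_reduction_pct = (injuries_before - injuries_after) / injuries_before * 100

print(f"Efficiency:    {completion_rate:.0%} completion, ${cost_per_learner:,.2f} per learner")
print(f"Effectiveness: +{score_gain:.0f} points on assessments, {roi_pct:.0f}% ROI")
print(f"Outcome:       injuries down {injury_reduction_pct:.0f}% (attribution still to be isolated)")
```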

The ISO standards also outline five “categories of users,” which are essentially the different audiences for whom learning professionals measure training impact. The categories, along with the ISO’s recommended metrics for each, include:

  • Senior organizational leaders: Recommended metrics include “measures such as percentage of employees reached by learning, percentage of employees with an individual development plan, total cost of learning and contribution to outcomes.”
  • Group or team leaders: Recommended metrics include “measures such as the number of participants, number of courses, hours spent in learning and satisfaction with the learning.”
  • Heads of learning: Recommended metrics include those “that are not of interest to the CEO but are managed at the department level (e.g., percentage of courses completed on time, percentage of online content that is utilized, mix of virtual versus in-person learning and percentage of informal versus formal learners).”
  • Program managers: Recommended metrics include “measures such as number of participants, completion rates, completion dates, application rates and outcomes.”
  • Learners: Recommended metrics include measures such as “number of offerings available, informal learning opportunities and competency assessments.”
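
For teams that maintain this audience-to-metric mapping in a dashboard or reporting script, a simple lookup structure is one way to keep it straight. Below is a minimal Python sketch; the keys and metric names loosely paraphrase the list above rather than quote the standard, and the function is purely illustrative.

```python
# One way to organize a reporting plan around the ISO's five user categories.
# Keys and metric names paraphrase the article's list; the structure is an
# illustrative assumption, not part of the standard itself.
REPORTING_PLAN = {
    "senior_organizational_leaders": ["pct_employees_reached", "pct_with_development_plan",
                                      "total_cost_of_learning", "contribution_to_outcomes"],
    "group_or_team_leaders":         ["participants", "courses", "learning_hours", "satisfaction"],
    "heads_of_learning":             ["pct_courses_completed_on_time", "online_content_utilization",
                                      "virtual_vs_in_person_mix", "informal_vs_formal_mix"],
    "program_managers":              ["participants", "completion_rate", "completion_dates",
                                      "application_rate", "outcomes"],
    "learners":                      ["offerings_available", "informal_learning_opportunities",
                                      "competency_assessments"],
}

def metrics_for(audience: str) -> list[str]:
    """Return the metrics earmarked for a given audience (empty list if unknown)."""
    return REPORTING_PLAN.get(audience, [])

print(metrics_for("senior_organizational_leaders"))
```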

Tips for Adoption

To begin applying the ISO’s recommendations in your organization, consider the following best practices for adoption.

Start with a needs analysis.

Without an initial needs analysis, training measurement will inevitably fail. Training professionals should work with stakeholders from the beginning to identify the business outcome(s) they’re hoping to achieve. Sometimes, “training might not be the answer,” Taylor says, which is why identifying desired outcomes before rolling out a program is critical.

If it’s determined that training is needed, identify which metrics your stakeholders care about. “Probably, they really want to know whether or not training moved the needle in terms of performance,” Taylor says.

Consider your audience.

Not every “category of user” cares about every learning metric. For instance, executive stakeholders might not care about the number and types of courses offered, whereas that information would be relevant to a learner. “There’s different audiences for metrics, and different metrics for different audiences,” Taylor says. Presenting the right metrics to the right audience(s) is key.

Don’t get too focused on proving ROI.

Most training professionals feel the need to show ROI. However, ROI “isn’t the be-all and end-all” of evaluating training effectiveness, Taylor says. After all, the real cost of training isn’t the program cost, he explains: “The cost of training is the employees not working.” Ajay Pangarkar, co-founder and partner at CentralKnowledge, agrees and explains that most business leaders “see no credibility in ‘training ROI.’”

Let’s consider an example. If a CEO takes their salespeople off the floor to learn, they know full well that the decision will hurt sales in the short term. They’ve intentionally chosen to invest in training even though it dents the bottom line for a while. In this case, the metric the CEO cares about most is likely whether the salespeople have applied what they’ve learned back on the job, and whether that application has led to an increase in sales over time.
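
To see why the time off the floor can dwarf the invoice, here is a back-of-the-envelope Python sketch. All of the figures (head count, hours, wage, revenue per hour and program cost) are invented for illustration only.

```python
# Hypothetical figures only: a rough illustration of why "the cost of training is
# the employees not working," using the sales team example above.

reps = 20
training_hours = 16                  # e.g., two days off the sales floor per rep
hourly_wage = 40.00                  # paid while reps sit in training (assumed)
revenue_per_rep_hour = 350.00        # sales a rep would normally generate (assumed)
direct_program_cost = 25_000.00      # vendor fees, materials, facilities (assumed)

wages_during_training = reps * training_hours * hourly_wage
forgone_revenue = reps * training_hours * revenue_per_rep_hour
full_cost_of_decision = direct_program_cost + wages_during_training + forgone_revenue

print(f"Direct program cost:       ${direct_program_cost:>9,.0f}")
print(f"Wages paid during class:   ${wages_during_training:>9,.0f}")
print(f"Revenue forgone:           ${forgone_revenue:>9,.0f}")
print(f"Full cost of the decision: ${full_cost_of_decision:>9,.0f}")
# What the CEO watches afterward is not this cost line but whether reps apply the
# training and whether sales rise over the following quarters.
```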

“Being a cost center (like other supporting functions), L&D isn’t expected to deliver positive financial results for its activity,” Pangarkar says. “It is expected to contribute causally to the overall financial improvement of the business.” This is one sentiment that is left out of the ISO’s recommendations.

Instead of spending significant time and energy chasing ROI, invest those resources back into tracking the metrics you’ve identified in the needs analysis phase that your stakeholders care about most.

A Starting Point

The ISO’s standards for L&D metrics are a great starting point for organizations with little to no expertise in training measurement: They give those organizations a framework from which to begin measuring training effectiveness and a common language around training measurement. They’re also a helpful alternative for organizations that are more experienced with training measurement but still rely on The Kirkpatrick Model’s rigid (and largely outdated) four levels of evaluation.

However, the guidelines should be considered as just that — a starting point. One criticism of the ISO’s recommendations is that they don’t operationalize training measurement in a way that business leaders truly care about, Pangarkar says. This is likely because they were created based on the input of L&D leaders, not the stakeholders they’re measuring impact for.

Even if they’re not a perfect solution to businesses’ training measurement challenges, the standards are still a welcome advancement in that they “illuminate a path forward to more relevant understanding of learning activities through quantitative means,” Whelan says.

Considering the fact that most organizations still “haven’t gotten training measurement right,” as Leone put it, this is a win for both learning leaders and the businesses they support.