People are getting the message, and for good reason: it is coming from executive business leaders, senior learning leaders, and industry agencies. Business and learning leaders are insisting on proof that learning programs are working. Consequently, they demand that learning professionals implement a measurement strategy.
The Learning Profession Falls Short at Measurement
Despite years of talk in our industry about the importance of showing learning ROI, most organizations still struggle to implement an effective measurement strategy.
A recent Training Magazine article provided statistics that back up something I commonly see in my business: the average organization does a poor job of measuring its learning initiatives. The article's stats showed that 7% of organizations don't measure at all, while 22% rely only on completion rates and smile sheets. In contrast, only a slim portion (8%) effectively measure the impact of their learning programs on individual and organizational performance. Clearly, our profession faces challenges in learning measurement.
If it isn't already, measurement must become part of the learning professional's tool set. The reasons for measuring learning are simple: we measure to improve the effectiveness of our learning programs, and we measure to clarify the link between learning and organizational performance. That's it, two things, but two very important things.
The Necessary Step to Improving Effectiveness
The basic goal of learning measurement is to improve learning program effectiveness. To improve anything, we must understand its basic components, relationships between those components, and how to improve those components. We must know what makes it work, and what it looks like when it works well.
For effective learning programs, we must understand:
- Intent of the program, usually stated as course objectives. What knowledge, skill, or attitude gains will the program achieve?
- Methods used to achieve the objectives. What instructional methods and/or assessment tools will be used to execute the program?
- Success in reaching the objectives. Did the methods used result in new knowledge, skills, or attitudes?
When our strategy measures these three things, we can evaluate program results and determine areas for improvement. A successful measurement strategy visualizes the relationship between learning design/delivery and student results. If we don’t examine the connections between intended outcomes and student gains, we cannot improve program efficacy. Without that examination, we fail to link cause and effect. We gain no information on which design decisions worked and which did not.
We determine program effectiveness by comparing program objectives to achievement (through evaluation). When we understand the connection between objectives, instructional methods, and evaluations, we learn what methods achieve the intended results.
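The comparison of objectives to evaluation results can be sketched in a few lines of code. All of the objectives, methods, pass rates, and the success threshold below are purely hypothetical, included only to illustrate the idea:

```python
# A minimal sketch (all data hypothetical) of comparing course objectives
# to evaluation results, so we can see which instructional methods worked.

# Each record ties a course objective to the instructional method used
# and the share of students who passed that objective's evaluation.
results = [
    {"objective": "Diagnose login failures", "method": "scenario practice", "pass_rate": 0.91},
    {"objective": "Escalate per policy",     "method": "lecture",           "pass_rate": 0.58},
    {"objective": "Document the case",       "method": "worked examples",   "pass_rate": 0.84},
]

TARGET = 0.80  # assumed per-objective success threshold

# Objectives that fall short point to the design decisions to revisit.
needs_rework = [r for r in results if r["pass_rate"] < TARGET]

for r in needs_rework:
    print(f'Rework "{r["objective"]}" (method: {r["method"]}, pass rate {r["pass_rate"]:.0%})')
```

Even a simple report like this links cause (the method) to effect (the evaluation result), which is exactly the information surveys cannot provide.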
Using surveys as measurement tools highlights this point. For 50 years, learning professionals have tried to use surveys to measure program impact. Today, most understand that surveys are poor measurement tools. Even if students complete the survey, and even if they answer honestly, we still lack information on how to improve our programs. Surveys provide no clarity on outcomes. They offer little information beyond student opinion of program content. Surveys leave learning professionals with little validation that their efforts worked and no information on how to improve their methods.
A Measurement Strategy Links Learning to Organization Performance
Beyond learning which instruction methods work (or don’t work), another reason to measure learning is to clarify the link between learning and organizational performance. To measure this, learning professionals must define the program’s intent. We must be clear on the program objectives so we can show how those objectives (when reached) influenced organizational performance.
The path between learning and organizational performance goes through a few steps. The first step is confirming that the objectives have been learned. When evaluations tie directly to learning objectives, we can verify whether students gained the targeted competency (knowledge or skill gains). This means we can verify that the methods used to help students learn actually worked, which confirms the effectiveness of the learning methods.
The second step confirms that the new learning is being applied on the job. For an organization to benefit from learning, something must change: students need to apply what they learned to improve job performance. Therefore, once students return to the job, we verify their use of the new skills with the same learning evaluations. With these measures, we show a change in employee performance.
Why use the same learning evaluations to measure both learning and application? When we design evaluations that measure an objective, those evaluations represent the criteria for success for that objective. Objectives should always be written for the job. Essentially, objectives and their evaluations are job requirements, which means they work as tools to measure job performance. We use the same evaluations because they represent the standard of job performance our learning program brought students to.
The Metric Showing Organizational Change
The final step in showing the link between learning and organizational performance is to capture the change in the key performance metric (KPM, a measurement of organizational performance). When business leaders request a learning program, most aim to fill a performance need in the organization. Performance needs often surface when a monitored metric performs poorly.
Certainly, all business leaders know the metrics used to gauge their business's health. They may be productivity metrics (number of cases closed, number of leads generated), cost metrics (negotiated price, inventory reduction), quality metrics (customer satisfaction scores, repeat customers), or error metrics (case errors, defects).
Focused learning programs target improvement in job performance to improve a business metric. With a measurement strategy designed to show learning and application of the objectives, we create a clear chain of evidence. That evidence shows how learning led to the employee's improvement on the job. Once we know learning changed the employee's job performance, the associated organizational metric should change.
This chain relies on the learning professional to identify the target metric before program design begins. With that knowledge, the designer targets program content to ensure performance changes that can influence the metric.
Proving learning’s impact on organizational performance comes when we show that employees who took the program learned the skills, applied the skills, and then the metrics that reflect their performance changed.
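The chain of evidence described above can be sketched as a simple calculation. The employees, evaluation outcomes, and KPM values (here, a hypothetical cases-closed-per-week metric) are invented for illustration:

```python
# Hypothetical sketch of the chain of evidence: employees who passed the
# learning evaluation AND applied the skill on the job, and the resulting
# change in their key performance metric (cases closed per week).
employees = [
    {"name": "A", "passed_eval": True,  "applied_on_job": True,  "kpm_before": 14, "kpm_after": 19},
    {"name": "B", "passed_eval": True,  "applied_on_job": False, "kpm_before": 12, "kpm_after": 12},
    {"name": "C", "passed_eval": False, "applied_on_job": False, "kpm_before": 15, "kpm_after": 14},
]

# Only employees with both links in the chain count as evidence that
# the learning program drove the metric change.
linked = [e for e in employees if e["passed_eval"] and e["applied_on_job"]]

avg_change = sum(e["kpm_after"] - e["kpm_before"] for e in linked) / len(linked)
print(f"Employees in the evidence chain: {[e['name'] for e in linked]}")
print(f"Average KPM change for that group: {avg_change:+.1f} cases/week")
```

Filtering on both conditions is the point: an employee who learned but never applied the skill (employee B) tells us nothing about the program's business impact, so they stay out of the metric comparison.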
The Key to Long-Term Success
For effective learning measurement, our strategy must address the top drivers of learning measurement. A strategy that addresses those drivers and integrates well into the daily workflow is vital for accepted implementation and long-term success.
When researching measurement solutions, keep in mind basic requirements for an effective strategy. The measurement strategy needs to:
- Provide results that inform on the success of the learning practice (methods, procedures, techniques)
- Provide a measurement technique useful across all learning delivery methods (e-learning, ILT, blended)
- Enable measurement of on-the-job performance
- Enable measurement using business-based outcomes
Using a measurement strategy with these elements gives us a way to improve our practice and show how learning contributes to business success.
Do you have a measurement strategy in place? Can you show how your learning programs drive organizational change? If not, contact us here at eParamus. We’ll teach you how to measure your learning programs and deliver the metrics your business leaders want.
Photo copyright: fizkes / 123RF Stock Photo