So, why have learning measurement programs failed to raise the bar for learning?
In short, traditional measurement fails because of poor measurement practice.
Any learning measurement program should deliver two things:
1. A method for improving programs (quality control).
2. Information that demonstrates the value of learning.
Yet many learning measurement programs fail to deliver either.
Learning evaluation programs have been around for many years, and they differ mainly in the type of information they collect. The most valuable programs report on the two things employees actually gain from learning: behavior change and critical thinking skills.
Yet the most popular programs in use today rely on one of two approaches: surveys that measure stakeholder opinions, or software that measures learner activity. Stakeholder opinions and learner engagement are certainly indicators of benefit, but programs built on them provide little feedback for improving quality and are widely seen as a poor measure of learning value.
Learning professionals are making some key mistakes in their measurement practice. Some learning departments cannot show an efficient means of measuring their programs, which keeps them from the level of business credibility other units in the organization enjoy. When choosing an evaluation program, it helps to watch for the indicators of poor measurement practice.
Learning measurement fails for three main reasons:
1. Relying on others' opinions of value instead of verifying our own success.
2. Implementing measurement solutions that do not show how to improve our practice.
3. Implementing measurement programs that require expertise outside of learning.
Measurement programs with these characteristics will fail. Examining each characteristic individually highlights the fundamental flaw in the approach.
We rely on asking others how we are doing instead of finding out for ourselves
Several consulting and measurement technology companies provide evaluation solutions that depend on surveying employees. Evaluation programs that rely on surveys are the most likely to fail. The most distasteful part of this practice is that it perpetuates the idea that learning professionals cannot verify their own success and must rely on others for verification. Using a survey to determine whether we "did our job well" sends a clear message that we have no other way of demonstrating quality.
We are asking students and managers to take time away from their jobs to tell us whether our programs are effective. Can you imagine any other department (marketing, finance, sales, R&D) using an employee survey as its main verification of success? There is nothing wrong with asking stakeholders how they felt about a program, but opinions are not facts, and they rarely reflect real achievement. That is why senior business leaders do not accept them as credible evidence of effective programs.
We install solutions that do not inform on how to improve practice
Measurement data verifies the success of a learning program, but the real benefit comes when that data helps identify and repair poor programs. Many evaluation programs focus on student activity (e.g., when learners opened a file or took a class) or report survey results showing which parts of the program learners felt were most useful. None of this reveals which design decisions worked and which did not, because none of it measures skill outcomes.
Information about activity and opinions of value does not tie back to the learning professional's decisions. Most evaluation programs cannot tell you whether the skills targeted in your program were necessary, or whether those skills were learned and applied. This design-analytics information is crucial: without it, the learning professional has nothing to use to improve their practice. An evaluation program must show what failed and when it failed; otherwise, trainers cannot diagnose and repair their programs.
We install measurement programs that need expertise outside of learning
Let's face it: unless an evaluation program can be implemented using a learning professional's own expertise, it will not be widely accepted or adopted. Unfortunately, many current measurement solutions require things like expert data collection or statistical analysis. That is not a viable long-term approach.
So choose a program that lets the training team use their own skills in design and analysis. Learning professionals create solutions; evaluating those solutions should not be so complicated that they cannot interpret the results and apply them in their practice.
To fit the learning team's expertise, the solution must fit into the normal learning workflow. It must address the measurement of design and allow for easy data collection, and the resulting data should be easy to understand for both trainers and their stakeholders (students and managers), so the results can be acted on across the organization.
Two kinds of measurement are impractical: measurement that requires the continuous involvement of a consultant, and measurement that delivers data only as complex tables and charts.
The bottom line
Implementing a learning measurement program helps learning professionals only if it delivers on the original promise: improving learning outcomes and clarifying the value of learning within the organization. Avoiding these key mistakes is the first step toward putting a satisfying measurement program in place.
Contact us
To learn more, contact us here at eParamus. Please follow eParamus on LinkedIn, and feel free to connect with me, Laura Paramoure, PhD, to discuss your specific learning challenges.