It surprises me daily how much confusion still surrounds measuring learning ROI. That confusion never stops astounding me, and it never stops motivating me.
An article published by an extremely well-known organization that targets learning professionals prompted me to write this. The writer has been in the learning profession for a long time, yet he remains confused about how to measure learning ROI. He believes many of the same myths and misconceptions I run into from learning professionals everywhere.
Perpetuating these myths is frustrating, and it is dangerous to our profession. It would be nice if those in our profession who do not know how to measure ROI would stop telling people it cannot be done and get out of the way for those of us who are doing it!
Since we obviously cannot stop bad information, let’s bust these myths. Let’s discuss how they hold you back from discovering how to measure learning ROI.
Myth #1: Measuring strategic metrics is the key to calculating learning ROI.
It’s operational metrics, not strategic metrics, that hold the key to a credible calculation of learning ROI. Strategic metrics include things like employee satisfaction, customer satisfaction, cost of goods sold, and profit. Many factors outside of learning affect the rise and fall of these metrics, and because of that, it takes too much time and money to isolate the impact of each factor. That’s why you can’t measure learning ROI with strategic metrics as your basis.
The correct choice, and the process we teach in the ROI by Design® model, is to use operational metrics. Business units and departments measure operational metrics every day; in fact, it is often these metrics that indicate the need for a learning program in the first place. Examples include leads generated, customer service calls completed, and correct orders processed. Operational metrics measure and report on things like employee productivity, task quality, and task errors. In other words, they report on the outcomes of behavior.
Learning programs create behavior change, and using operational metrics for ROI allows you to measure metrics directly tied to defined behaviors. Operational metrics remove the influence of outside forces because they directly reflect the outcomes of employee behavior (skills).
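To make the arithmetic concrete, here is a minimal, hypothetical sketch of an ROI calculation driven by an operational metric. Every number and name in it (orders processed, value per order, program cost) is an illustrative assumption of mine, not a figure or formula from the ROI by Design® model; it simply applies the standard ROI formula (net benefit divided by cost) to a measured change in an operational metric.

```python
# Hypothetical ROI calculation based on an operational metric:
# correct orders processed per employee per month.

def learning_roi(baseline_rate, post_rate, value_per_unit,
                 employees, months, program_cost):
    """Return ROI as a percentage, based on the measured change
    in an operational metric after a learning program."""
    # Output gained that is directly tied to the trained behavior
    added_units = (post_rate - baseline_rate) * employees * months
    benefit = added_units * value_per_unit
    # Standard ROI formula: net benefit divided by cost
    return (benefit - program_cost) / program_cost * 100

# Illustrative numbers (assumptions, not real data):
# 50 employees go from 180 to 195 correct orders per month,
# each correct order is worth $12, measured over 6 months,
# and the program cost $30,000.
roi = learning_roi(180, 195, 12.00, 50, 6, 30_000)
print(f"ROI: {roi:.0f}%")  # prints: ROI: 80%
```

Because the metric (correct orders) is a direct outcome of the trained behavior, no guesswork about outside influences is needed; the calculation is ordinary arithmetic on numbers the business unit already tracks.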
Myth #2: You have to isolate the learning impact from all other influences to calculate ROI.
Belief in this myth is tied to the misconception above about measuring strategic metrics. As I noted, this thinking is misguided and impractical. You can’t accurately isolate every influence on a strategic metric, so that shouldn’t be your goal. More important, no other business function attempts to do this. Choosing to measure metrics directly influenced by learning programs remains the only practical solution. Once you understand that learning is designed to change knowledge, skills, or attitudes, the only reasonable conclusion is to measure the metrics influenced by changes in those areas.
Myth #3: Part of the learning ROI calculation is asking learners and managers what they believe the impact was.
Of course, this is false. I ask again: What other business function operates this way? Which other department relies on the beliefs and opinions of others to track the success of its work? None. Every other business unit relies on empirical data to show that it is both effective and efficient. Learning groups should be held to the same standard.
Far too many learning organizations feel satisfied with their “smiley sheet” survey data, yet complain when the business does not respect their work. Survey responses do not correlate with learning. Sadly, nearly all learning organizations use surveys as their only “measurement” tool. Why? Because surveys are simple and easy. But remember: survey responses represent only opinions. No survey will ever confirm what skills students actually learned or what impact that learning had back on the job. Survey responses cannot verify that the learning department did its job; they cannot verify that skills were added to the organization.
Myth #4: When calculating learning ROI, expect imprecision.
The writer of the article suggests measuring changes in strategic metrics, then looking at survey results about whether learners and managers believed there was an impact. He then suggests chatting with business leaders about what they think the impact on strategic metrics was and assigning a percentage based on that conversation. After all of these conversations, the learning team arrives at a “consensus” about what the learning ROI must be. What nonsense! This boils down to a guessing game. Follow this plan only if you want your learning organization never to be taken seriously. There is no reason why the measurement of learning ROI should not be as precise as any other ROI measure in the organization.
There are far too many pervasive myths driving the confusion and frustration about how to measure learning ROI. This confusion is unnecessary and harmful to our profession. eParamus remains motivated to fight this confusion. Our goal is to teach every learning professional how to design, create, and measure the impact of their learning programs.
If you want to master how to measure learning ROI, please contact us here at eParamus. We’ll be happy to answer your questions and teach you how to create measurable learning programs.
Photo copyright: olegdudko / 123RF Stock Photo