Lack of Learning Measurement Standards Drives Confusion

The confusion and noise surrounding learning measurement often frustrate learning professionals. Personally, I believe the inability of learning leaders to agree on clear learning measurement standards is an embarrassment to the industry. Don’t we know enough about our discipline, methods, and impact to convey our results in a credible way?

Sadly, our reliance on employee opinion surveys to tell us whether we are doing our job well is a clear indication that we do not.

Despite these frustrations, I love the field of learning. Many of my family members have spent their careers in education and practiced lifelong learning. I find continuous learning to be one of the great joys in life. Although I started my career in marketing, it was no surprise that the universe conspired to bring me to education within business.

The power of learning moves me to drive our industry toward stronger professionalism. I have faith in the intention of learning facilitators, instructional designers, and learning leaders. That faith keeps me sharing the message of transparency, clarity, and accountability in learning results.

To that end, I continue to share the foundations of learning measurement in hopes my peers will have the same desire to use data to improve their practice and show the value of their work.

The Essence of Learning Measurement Standards

The good news is that creating credible learning measurement standards is simpler than it sounds. At its essence, learning professionals only need to show data on four things:

SCRAP – Are you working on the right things?

LEARNING – Are your programs effective at creating capability?

APPLICATION – Are business units supporting training transfer?

METRICS – Do the behaviors gained from learning change organizational performance?

If you capture these four results, you can establish learning measurement standards for your organization and demonstrate effective learning practice.

More good news is that measuring these things is within your control. Their measurement fits easily into your current practice. Instead of tracking all student activity, constantly pushing out the same content, or changing all delivery methods to micro-learning, you can simply measure what you currently do.

Let me explain.

Before we measure the success of anything, we must identify the targets (outcomes) we are shooting for. For learning programs, the intended outcome is found in the course materials and stated in the design (specifically in the objectives).

Also found in the design are the evaluations and assessments. They provide certainty about the expected level of outcome. Evaluations tied to each objective clarify expectations and measure achievement of that objective. After learning, a student should be able to say, “If I know the answer to these questions or if I can demonstrate the behavior, then I have reached the objective.”

Objectives state the expected outcomes and evaluations verify achievement of those outcomes.
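To make this concrete, here is a minimal sketch (in Python, with an invented objective and invented evaluation items) of how objectives and evaluations relate: each objective maps to the items that verify it, and achievement means passing every item tied to that objective.

```python
# Illustrative sketch only: a learning objective mapped to the
# evaluation items that verify it (all names invented).
objective = {
    "id": "OBJ-1",
    "statement": "Identify the five food groups and build a balanced plate",
    "evaluation_items": [
        {"question": "Name the five food groups.", "passed": True},
        {"question": "Assemble a balanced plate from a sample menu.", "passed": True},
    ],
}

# The objective is achieved only when every item tied to it is passed.
achieved = all(item["passed"] for item in objective["evaluation_items"])
print(f"{objective['id']} achieved: {achieved}")  # OBJ-1 achieved: True
```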

Opinion Surveys (Unfortunately) Outweigh Solid Design Strategy

Unfortunately, much of the industry has not used these basic building blocks of design to measure the success of its programs or to set learning measurement standards. Instead of measuring success against the actual achievement of the targets outlined in our programs, we opted to ask others whether we were successful.

As long as there has been education and higher learning, we have used evaluations to verify understanding. In every standard education setting, you show progress by passing tests: questions or activities that verify your understanding. Passing tests gives us, and our educators, confidence that we have achieved the necessary outcome. It allows us to accept and take pride in our accomplishment.

For some reason, with business learning, we decided that assessments were a bad thing. We worried about employee fear. What if employees are bad test takers? What if the test shows learning didn’t happen? Would I lose my job as a learning professional? Would employees lose their jobs?

“Knowledge is power” is a truism for a reason. When did avoiding reality lead to anything good?

Repurpose Time Spent on Surveys Into Creating Assessments That Capture Useful Data

I can hear some of you now saying, “Creating and administering assessments is too hard. We can learn what we need just by asking students what they think of our program.”

Think about that for a minute. Surveys only reveal opinions; they in no way provide evidence of ability. As I often comment, “I can take a class on good nutrition. I can note on the end-of-class survey that I learned from the class and will use the material. But the first time I see a plate of cookies at dinner time, I am likely to eat them instead of my vegetables!” The class may have provided wonderful information and I may have been engaged in the discussion, but if I can’t show I retained the information and put what I learned into practice, then what purpose did it serve?

Additionally, creating a survey for my current programs, sending it out, and compiling the responses takes a lot of effort. Not to mention that getting employees to fill out surveys is almost impossible; they have no motivation to give you feedback because there is no incentive to do so. Why not spend that time and effort on creating an assessment that will show capability? If employees take an assessment that verifies their learning, they at least get feedback about themselves. They get information that is pertinent to their job success.

Surveys and Other Useless Measuring Tools

Surveys have been our main measuring instrument for the past 50 to 60 years. That leaves a lot of room for improvement. In recent years, technology has emerged that promises to help us show the value of learning. One popular choice has been Experience API (xAPI) tools, which capture employee activities, online and offline, that can be considered part of their learning. The technology captures learning activity, and learning professionals use that information to convey their value to the organization.
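To see why activity capture is not capability measurement, consider a minimal sketch of the kind of record an xAPI tool stores, shown here as a Python dictionary (the learner, course, and identifiers are hypothetical). The statement records that an activity happened; it says nothing about what the employee can now do.

```python
# Illustrative xAPI-style statement (learner, course, and identifiers
# are hypothetical). It records that an activity occurred, nothing more.
statement = {
    "actor": {"name": "Jane Doe", "mbox": "mailto:jane.doe@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/nutrition-101",
        "definition": {"name": {"en-US": "Nutrition 101"}},
    },
    "timestamp": "2024-01-15T10:30:00Z",
}

# Nothing here verifies whether the learner can apply the material;
# it only shows that she clicked through to completion.
```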

Capturing activity may be interesting, but does it really show the value of the learning program? Doesn’t software that shows employees organically directing their own learning suggest there is little need for anything other than sharing content? At the very least, software that captures student activities and uses this information to show our value sends the message that providing content is all the learning department needs to do. It says that employees do not need support to learn new skills; they simply need access to content and other employees.

Embracing this technology to measure employee capability or the value of the learning department is harmful to our industry. It may provide visibility into employee activity, but it sets the learning industry back years, to the time when we had to remind learning professionals that “telling isn’t training.”

Old Tools and Principles Repackaged as Innovation

Micro-learning software has arisen as another popular learning technology. For some reason, we think breaking learning into bite-size chunks and constantly shoving it in employees’ faces via computer, tablet, and phone constitutes learning.

At first, I too was on board with micro-learning. I know people find bite-size content easy to absorb, and content reinforcement is a good thing. However, after looking at the practice, I realized that instead of ensuring people learn, the information is served so frequently that they simply memorize the answers to move to the next stage. I imagine employees sitting at the airport, clicking through screens just long enough to complete the learning and get their manager off their back.

Sending reminders of content is good. But using micro-learning as your main learning method or main means to show learning’s impact falls short of ideal. Micro-learning software is simply a new delivery method, like e-learning was 15 years ago. It has its place, but that place is not in proving the capability that learning adds to an organization.

These systems suggest that if employees look at the material, then they are learning. Yes, just-in-time information helps employees reference material when they need it. Yet, again, is that what learning professionals want to hang their hat on? Do we want to convey our value to the organization by our ability to put content into bite-size portions and send it out to employees? Employees already have the internet. Other than controlling what information they find, how is this any different?

Please, don’t get me wrong. These types of systems have their place. They are tools to capture information and deliver content, but you do not need learning professionals to use them. You only need IT people to manage the software and technical writers to populate the content.

Embrace Technology that Delivers Useful Data

Here’s a radical thought: Instead of focusing on technology that delivers content, why don’t we focus on technology that captures data on the effectiveness of our learning methods? Let’s demand technology that enables standards of design and measurement, so we know how to improve our practice. Let us use technology that captures employee capabilities derived directly from our learning programs. We should focus our efforts on technology and methods that answer these four questions.

SCRAP – Are you working on the right things?

LEARNING – Are your programs effective at creating capability?

APPLICATION – Are employees able to transfer their learning to the job?

METRICS – Do the behaviors gained from learning change the organization?

Let’s hold ourselves accountable for our own professionalism. Let’s establish learning measurement standards by clearly stating our expected outcomes in our objectives. Then let’s create corresponding evaluations so we can verify those outcomes with assessments.

Look for Tech That Builds Up Our Profession, Rather Than Tears It Down

As professionals, we should focus on technology that helps us capture the achievement of the targeted outcomes and provides us information to improve our practice. Our profession needs to stop chasing the latest shiny object and get real about our own value.

We owe it to our students to provide them with clear benchmarks and empirical evidence of their achievement. We owe it to our business stakeholders to provide transparency into what we can and cannot do.

When we use our professional expertise to design learning programs that clearly state the expected outcomes, and create assessments that verify those outcomes, we can easily validate our methods and track learning success.

eParamus customers use IMPACT 2.0 software to design measurable learning. The IMPACT tool tracks achievement of the intended outcomes. Results are automatically captured and sent to facilitators, designers, students, and managers. Everyone is focused on the outcomes and the part they play in ensuring those outcomes are learned, applied, and impactful to the organization.

Is software to help you capture the actual outcomes from learning programs sexy? Perhaps not, but it certainly is effective at helping you improve your practice and showing everyone the value of your work.

Would you rather focus on tools that tell you and your business stakeholders useful information? If you’d like to explore what IMPACT can do for your team of learning professionals, please contact us here at eParamus. We’re ready to help.

Please follow eParamus on LinkedIn and feel free to connect with me, Laura Paramoure, PhD, to discuss your specific learning challenges.


