Over the years we have discussed why surveys are a poor measure of learning. The prime reason is that they collect self-reported data. Surveys ask participants what they think, believe, or feel about training (its content or usefulness), but they provide no evidence of actual learning gains or on-the-job application. Surveys have their place in gathering the thoughts, beliefs, and feelings of a large population, but they are not reliable tools for verifying learning outcomes.
Opinion Data vs. Empirical Data
When eParamus works with customers, we replace survey measurements with assessments that empirically verify knowledge and skill gains. We encourage our customers to stop using surveys (Level One evaluations) altogether; in our experience, continuing to run surveys alongside factual assessments hinders their ability to embrace more accurate measurements. We want them to rely instead on more reliable data.
Our customers come to us because they need a better way to verify learning impact. However, what we did not count on was how uncomfortable many would be letting go of their survey results.
Fortunately, this dedication to old habits ended up helping them make the shift. When the two kinds of data are set side by side, the limits of the old way of measuring become obvious.
Precise Measures Tied to Learning Objectives
Essentially, surveys used as measurement tools ask questions about whether participants learned the material, whether they expect to apply it on the job, and the overall value of the course. Learning professionals often use these self-reported opinions to justify the efficacy of their learning programs.
The ROI by Design measurement, by contrast, verifies learning precisely: learning success is determined by achievement of the objectives targeted in the course. Objectives are identified (and results are displayed) in three categories: basic knowledge, critical thinking skills, and behavior skills.
When you collect results by objective (not by evaluation question), you can measure against the expected outcomes (skills) and display results showing the effectiveness of specific objectives or of whole topics within a learning course.
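As a minimal sketch of this idea, here is what grouping assessment item results by the objective each item maps to, rather than by individual question, might look like. The field names and sample data are hypothetical illustrations, not the actual ROI by Design or Business IMPACT 2.0 schema:

```python
from collections import defaultdict

# Hypothetical assessment results: each test item is tagged with the
# learning objective it verifies (names are illustrative only).
item_results = [
    {"objective": "Explain the pricing model", "passed": True},
    {"objective": "Explain the pricing model", "passed": True},
    {"objective": "Diagnose customer churn risk", "passed": False},
    {"objective": "Diagnose customer churn risk", "passed": True},
    {"objective": "Conduct a renewal call", "passed": True},
]

def pass_rate_by_objective(results):
    """Aggregate pass/fail results by objective, not by question."""
    tally = defaultdict(lambda: {"passed": 0, "total": 0})
    for r in results:
        t = tally[r["objective"]]
        t["total"] += 1
        t["passed"] += int(r["passed"])
    return {obj: t["passed"] / t["total"] for obj, t in tally.items()}

rates = pass_rate_by_objective(item_results)
for obj, rate in rates.items():
    print(f"{obj}: {rate:.0%}")
```

Reported this way, a weak objective stands out immediately (here, the second objective's 50% pass rate), which is exactly the diagnostic view a question-by-question survey summary cannot give you.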
When the Data Disagrees
Recently, one of our customers shared the results of a level one survey. He said the survey asked about topics that the course covered. The survey asked the participants if the material in each topic was relevant, if they thought the topic was taught effectively, and if they felt they would use the learning on the job.
The program included four topic areas. Data from the eParamus tool Business IMPACT 2.0 showed learning was accomplished in three topic areas but the fourth topic was not learned.
The self-reported survey results showed that students thought ALL topic areas were relevant, all areas were effective, and they would use the information from all topic areas on the job. Interesting.
IMPACT data clearly showed students failed to learn one topic (they failed the test). But on the survey, students self-reported that they had learned the material and would use it on the job.
Even when we have the best intentions, what we say is not always true, and what we say we will do does not always translate into action. This is a good example.
Your learners cannot use a new skill if they have not learned it. Personal motivations drive our answers to survey questions, which makes those answers untrustworthy as empirical data. Instead, collect evidence of learning through ability assessments. That way, the data cannot be tainted: either the learner knows the material or does not.
Testing as Accepted Verification
Logically we all understand that testing is the only way to truly verify learning and application. We routinely use tests for business credentials. Anyone who has ever earned a diploma, degree, certificate, or business accreditation accepts that testing is the prime method of verification. Every day, the business community uses these verified levels of competency when deciding who to hire.
Surveys are great for gathering general information on expectations, perceptions, and feelings, but they are unreliable for verifying results or driving decision making. As learning professionals, we intuitively understand that we must collect empirical data to improve our reputation as a business partner and provide credible performance data.
Fortunately, as the use of good measurement techniques grows within the learning industry, we’ll clearly see the benefits of the new methods over the old. Lessons learned from this shift will continue to improve our practice and show verifiable value to stakeholders.
Do you need to shift away from surveys as your go-to measurement tool? Do you need empirical data to prove your learning program value? We can help you make this shift. Please contact us at eParamus for help.
Please follow eParamus on LinkedIn and feel free to connect with me, Laura Paramoure, PhD to discuss the learning challenges you face.