Reaction sheets. Happy sheets. Result surveys.
No matter what name they go by, survey responses don’t correlate with learning.
In a meta-analysis of the literature covering more than 150 research studies, the correlation between survey results and learning was r = .09. In other words, a correlation so small that it barely counts as a correlation at all.
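To put that figure in perspective, the square of a correlation coefficient estimates the share of variance one measure explains in the other. A quick sketch of the arithmetic (the r = .09 value is from the meta-analysis cited above; everything else is illustration):

```python
# How much of the variation in learning do survey scores account for
# when the correlation is r = .09?
r = 0.09
r_squared = r ** 2  # proportion of variance "explained"

print(f"r = {r}")
print(f"r^2 = {r_squared:.4f}")          # 0.0081
print(f"variance explained: {r_squared:.1%}")  # under 1%
```

Less than one percent of the variation in learning outcomes tracks with survey responses, which is why a high smile-sheet score tells you almost nothing about whether anyone learned.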
Sadly, the survey is where nearly ALL learning organizations stop when it comes to measuring the success of their learning.
Why? Because surveys are simple, easy to use, and widely available. The training industry consistently relies on end-of-course surveys to assess training, but their popularity demonstrates only that they are convenient, not that they are effective.
Do We Need Better Smile Sheets?
Is the answer better surveys? I don’t think so.
What survey responses boil down to are opinions.
- Did you like the instructor?
- Did you like the class?
- Do you think you’ll use what you learned in class back on the job?
- Do you think your direct reports will use what they learned in class back on the job?
None of those responses tells us anything about what the student actually learned, retained, or will use on the job.
As learning professionals, we must stop trying to show our value by highlighting the number of courses produced or survey results that only describe the perception of impact.
If Not A Survey, Then What?
The key to measuring training effectiveness lies in measuring the real outcome of learning: behavior change, not opinion. With Measurable Instructional Design®, you create a direct connection between the objectives of a course, mastery of the material, and changes in on-the-job behavior. It is this connection that produces certainty about impact.
Measurable Instructional Design® enables training professionals to clearly demonstrate how their training design creates the behavior changes that can improve performance. It also allows them to achieve predictable results, because the method tests impact against a specific design, and specificity in design greatly reduces variation in training results. The more you measure your success, the more you learn about the best ways to analyze performance results and performance gaps. You can then close those gaps with increasingly effective instructional methods.
Most important of all, Measurable Instructional Design® allows training professionals to see how their programs truly impact an organization. Once you understand the influence that design has on results, you see clearly what training can and cannot do, and you can focus on delivering training that improves business results.
Do you see value in smile sheet surveys? Do you think there are better ways to assess training impact? Leave a comment letting me know what you think.
Please follow eParamus on LinkedIn and feel free to connect with me, Laura Paramoure, PhD. I’d love to know more about your training challenges.