The Things You Do Not Know Because You Use Surveys to Measure Training

Everyone talks about the importance of measuring learning effectiveness. Unfortunately, few have moved past surveys, even when they know surveys are a poor measurement method.

The good news is that there are other, much better options for measuring learning. For the past 10 years, top-performing organizations have been capturing data without the use of surveys. This data, derived from program assessments, shows exactly how effective programs are. The focus is on measuring the intended outcomes of the program (as described in the program’s goals and objectives).

Assessment results offer many advantages over survey data and have been growing in popularity. One major benefit is providing specific information to improve practice. Another is showing learning value without asking anyone’s opinion.

Survey results provide opinions and perceptions of the training program. Inherently, opinions and perceptions do not confirm actual learning, application, or impact. Since they are not facts, they cannot reliably tell us about program improvements or value.

Data acquired from measuring capability (the thing learning provides) supports a simple cause-and-effect analysis. Capability is either added and transferred to the job, or it is not. Either the change in capability filled the metric gap, or it did not.

Valuable information

At eParamus, we are continually surprised by the complacency around surveys, especially given the superiority of comparative test data. So we thought we would share the things you do not know when you use surveys to measure.

Measuring in the simplest terms allows learning professionals to create straightforward data. Test data tied to specific objectives does not need statistical or complicated analysis. Learning professionals can determine many things using their current analytical skills. Specifically, they have clear answers to the following questions:

Are we working on the right things?

Everyone has an opinion on the courses needed. In fact, with all those opinions, learning professionals rarely get a voice in the decision. Using test data methods, we can determine whether the requested capability is already present in the intended audience. This verifies the need for the content covered in a program. When reviewing over 10 years of test data, we found that in most courses produced today, over 55 percent of the content is already known by the audience. That is a lot of wasted time and money delivering programs that have little chance of making a difference!
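The pre-assessment check described above can be sketched in a few lines. This is only a minimal illustration: the objective names, scores, and the 80 percent “already known” threshold are made-up assumptions, not eParamus’s actual method.

```python
# Minimal sketch: estimate how much planned content the audience already knows
# from a pre-assessment. Objectives, scores, and the 80% threshold are
# illustrative assumptions.

KNOWN = 0.80  # assumed threshold for counting an objective as already known

# Average pre-assessment score per planned objective (hypothetical data)
pre_scores = {
    "read_financials": 0.92,
    "build_budget": 0.40,
    "forecast_cash": 0.85,
    "variance_analysis": 0.35,
}

already_known = [obj for obj, score in pre_scores.items() if score >= KNOWN]
share = len(already_known) / len(pre_scores)
print(f"{share:.0%} of planned objectives already known: {already_known}")
```

If the share is high, the planned course largely covers material the audience already has, and the content plan should be revisited before any delivery cost is incurred.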

Are programs creating employee capability?

When units of measure are clear (for example, specific critical-thinking or behavioral skills), you can determine whether employees have gained the intended skills by using the assessments created as part of instructional design. Your assessments become the instrument you use to verify which skills are gained. You can verify capability and determine the effectiveness of teaching methods for any learning program.
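As a sketch, comparing pre- and post-program scores per objective makes the gain explicit. The objective names, scores, and mastery threshold below are hypothetical:

```python
# Minimal sketch: verify capability gain by comparing pre- and post-program
# assessment scores per objective. All numbers are illustrative assumptions.

MASTERY = 0.80  # assumed score needed to count an objective as mastered

pre_scores = {"identify_risks": 0.45, "apply_controls": 0.30, "report_findings": 0.70}
post_scores = {"identify_risks": 0.90, "apply_controls": 0.85, "report_findings": 0.75}

for objective in pre_scores:
    gain = post_scores[objective] - pre_scores[objective]
    mastered = post_scores[objective] >= MASTERY
    print(f"{objective}: gain {gain:+.0%}, mastered: {mastered}")
```

An objective with little gain, or one still below mastery after the program, points directly at the content or teaching method that needs improvement.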

Is learning sustained/applied to the job?

As with measuring the attainment of capability, comparative test measurement can determine whether the capability gained in the classroom is being used on the job. The benchmarks for the success of a skill are defined in the learning course. Measuring their existence on the job is as simple as reassessment (using the same program instrument). Reviewing test data on transfer over the years shows that a lot of capability is lost after it is learned. Timely and credible data is a game-changer when learning departments want to engage other business units in supporting learning transfer. People can question opinions on a survey. But clear evidence of capability gained in the classroom and then lost back on the job is undeniable. It often takes this type of credible data to get the organization engaged in learning outcomes.
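Reassessment with the same instrument reduces to a simple retention calculation. Here is a minimal hypothetical sketch; the objective names, scores, and 90-day interval are assumptions for illustration:

```python
# Minimal sketch: quantify learning transfer by reassessing on the job with the
# same instrument used at the end of the program. Data is illustrative.

post_course = {"identify_risks": 0.90, "apply_controls": 0.85}
on_job_90d = {"identify_risks": 0.60, "apply_controls": 0.80}  # reassessed ~90 days later

for objective, classroom_score in post_course.items():
    retained = on_job_90d[objective] / classroom_score
    print(f"{objective}: retained {retained:.0%} of classroom capability on the job")
```

A large drop on a specific objective is the kind of concrete evidence that gets managers and business units engaged in supporting transfer.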

What skills are added to the organization?

Measuring the intended capability expected from a learning program makes it simple to report on the skills gained from programs. Tracking capability as it is gained and then applied paints a clear picture: it shows the specific skills added to the organization. Test data, tied directly to skill objectives, can show all the competence added through the efforts of the learning department.

Which business unit metrics change from learning?

Test measurement uncovers skills added to employees and applied to jobs, so making the connection between learning programs and organizational changes has never been easier. Most learning programs aim to correct poor performance. By design, they target employee capability to address the business unit metrics that measure that performance. Using test measurement, there is a direct link between changes on the job and metric changes. The application of the skills can be directly related to changes in both productivity and quality of performance (as indicated in the metrics). Most importantly, the metrics influenced by learning programs are easily identified.

The bottom line

Using surveys as a primary measurement for learning is both outdated and a waste of time and effort, because they do not give us dependable information to improve our practice. Surveys are not verification, and they are not credible. True, they are easy to administer. But they also lead us down the path of chasing the “popular vote” instead of securing quality outcomes.

A final point to consider:

Asking employees to complete surveys to tell us if programs are effective highlights the fact that we do not know if our work is effective.

It emphasizes our poor professional practice when we rely on others to tell us if we are good at our work. Providing actual measurement of skill ability gives employees two things. First, it gives them clarity on what is expected. Second, it gives them confidence through verification of their skills. As learning professionals, that is the very least we should provide.

Next time you want feedback on how you are doing, try the direct approach: measure the intended outcomes from your course. You know what skills attendees should gain. Design your program so that the outcomes (and verification of those outcomes) are clear. Skip the survey and answer the important questions for yourself!

Contact us

To learn more, contact us here at eParamus. Please follow eParamus on LinkedIn and feel free to connect with me, Laura Paramoure, PhD, to discuss your specific learning challenges.

