For eight years, Chief Learning Officer magazine has honored top L&D organizations with its LearningElite awards. The magazine says these awards go to “organizations that employ exemplary workforce development strategies that deliver significant business results.” The process is rigorous and peer-reviewed. The ultimate winner is chosen after the top five competitors compete in a final capstone project. Winners represent the elite when it comes to learning program success. Their secret is aligning their learning strategy with business strategy.
The shoemaker’s children have no shoes. We grasp the meaning of that proverb all too well. Learning development teams realize the importance of learning. However, we often fail to apply it to our own careers.
In L&D, our work helps others learn. We encourage people in their professional development. We pour our hearts and souls into it. Yet often we neglect our own careers. We develop learning for a living, but fail to practice our own learning development.
In part 1 of this series, we described the learning phases in an organization.
Here’s how learning programs often come about. A manager notices a problem. Let’s say it’s a spike in complaints about customer service. The manager approaches the learning department and asks for customer service training.
The manager then sends employees to training with high hopes. After training, the manager expects employees to return to the job able to do their jobs better.
Learning professionals often take the request from the manager at face value. They create a program and then expect the students to take newfound knowledge and apply it on the job.
Trainers deliver the content, hope it sticks, and then send the employee back to the business unit managers to deal with. Problem solved.
Trainers think their job is done because they have designed and delivered the training to the student.
Business managers think their job is done because they sent their employees to training.
So, if everyone has done their job, why is up to 85 percent of training scrapped?
A 2004 study found that slightly less than 20 percent of participants never apply what they learn in a training program back on the job. Another 65 percent try what they learned but revert to their old ways. That makes a whopping 85 percent who scrap their learning. More recently, a 2014 Corporate Executive Board (CEB) whitepaper reported that in the average organization, 45 percent of all learning delivered ends up not being applied.
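The arithmetic behind that 85 percent figure is simple enough to sketch. This is a back-of-the-envelope illustration of how the number is composed, not the study's own calculation:

```python
# Rough composition of the oft-cited "85 percent scrapped" figure.
never_apply_pct = 20   # never apply the training on the job at all
revert_pct = 65        # try the new approach, then revert to old ways
scrapped_pct = never_apply_pct + revert_pct

print(f"Training effectively scrapped: {scrapped_pct}%")  # → 85%
```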
Those statistics are horrendous. Stakeholders—both learning professionals and business managers—do not recognize their own accountability or limits in learning success.
Where does accountability fall in the learning process?
The learning professional has control of and should be accountable for the following:
- Create learning programs that generate a new ability (knowledge, skill, or attitude) in the student.
- Identify the specific job behaviors that must change to improve job performance.
- Show how that learning translates to actions taken while on the job.
- Ensure the business metrics (key performance indicators) that will change when job behavior changes are identified, documented, and incorporated into the design of their program.
The business unit manager has control of and should be accountable for the following:
- Tell the learning professional the improvements in employee behavior that are needed.
- Identify the business metric to use to determine the success of the learning.
- Ensure students understand the expectation that learning will be applied when they return to the job.
- Ensure the job environment (time, systems, processes, recognition, manager expectations) aligns to support the new learning.
This seems fairly straightforward, but both groups—learning professionals and business unit managers—miss the mark all too often.
If it’s so clear, what’s the problem?
Learning professionals have trouble speaking the language of business. Business managers have trouble speaking the language of learning.
Like the customer service example given above, business managers typically approach the training team because of a perceived business need. “There’s a customer service problem. Give us training on customer service.”
Business managers think in terms of performance and typically request training by the subject they believe is needed. Learning pros think in terms of learning objectives and communicate with the manager in those terms. Fortunately, both groups understand the need to improve behavior.
The outcome envisioned by these two stakeholders in terms of behavior should be agreed upon in the Examining/Planning phases of learning. Accountability for supporting behavior change in each phase should be understood.
Metrics are the key to getting both sides to speak the same language. Focusing on the metrics that need to change, and then on the behaviors needed to create that change, will smooth out this process. We’ll delve deeper into this in part 3 of this series.
Do your business managers and training professionals speak different languages? Do they understand where accountability lies in the learning process? Contact us at eParamus. We’d love to help get everyone on the same page.
Photo copyright: ssilver / 123RF Stock Photo
Have you thought about how learning progresses through an organization? It occurs in four distinct phases: Examining, Learning, Applying, and Impacting.
First, the organization is examined to determine the need for learning. Second, the student learns the material. Third, the learner applies the material on the job. Finally, the learner’s new skills have (or should have) a positive and measurable impact on the business.
The matrix above graphically shows how accountability shifts during those phases. It moves to different roles during each phase of the learning cycle because in each phase there is a distinct role that has a span of control over learning results.
The trainer is most accountable when learning is acquired. This accountability occurs during learning program design and in the classroom where the learning program is taught.
After the classroom, accountability shifts to the student’s manager. This is the stage where learning is applied on the job. The manager is the person most directly in charge of the day-to-day responsibilities of the learner. In that role, the manager must ensure that the learner applies their new knowledge while at work.
The diagram also shows how accountability aligns with evaluations (measurements) that assess learning. These measures show results at different points during learning progression. Collectively, the measures show how learning affected the organization.
Knowing Who Is Accountable Matters
Learning professionals control the design and delivery of learning programs. However, the success of their programs is often judged by changes that occur on the job. (In other words, did the learning transfer? Did the learner apply their new knowledge or skill while on the job? Did that learning have a positive impact on performance?)
Learning professionals are held accountable for the student’s use of the learning, despite having little control over what the student does back on the job or over the student’s job environment.
To compound this challenge, most learning professionals don’t know how to tie their learning programs to business unit metrics. They fail to realize that learning must target employee behavior change that will cause the business metric to change. Without this link, the learning program has little opportunity to result in positive transfer to the job or impact to organizational measures.
The Wrong Accountability Focus
Learning professionals realize they are being held accountable for on-the-job results. Knowing this, they attempt to insert control back on the job (despite, as shown above, having little control in that arena). They attempt this with mentoring, action plans, and other reinforcing activities.
On the face of it, these activities are good because they may help the student retain the material or even apply it.
However, more often, learning professionals run into resistance with these activities. Why? Because learning professionals do not supervise or manage the students on the job. They have little influence over the student’s time, tasks, motivation, or job environment. It’s hard for the learning professional to enforce the completion of these activities when they do not manage the employees directly.
It’s not the learning professional who should be accountable at this phase of learning progression. It’s the manager. The manager should understand that the new knowledge and ability may require changes to the job environment or support in order to be applied.
This confusion in accountability causes mixed signals and misplaced expectations. Without partnership in the progression of learning, business unit managers will see learning as ineffective and learning professionals will struggle against that perception.
The solution is to recognize these phases and agree on accountability roles. Learning must be a collaboration between learning professionals and business unit managers. It’s this recognition and working together that will lead to positive change for both the learning professional and the business unit. In part 2 of this series, we’ll give specifics for who is accountable for what and show how to improve collaboration and infuse accountability within your organization.
Do you need to adjust the accountability expectations of your learning programs? Contact us at eParamus. We’d love to help.
The state of measurement in L&D lags woefully behind all other areas of the business. This lag has made learning measurement a hot topic at nearly every conference I’ve attended this year. L&D professionals realize that this area needs serious improvement.
Given the sheer volume of data businesses collect today, it seems truly insane that the L&D profession isn’t doing a better job.
Consider this quote from a Chief Learning Officer article, “Stagnant Outlook for Learning Measurement”:
“In 2008, most enterprises reported a high level of dissatisfaction with the state of training measurement. By 2010, that feeling had moderated substantially as analytics became mainstream in other areas of the business. But once CLOs could compare their capabilities with capabilities in other business areas, their dissatisfaction increased. In 2013, 2014 and again in 2015, there has been more dissatisfaction — partially resulting from higher expectations combined with continuing challenges around resources and leader support.” (Emphasis mine.)
I’d bet that not only did CLO dissatisfaction increase, but that the dissatisfaction of the CLO’s boss also increased.
L&D will continue to be devalued if it cannot bring its level of accountability and reporting up to at least the same level as other departments and functions within the business.
Why Is Securing A Learning Measurement Budget So Hard?
For one thing, L&D and business are talking at odds. Business uses the language of business and L&D uses the language of training.
For another, many businesses and L&D professionals don’t know that they should even ask for this essential part of their budget. Most of us think of the L&D budget as one big bucket. As awareness of learning measurement options increases, so does the need for a dedicated learning measurement budget.
As businesses and learning professionals attempt to solve the measurement challenge, they need resources specifically set aside for measurement and evaluation.
Thankfully, L&D professionals are finally coming around to the idea that learning is, in fact, measurable. There are ways you can make it easier to secure funds for adding learning measurement to your budget.
3 Easy Ways to Get Learning Measurement Added to Your Budget
- Show that learning measurement is not only possible, but it’s also simple.
My book, ROI By Design, explains how you can design courses to measure learning impact. If you skim through my blog, you can find a multitude of tips on making program measurement easy.
- Speak the language of business.
My last post described how framing learning measurement in terms of quality assurance (QA) can put you and your business leaders on the same page.
- Explain to your bosses why effectiveness is a much better measure than efficiency.
Many business leaders just want you to tell them how many courses you delivered, how many people you trained, and what the results were on your smile-sheet surveys, because that is the only data they have ever gotten from L&D. Ultimately, they are dissatisfied with that data because those figures tell them practically nothing about the value of L&D. Switch your learning to measure business outcomes and metrics, deliver that data to your boss, and suddenly you’re playing a whole new ballgame.
The need for learning measurement will only grow, especially as our profession is compared to the data-driven decisions of other company functions.
What is the state of learning measurement in your organization? Do you need help making it data-driven so you can more easily secure budget for learning measurement? Contact eParamus and we can help you make that a reality.
I often ask people if their learning and development (L&D) function uses a quality assurance (QA) program. They sometimes reply saying that their QA is making sure courses include all the required design components.
That’s the wrong answer.
Again, many L&D professionals confuse tasks with results.
We discussed this before when we covered the difference between efficiency and effectiveness metrics. This thinking about quality assurance is another mental hurdle that L&D should leap over.
Measurement Is the QA of Training
I would like to help our industry — and the businesses it interacts with — understand this: A measurement program for learning is the same as a QA program. Without measurement, you can’t gauge the quality of your programs and you have no data to show anyone else.
Quality control (QC) and QA are foundational measures for most businesses. Businesses assume these types of measures occur in every department. They expect metrics and data from all business groups. They understand QA is simply “good business.”
However, most people in L&D and, in fact, most businesses do not understand that learning measurement is the QA program for learning.
Every year, L&D has trouble securing budget for measurement. Why?
Because leadership does not understand what measurement means when it comes to learning. (Have you had trouble securing budget for training measurement? Here are tips to help. And a post coming later this month will provide even more tips for making budget time easier.)
QA is the easiest way to think about it. Using that concept helps align the language (and practices) of L&D in a way that business understands.
Is It a Language Barrier?
We’ve said before that often L&D and business aren’t speaking the same language. This word choice is one way to change that.
We use the term “measurement” in L&D because that is the term that has emerged when people describe the “unknown” value of learning. But, the truth is, measurement in learning is the same as a QA program.
Most departments in an organization have QA/QC programs. They have ways to assure that the product/service/process they deliver is of good quality.
The finance department uses auditors. Operations measures errors. Manufacturers measure material scrap. Engineers use modeling to test stress loads. You get the picture.
Every group has quality measures of some kind to prove the effectiveness of their product. Quality assurance (through measurement) is just a routine part of business.
For learning, measuring results in terms of knowledge and skill is the way to ensure the quality of learning design and facilitation.
If L&D designs a program well, then measuring the results will show that the program achieved the intended increase in knowledge, skills, and behavior. If it did not, measurement also allows you to diagnose and repair the learning program to improve the design quality and get better results.
Do you think there’s a language barrier between L&D and business? Do you see the connection between training measurement and QA? If you need help making this connection, please contact us.
No matter what name they go by, survey responses don’t correlate to learning.
In a meta-analysis of the literature covering over 150 research studies, the correlation between survey results and learning was r = .09. In other words, a correlation so small that it doesn’t even count as a correlation.
Sadly, the survey is where nearly ALL learning organizations stop when it comes to measuring the success of their learning.
Why? Because surveys are simple, easy to use, and widely available. The training industry consistently uses end-of-course surveys to assess training, but their popularity demonstrates only that they are easy to complete, not that they are effective.
Do We Need Better Smile Sheets?
Is the answer better surveys? I don’t think so.
What survey responses boil down to are opinions.
- Did you like the instructor?
- Did you like the class?
- Do you think you’ll use what you learned in class back on the job?
- Do you think your direct reports will use what they learned in class back on the job?
None of those responses tell us anything about what the student actually learned, retained, or will use on the job.
As learning professionals, we must stop trying to show our value by highlighting the number of courses produced or survey results that only describe the perception of impact.
If Not A Survey, Then What?
The key to measuring training effectiveness lies in measuring the real outcome of learning, behavior change, not opinion. With Measurable Instructional Design®, you create a direct connection between the objectives of a course, mastery of the material, and changes on the job. It’s this connection that produces certainty about impact.
Measurable Instructional Design® enables training professionals to clearly demonstrate how their training design creates the behavior changes that can impact performance. It also lets them achieve predictable results, because the method tests impact against a specific design. Specificity in design greatly reduces variation in training results. The more you can measure your success, the more you learn about the best ways to analyze performance results and performance gaps. You can then fill those gaps with increasingly effective instructional methods.
Most important of all, Measurable Instructional Design® allows training professionals to truly see how their programs impact an organization. Once you understand the influence that design has on results, you clearly see what training can and cannot do. And once you understand how learning makes an impact, you can focus on delivering training that improves business results.
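The chain this approach describes, from course objective to job behavior to business metric, can be sketched as a simple mapping. This is a hypothetical illustration; the names and structure here are mine, not part of the Measurable Instructional Design® toolset:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    skill: str             # what the course teaches
    job_behavior: str      # the on-the-job behavior that should change
    business_metric: str   # the KPI that behavior should move

# A hypothetical customer service course, echoing the earlier example.
course_design = [
    Objective("de-escalation techniques",
              "agent resolves complaints without escalating",
              "customer complaint rate"),
    Objective("product knowledge refresh",
              "agent answers product questions on the first call",
              "first-call resolution rate"),
]

# Evaluation then becomes a traceable question per objective, not an opinion poll.
for obj in course_design:
    print(f"Did '{obj.job_behavior}' change? Check the {obj.business_metric}.")
```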
Do you see value in smile sheet surveys? Do you think there are better ways to assess training impact? Leave a comment letting me know what you think.
Image copyright: toddkuhns / 123RF Stock Photo
Hiring managers complain that their applicants lack even the basic knowledge needed to fill a role. And job hunters think companies want purple unicorns—superhumans with an impossible combination of skills—to fill open roles. There is likely some truth on both sides of the issue.
I just read another article talking about the skills gap. This article offers ideas for closing that gap for good. The author makes a good point. She says:
“Companies can shift their recruiting strategies to ‘hire for attitude, train for skill’ as a first step to close the skills gap. This approach to recruiting prioritizes hiring talent who align closely with a company’s culture, vision, and attitude above finding hires with the perfect set of pre-existing skills.”
Hiring someone who fits well within the culture of your company makes sense. In my book, ROI By Design, I say the following:
Training “cannot make a manager a nicer person. If the manager knows what to do and shows through exercises in the course that he or she can display the soft skill, then training is successful. If after the course, the manager chooses not to use the skill on the job then that is another issue. It is no longer a training issue. Reinforcing what training can and cannot fix with a gap in performance is important. Soft skills fall under the same rules as technical skills with regard to what training can do. If addressing the gap requires a knowledge, skill, or attitude change, then whether it is a soft skill or a technical skill it can be measured. If it is not knowledge, skill, or attitude change that is needed, then it is not a training issue.”
No amount of training can fix a personality issue. Training will not change the chemistry between a new hire and your company.
Hire for Attitude; Train for Skills Gaps
If you’ve found a great person, but the skills are lacking, what can you do? As a hiring manager, you must ask: If this person fits in my company in every other way, is training the answer?
You have to answer these questions:
- How wide is the gap between what they know and what they need to know?
- Does my company have the time to train them and get them up to speed?
- Does my company have the resources available—materials, mentors, learning tools—to train them?
- Are we looking for purple unicorns? If so, do we need to change our mind about who we are willing to hire?
Ways to Transition New Employees
You can address many of these topics with a solid learning program. Prepare new hires with an onboarding program that helps them learn the ropes at the new company. Pair new employees with mentors. More senior employees can guide newer employees as they set their career path.
What about you? Do you think moaning about the skills gap is helpful? What solutions have you tried to fix this problem? Please comment if you have ideas for solving this issue.
And, if you need help strengthening your training or onboarding programs, please contact us here at eParamus. We’d love to help.
If you think it sounds farfetched, you’re not alone. This struggle has been going on for quite some time.
Many learning officers and the businesses they serve hold onto a set of false beliefs: that training is a nice-to-have perk; that learning isn’t a value-add for the business; that it can’t be measured or quantified, and you certainly can’t prove it adds a dollar value to the bottom line; and that, if the organization needs to make budget cuts, the learning function should be first in line.
I’ve railed against these limiting beliefs for some time, both here on my blog and in my book, ROI By Design.
Check out these posts to read some of my thoughts on this very subject:
- Training Impact Measurement Is A Blind Spot for Business
- Can You Create Measurable Training?
- Training Impact: Is It About Isolating the Effect of Training?
- Training Evaluation Shifts L&D From Overhead To Business Partner
Because this topic is such a passion of mine, I love transitioning my clients away from this mindset. I also enjoy hearing of others who have seen the light.
Transform L&D With Data
Take for example, this story recently posted by Chief Learning Officer Magazine. It describes how a financial tech company transformed their tiny learning department (only 2 people!) from the role of order taker to the role of business partner.
How did they do it? Metrics and measurement, of course! That’s where the rubber meets the road in the learning function. Businesses believe in data and proof. If you want to change L&D from a touchy-feely, smiley sheet function to a partner that is listened to and valued, data is your answer.
Consider this quote from the article:
“One of the key challenges that learning organizations face today is getting a seat at the decision table. The learning and development challenge is to demonstrate impact on business performance…”
“…[A]s the organization began to look more closely at overall business results, it recognized that learning and development could be a key differentiator in business performance.
Until then, the company’s learning organization — made of two employees — lacked a learning strategy, specific goals and metrics. It implemented several one-off training classes but was unable to quantify business impact.”
The Steps You Need to Transform L&D
As I read the article, I see the same steps that I go through with my own clients.
First, they realized their current actions would not cut it in today’s business environment.
Second, they assessed the business L&D needs.
Third, they prioritized those needs based on value to the business.
Next, they partnered with the business leaders inside the company and learned how to speak the language of business.
They then determined the metrics to measure, created learning programs aimed at those metrics, and then measured the impact of learning on the business.
It sounds so simple when you spell it out like that, but so many L&D functions and businesses find it so hard.
It really shouldn’t be. (See: Training ROI Is A Struggle—But It Shouldn’t Be)
Are you facing this struggle now? Do you wish you knew how to measure training so you could prove its business impact? To learn more, I hope you’ll check out the other blog entries I’ve mentioned in this post. And if you want an in-depth exploration of this topic, consider ordering my book, ROI By Design. It will help you see just how simple this transformation can be.
Next week, I’ll be leading a pre-conference workshop. The workshop is for the annual conference of the ANPD in Las Vegas. (The ANPD is the Association of Nursing Professional Development.)
The topics of my workshop are my favorites—finding the ROI of education and ROI by Design®.
How are training ROI and nursing professional development related? Good question! To answer, first let’s talk about ANPD.
Using Nursing Professional Development for Better Healthcare Outcomes
The goal of ANPD is to inspire members to excel through education services, with the ultimate purpose of enhancing and improving healthcare outcomes. Professional development and outcome improvement are defined by standards and based on research. ANPD recognizes the link between best practices and better healthcare outcomes.
Their purpose sounds almost exactly like the goals of the ROI by Design® model. Define job and role standards. When creating training, tie job standards to desired outcomes. Measure the results. Make changes based on outcomes and reporting.
A goal of this conference is for attendees to identify best practices and research evidence, and to measure outcomes. Nursing leaders, HR leaders, nurses, teachers, and students will all be at this conference. They all want to know how to achieve better outcomes in healthcare.
Targeted Education and Training ROI
My session is a pre-conference workshop about ROI by Design. It’s on Monday, July 13.
Session attendees will learn how to measure the ROI of education and training. I’ll describe a new training model that uses employee education to create behavior changes. Those behavior changes link to the metrics that your organization measures. That way, you can see exactly how education impacts your health organization.
In my session, you will learn how to use this model and measure your training outcomes. I’ll describe how other progressive organizations use this model and present real-world examples to explain how you can get the same types of outcomes. We’ll also talk about the implications of this work for both practice and research in the healthcare field.
Do you want to read more about how I helped one healthcare organization improve their healthcare outcomes? You can download the UNC Healthcare case study.
Are you planning to attend the pre-conference workshops and the ANPD conference this year? If you plan to attend, drop me a note or leave a comment. I’d love to meet you there.