Top 5 L&D challenges while measuring impact, and how to tackle them

The proof of the program is in the implementation... oops! But true.

After the leadership training program last season, Kritika was super-excited when she saw that 90 percent of the participants couldn’t stop at just one exclamation mark after “excellent”. But six months later, life was back to square one, and senior management felt they needed to run more “impactful” programs for their managers. Kritika wished Kirkpatrick had given a little more importance to the reaction stage when evaluating the success of a learning program; she would have nailed it.

How many times have you got excited looking at the feedback sheets immediately after a program and felt like Kritika?

We can’t stress enough how important it is to get the “evaluation of impact” part right if you want to build credibility for your programs and contribute effectively to organizational growth. We also know it’s not easy. After engaging with thousands of business and L&D leaders, we know by now that demonstrating impact from an ROI perspective mostly hits a blind spot.

The good news is that with the right technique and preparation, you can evaluate the effectiveness of your learning programs and show tangible results. That is what the organization is looking for, isn’t it? If you get it right, you can not only win your management’s confidence but also bag bigger learning budgets in the times to come.

But why is it so difficult to measure impact? Let’s get started by addressing the top 5 challenges you will face while trying to show ROI.
- Post-program action items are too vague to measure: Participants write “I want to think strategically” or “I want to be more profitable”. When you follow up on these action items after, say, six months, you end up with nothing concrete. Showing ROI is impossible when the goals are not actionable and quantifiable. Think about when a participant writes such generic action items: either they could not easily link the learning back to their work, or they did not invest enough time and effort in planning the implementation.

What you can do about it: Once you have collected the expected learning outcomes, make sure the program is optimized to drive those outcomes distinctly. During the program, real business challenges should be addressed or solved. Based on the insights gained, participants should be encouraged to create realistic, actionable implementation plans they can start working on soon after the program. The program facilitator should guide them to set SMART goals: specific, measurable, attainable, realistic and time-bound.

Another thing you could drive: have each participant hold a 15-minute conversation with their reporting manager immediately after the program, to refine their action items and make them relevant to their role and KRAs. Record these action items for future follow-ups.
- Learning is difficult to retain if not practiced at work: When it comes to participants implementing what they learnt, it’s always a fight against time. If learning is not retained long enough to be implemented, it is futile, no matter how impactful it seemed at the reaction stage (immediately after the program).

What you can do about it: There are ways to prolong retention and reinforce key takeaways from a learning intervention. Run trial exercises, quizzes, modules, discussion forums, etc. for the participants at periodic intervals so that they do not phase out of the program. A monthly follow-up on the implementation progress of their action items is also a great way to keep them going.
- Tracking is a painstaking and time-consuming process: Keeping track of and measuring impact over multiple stages with limited resources becomes very cumbersome. It is very difficult to balance lining up future programs with following up on the last ones.

What you can do about it: Involve other stakeholders, especially reporting managers, to measure improvement periodically. They can help you record performance-based metrics like improved work processes, quantity or quality of work, or even subtle changes like higher motivation, proactivity and a positive attitude. The key here is to partner with them from the pre-program design stage: the more input they give in designing the program, the more they will believe in its relevance and effectiveness in improving their team’s performance. Defining program outcomes based on metrics they already track will make it easier for them to evaluate its impact for you. If you have an external partner for the program, it is good practice to agree with them upfront that they will help evaluate the effectiveness of the program after a particular time period.
- You don’t have end-to-end tracking: It would be great if the performance or improvement of each participant could be measured progressively over a period of time. Make a collective analysis of their learning and implementation journey, from the reaction stage right up to the result stage.

What you can do about it: Ask for a system which allows the various stakeholders to monitor and record performance improvements of the participants over a period of time after the program. It should list their actionable goals and behaviour metrics which correlate directly to the key insights or learning from the program. At enParadigm we developed an online portal called Cockpit, maintained by our customer success team, exclusively for one purpose: to give L&D or business heads a 360-degree view of the participant’s journey, from learning to retention to implementation.
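To make the idea of such a tracking system concrete, here is a minimal sketch of the kind of record it could hold per participant: their reaction-stage score, their action items, a baseline for each metric, and periodic checkpoints. All class and field names are hypothetical illustrations, not taken from Cockpit or any specific L&D platform.

```python
# A minimal, hypothetical data model for end-to-end participant tracking.
# Field names are illustrative only, not from any real L&D product.
from dataclasses import dataclass, field

@dataclass
class ActionItem:
    goal: str                  # a SMART goal, e.g. "Cut report turnaround to 2 days"
    metric: str                # how the reporting manager measures it
    baseline: float            # value recorded before the program
    checkpoints: dict = field(default_factory=dict)  # month -> measured value

@dataclass
class ParticipantJourney:
    name: str
    reaction_score: float      # feedback score immediately after the program
    action_items: list = field(default_factory=list)

    def progress(self, item_index: int) -> float:
        """Latest measured value minus the baseline for one action item."""
        item = self.action_items[item_index]
        if not item.checkpoints:
            return 0.0
        latest = item.checkpoints[max(item.checkpoints)]
        return latest - item.baseline

# Usage: record a goal with its baseline, then log follow-ups at months 1, 3 and 6
journey = ParticipantJourney(name="A. Kumar", reaction_score=4.8)
journey.action_items.append(
    ActionItem(goal="Raise on-time delivery", metric="% tasks on time",
               baseline=70.0, checkpoints={1: 74.0, 3: 80.0, 6: 85.0})
)
print(journey.progress(0))  # 85.0 - 70.0 = 15.0 percentage points of improvement
```

Even a spreadsheet with these same columns (goal, metric, baseline, monthly checkpoints) gives you the progressive, reaction-to-result view described above.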
- Everything is not quantifiable: While it is important to set SMART goals and track their implementation, not all outcomes can be shown in revenue or numbers. How do you measure higher motivation, proactivity, a positive attitude, or team spirit? These contribute to better performance but cannot be quantified directly.

What you can do about it: If there is an increase in productivity leading to faster outcomes or improved quality of work, convert the time saved or the increased output into a cost or revenue figure. You can also link attributes like lower attrition or better quality of work to the learning program. To make it more tangible, maintain a “before-and-after” survey sheet: take feedback from multiple relevant stakeholders (reporting managers, business leaders, HR leaders, etc.) to record the status quo and find out what has changed after the program.

It is not always about money. Sometimes, as Kirkpatrick and Kirkpatrick suggest, it is a good idea to link effectiveness to expected learning outcomes, replacing ROI with ROE: return on expectations. What does the management want as a learning outcome to bridge performance gaps? What outcome will make them consider the learning initiative a success? This can be the baseline assessment criterion at the time of evaluation. Once you get clarity on the expectations, it becomes that much easier to measure results accordingly.
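As a rough illustration of converting time saved into a monetary figure, the arithmetic above can be sketched in a few lines. Every number here is a hypothetical placeholder, not a benchmark; plug in your own participant count, hourly cost and program cost.

```python
# Back-of-the-envelope ROI estimate for a learning program.
# All figures are hypothetical placeholders, not benchmarks.

participants = 30
hours_saved_per_week = 2   # time saved per participant after the program
hourly_cost = 40           # fully loaded cost per employee-hour
weeks_tracked = 26         # roughly six months of follow-up
program_cost = 45_000      # facilitation, venue, participant time, etc.

# Monetary benefit: time saved converted into a cost figure
benefit = participants * hours_saved_per_week * hourly_cost * weeks_tracked

# Classic ROI formula: net benefit as a percentage of program cost
roi_percent = (benefit - program_cost) / program_cost * 100

print(f"Estimated benefit: {benefit:,}")   # 62,400
print(f"ROI: {roi_percent:.0f}%")          # 39%
```

Even when the inputs are estimates, walking stakeholders through a calculation like this is far more persuasive than a feedback score alone.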
Are there any other challenges that stop you from showing the worth of your learning programs? Write to us and we will be happy to share suggestions.