A survey on the evaluation of leadership development programmes has revealed confusion among learning and development professionals over the use and value of evaluation in measuring the impact of people development initiatives.
The only issue over which there was general agreement was that leadership development programmes can be evaluated – with 88 per cent of respondents agreeing or ‘strongly agreeing’ with this sentiment.
The survey, which was conducted in February by Ceres Management, specialists in improving how organisations measure the impact of people development initiatives, found that:
- 88 per cent of respondents believe that, although leadership development programmes focus on ‘soft skills’, they can still be evaluated.
- Opinion was more or less equally divided over whether organisations want the evaluation of leadership development programmes to measure their impact on business goals.
- Opinion was, again, almost equally divided over the view that leadership development programmes are difficult to evaluate because their goals are rarely defined in precise terms: 44 per cent of respondents agreed with this view, while 52 per cent didn’t.
- Only 35 per cent of respondents believe that relevant stakeholders aren’t involved in planning the evaluation of leadership programmes, while a further ten per cent are ‘unsure’.
- 60 per cent of respondents said leadership development professionals would not feel threatened by an evaluation of how their programmes affect their organisation’s business goals – whereas only 24 per cent felt the opposite.
“The results of this survey indicate that learning and development professionals seem to be unsure about the role of evaluation – even if most believe that it is possible to evaluate any development programme,” commented Dr John O’Connor, of Ceres Management.
“They’re also unsure about whether organisations even want to measure the value of development programmes in terms of their impact on a business’s goals,” he added.
“Of course, this may be because current evaluation techniques make it difficult to achieve this measure. However, a new evaluation model – called Results Assessment (RA) – may help to modify these professionals’ thinking.
“RA could help any learning and development professional who understands the importance of evaluation but who also realises that there’s a gap between what s/he does and what s/he wants to do,” he said.
Designed to link engagement to measuring value and to provide practical ways to move beyond ‘level one’ evaluation, RA aims to demystify evaluation – enabling HR and L&D professionals to become serious ‘business partners’ by aligning training activities to business goals and measuring the results that matter to the business.
It does this collaboratively with key stakeholders: first defining what business success looks like; then aligning programme and evaluation goals with business needs; focusing evaluation effort around clear performance needs; and using results data to make important decisions about programme impact, ongoing investments and so on.
“If an organisation is not currently measuring impact outcomes, it needs to realise that evaluation is possible and easy to do, and that it provides a more integrated approach to business planning,” O’Connor said.
O’Connor has many years’ experience designing and developing engaging and stimulating learning solutions.
He commented: “In their haste to please, L&D professionals can get carried away with what they’re good at – that is, the design or the delivery systems – but can fail to link programme results with what the business wanted to achieve. Thinking in terms of ‘solutions’, we often overlook the need to align our development initiatives to business issues.”