The Performance Gap: Issue 3

Last week I wrote about feedback, specifically about why so much of it makes performance worse rather than better. This week I want to go one level upstream, to the question that precedes feedback entirely: What does it actually take for learning to stick?

The training that feels most effective is usually the least effective

And the training that actually works rarely feels like learning at all.

Here is something every organisation that invests in leadership development should know - and almost none act on.

The way humans learn most comfortably, and the way humans learn most durably, are not the same thing. In fact, they are almost exactly opposite.

The most comfortable learning experience is smooth, clear, logically sequenced, and easy to follow. The content flows. The facilitator is skilled. Participants leave feeling that they have absorbed something. The end-of-programme feedback forms reflect this: high scores for delivery, relevance, and perceived value.

The most durable learning experience is effortful, interrupted, slightly frustrating, and occasionally confusing. Participants are asked to retrieve information before they feel ready to. Material is spaced across sessions rather than concentrated in one. Problems are introduced before solutions. People leave feeling that they have worked harder than they expected to.

The end-of-programme feedback forms for this kind of experience are measurably lower.

And the actual learning (tested weeks or months later) is measurably higher.

What the research shows

This is one of the most robust findings in learning science, documented most accessibly in Peter Brown, Henry Roediger and Mark McDaniel's Make It Stick - a synthesis of decades of cognitive psychology research on what actually produces durable memory and transferable skill.

The core insight is that fluency (the feeling of understanding) and learning (the capacity to use that understanding later, in new contexts, under pressure) are different things. And the conditions that maximise fluency actively suppress the encoding that produces durable learning.

Three specific mechanisms are worth naming because they are both consistently supported by the evidence and consistently absent from conventional leadership programmes.

  1. Spaced practice: Material revisited across time (days or weeks apart) produces significantly more durable retention than the same material covered in one concentrated session. The brain encodes more deeply when it has to reconstruct rather than simply recognise. This directly contradicts the residential programme model, where all content is delivered in a single intensive experience.

  2. Retrieval practice: Being asked to recall information (before you feel fully prepared, without notes) strengthens memory traces in a way that re-reading or re-watching does not. Testing is not just a measurement of learning. It is a mechanism of learning. Most corporate programmes have no retrieval practice whatsoever.

  3. Interleaving: Mixing different types of problems or content within a learning session (rather than practising one type to mastery before moving to the next) produces slower initial progress and faster long-term retention. It forces the learner to discriminate between types of challenge rather than pattern-match to a recently practised solution. It is uncomfortable. It works.

The uncomfortable implication for organisations

Anders Ericsson (the psychologist whose research on expert performance is the most cited in this field) found that deliberate practice, the kind that actually builds capability, is characterised by three things: it pushes the learner to the edge of their current ability, it involves focused feedback at the moment of practice, and it is uncomfortable enough that most people avoid it when given the choice.

The conditions that produce expert performance are not the conditions of a two-day leadership programme with a skilled facilitator, good coffee, and a comfortable hotel.

This is not an argument against residential programmes. It is an argument for designing them differently - for building in the retrieval, the spacing, the difficulty, and the application under real conditions that actually transfer to the workplace. And it is an argument for measuring outcomes at the point that matters: not the day participants leave, but six weeks later in a difficult meeting.

Most organisations are not doing this. Most don't know they're not doing it, because the measurement system they use (participant satisfaction scores) is specifically optimised to tell them the opposite of the truth.

The question for the week

Think about the last leadership development programme your organisation commissioned or attended. How was its effectiveness measured, and at what point? If the answer is a feedback form completed on the last day - what would it mean if that measurement is telling you nothing about whether learning actually occurred?

Next week: Why the assumption that you should identify and develop your most talented people early may be the most expensive mistake in your talent strategy.

Dr Andrew A Walker | Chartered Psychologist | Leadership Coach | andrewantonywalker.com
