Why interventions fail: a guide to common pitfalls in behaviour change
Calorie labelling shows how behaviour change interventions can go wrong and how we can learn to anticipate problems.
Behaviour change interventions don’t always deliver their intended outcomes, and even well-designed efforts can produce unexpected results or fail to achieve meaningful change. Understanding these patterns of failure is critical: not to assign blame, but to learn how to design better interventions.
This article introduces a taxonomy of failure patterns, providing a structured vocabulary for analysing why interventions fail. From "compensatory behaviours" to "environmental barriers," these patterns help explain how unclear or unrealistic goals lead to unintended outcomes.
We’ll revisit the calorie labelling policy in England because it illustrates this clearly: the policy increased awareness of calorie content but didn’t reduce caloric intake, making it a valuable case study in how and why behaviour change can fail, and what we can learn from those outcomes.
The challenge of defining what to change
At the heart of any behaviour change intervention lies a fundamental question: what are we trying to change? Defining the target behaviour is a crucial step, providing a foundation for intervention design and a way to measure success.
Calorie labelling illustrates this challenge. The intervention aimed to:
Reduce caloric intake from out-of-home meals (a tangible, measurable behavioural goal).
Increase awareness and use of calorie information (a proxy measure of progress).
Defining target behaviours is rarely straightforward. As behavioural scientist Laura de Molière explains in "But What’s the Behaviour You’re Trying to Change?", behaviours are shaped by a web of interconnected factors that interact dynamically. Focusing on a single behaviour or barrier risks missing how the broader system shapes outcomes and can lead to incomplete interventions.
Behaviour change also doesn’t follow a straight line: small inputs can lead to dramatic results, while large efforts may fail to create impact. Each person’s journey is unique, and group-level trends often fail to capture individual variability, which makes one-size-fits-all solutions difficult to design.
Despite these challenges, defining target behaviours provides a starting point for understanding why interventions succeed or fail and helps identify gaps in the broader behavioural system.
Using calorie labelling as a case study, we’ll explore common patterns of failure and how they can reveal hidden complexities in real-world behaviour change.
Identifying patterns of failure
Even well-designed behaviour change interventions can fall short. Some fail to change the intended behaviour, while others backfire or create unintended problems. Analysing these patterns of failure helps us anticipate risks and refine strategies before implementation.
A useful tool for this is a taxonomy of failure patterns, offering practitioners a structured way to identify and address why interventions diverge from their goals. Using calorie labelling as an example, here are common failure patterns, grouped into three key categories:
1. No change or the wrong change
No measurable change: The intervention doesn’t affect the target behaviour (e.g., diners notice calorie labels but don’t change their choices due to habits or social influences).
Backfiring effects: The intervention triggers the opposite of its intended outcome (e.g., diners opt for higher-calorie meals to maximise value for money).
2. Partial or offset effects
Proxy outcomes vs. target behaviours: A proxy measure improves, but the ultimate behaviour doesn’t change (e.g., awareness increases, but caloric intake remains the same).
Compensatory behaviours: Positive changes are offset by other behaviours (e.g., choosing a low-calorie entrée but ordering a high-calorie dessert).
Offset effects over time: Initial success is undermined by later behaviours (e.g., healthier choices revert to old habits as the novelty of calorie labels fades).
3. Barriers and external pressures
Environmental mismatch: The context doesn’t support the desired behaviour (e.g., limited low-calorie options prevent diners from acting on their intentions).
Positive side effects without primary success: The intervention fails in its main goal but creates unintended benefits (e.g., calorie labelling inspires food providers to develop healthier menu items).
Counteracting forces: The intervention faces active resistance or opposing pressures (e.g., restaurants promote high-calorie meals to recover lost sales).
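To make the grouping concrete, the taxonomy above can be sketched as a simple lookup structure. This is a hypothetical illustration for practitioners who want to tag observed outcomes against the three categories; the names and function are my own, not code from the taxonomy study.

```python
# Hypothetical sketch: the failure-pattern taxonomy as a lookup table,
# grouped into the three categories described above.
FAILURE_TAXONOMY = {
    "no_change_or_wrong_change": [
        "no_measurable_change",
        "backfiring_effects",
    ],
    "partial_or_offset_effects": [
        "proxy_outcomes_vs_target_behaviours",
        "compensatory_behaviours",
        "offset_effects_over_time",
    ],
    "barriers_and_external_pressures": [
        "environmental_mismatch",
        "positive_side_effects_without_primary_success",
        "counteracting_forces",
    ],
}

def category_of(pattern: str) -> str:
    """Return the category a given failure pattern belongs to."""
    for category, patterns in FAILURE_TAXONOMY.items():
        if pattern in patterns:
            return category
    raise KeyError(f"Unknown failure pattern: {pattern}")

# e.g. tagging one of the calorie-labelling findings:
print(category_of("proxy_outcomes_vs_target_behaviours"))
# partial_or_offset_effects
```

Tagging each observed outcome this way makes it easier to see which categories an evaluation has evidence for and which remain unexamined.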
The findings from the calorie labelling study in England offer a clear illustration of these failure patterns:
No measurable change: Average caloric intake per meal actually increased slightly, from 1,000 to 1,080 calories. This change was not statistically significant, indicating that calorie labelling didn’t reduce caloric intake as intended.
Proxy outcomes vs. target behaviours: Awareness of calorie information improved, with more diners noticing and using calorie labels, but this didn’t translate into behaviour change.
Environmental mismatch: Even motivated diners may have been limited by menu options, with appealing lower-calorie choices unavailable or less prominent.
These outcomes demonstrate how an intervention’s effects can diverge from its goals, aligning with several categories in the taxonomy. By analysing these patterns, we can better understand why the intervention fell short and how future strategies might address similar challenges.
Anticipating and mitigating failures
Designing effective behaviour change interventions isn’t just about identifying what to change—it’s about understanding how and why interventions might fail. Tools like pre-mortem analysis and the IN CASE framework provide structured ways to uncover potential risks and refine strategies before implementation.
Pre-mortem analysis, as proposed in the taxonomy of failure study, involves asking high-level exploratory questions to identify vulnerabilities in an intervention’s design:
What factors could be causally relevant to the success of the intervention?
How could the intervention influence these factors?
What precautionary measures should be taken to avoid failure?
The IN CASE framework, developed by the Cabinet Office Behavioural Science Team, provides a systematic way to evaluate unintended consequences, offering insights directly applicable to calorie labelling.
Non-target audiences: Calorie labelling might inadvertently influence children dining with their parents. This could increase their focus on calorie counting at an early age, potentially fostering unhealthy relationships with food.
Emotional impact: For some diners, calorie labels might evoke guilt or shame, leading to negative associations with eating out and disengagement from the intervention altogether.
By combining the broad focus of pre-mortem analysis with the specific dimensions of IN CASE, practitioners can design interventions that are more resilient and adaptable. These tools not only help anticipate failure but also ensure that interventions align better with the complexities of real-world behaviour.
Why understanding failure patterns matters
Failure is often seen as the end of the story for behaviour change interventions, but it shouldn’t be. Recognising and analysing failure patterns provides valuable insights that can strengthen future interventions. Rather than dismissing an approach as ineffective, understanding why it failed helps us refine strategies and identify overlooked opportunities.
The calorie labelling study shows how this kind of analysis can offer broader lessons:
It moves beyond binary judgments: Labelling wasn’t a "success" or "failure"; it improved awareness (a proxy outcome) but didn’t reduce caloric intake (the target behaviour). This nuance matters when evaluating interventions.
It highlights underlying barriers: The study suggests that knowledge alone isn’t enough to change behaviour. Habits, environmental constraints, and competing motivations must be addressed for interventions to be effective.
It encourages iteration: Recognising failure modes like environmental mismatch or compensatory behaviours enables us to adjust the design of interventions, layering additional strategies to address these barriers.
Understanding failure isn’t just about learning from past interventions—it’s about building a mindset that anticipates risks during the design phase. Combining tools like the taxonomy of failure patterns with frameworks like IN CASE equips practitioners to navigate complexity and refine interventions before they’re implemented.
Treating failure as a learning opportunity rather than a verdict turns it into a critical tool for crafting smarter, more adaptable interventions.
What kinds of studies were included in the analysis the taxonomy was based on?
Of the 65 studies that were compiled, 58% included a field experiment, and 75% of all studies also included a control or baseline condition to compare the behavioural interventions against. Common domains in which the interventions were trialled included charitable donations (13%), tax compliance (8%), health (diet or exercise; 25%), and pro-environmental behaviour (28%).
The 65 studies utilised several types of interventions, notably defaults (15%), social comparisons and social norming (40%), labelling (12%), and provision of information delivered through letters or text messaging (24%).
More details at: https://psyarxiv.com/ae756
Osman, M., McLachlan, S., Fenton, N., Neil, M., Löfstedt, R., & Meder, B. (2020). Learning from behavioural changes that fail. Trends in Cognitive Sciences.
de Molière, L. (2024). '"But what’s the behaviour you’re trying to change?" What applied behavioural science gets wrong under complexity', Frontline BeSci, February 2024.
Emery, A., Molière, L., Lang, P., Nicolson, M., & Prince, E. (2021). IN CASE: A behavioural approach to anticipating unintended consequences. (UK Cabinet Office Behavioural Science Team)