Behavior change models are theoretical frameworks that explain why people do what they do, and how to get them to do something different. They are the operating systems of behavioral science, and there are a lot of them. A 2015 scoping review by Rachel Davis and colleagues identified 82 distinct theories of behavior change across psychology, sociology, anthropology, and economics. Of those 82, just four account for 63% of all published research.
Most articles comparing these models give you a paragraph on each and call it a day. This one doesn’t. I’ve gone through the primary literature, the meta-analyses, and the critiques for every major model. For each, you’ll get the origin story, the core components, the actual effect sizes from meta-analyses, the named criticisms, and an honest assessment of when to use it and when not to.
If you’re a practitioner trying to design an intervention, a student trying to understand the field, or just someone who’s been told to “use a behavior change model” and doesn’t know where to start, this is the guide. (For our complete guide to behavior change itself, see our behavior change guide.)
What Is a Behavior Change Model?
A behavior change model is a formal framework that identifies the key factors influencing human behavior and specifies how those factors interact to produce (or prevent) behavior change. Some models focus on individual cognition (what people think and believe). Others focus on motivation (why people want to change). Others focus on environment and context (the structures around people that make behavior easier or harder).
No single model captures everything. The field has been trying for 50 years, and the result is a fragmented landscape where different models emphasize different pieces of the puzzle. The smartest approach is to understand what each model does well, what it ignores, and when to reach for it.
Here’s the honest comparison.
How I’m Comparing These Models
For each model, I cover:
- What it is: the core components and how they relate
- The evidence: specific meta-analyses with effect sizes, sample sizes, and named researchers
- What it gets right: genuine strengths
- What it gets wrong: named criticisms and evidence gaps
- When to use it: the contexts where it actually helps
I’ve organized the 16 models into five categories based on what they emphasize:
| Category | Models | Core Focus |
|---|---|---|
| Individual cognition | TPB, HBM, SCT, PMT | What people think and believe |
| Motivation and stages | TTM, SDT, IMB | Why people change (or don’t) |
| Intervention design | COM-B/BCW, Fogg B=MAP, BCT Taxonomy, PRIME | How to design interventions |
| Environment and policy | Nudge, EAST, MINDSPACE, Ecological | How context shapes behavior |
| Integrative | Behavioral State Model | Person-behavior fit across all drivers |
Part 1: Individual Cognition Models
These models assume that behavior change starts with changing what people think. They share a common logic: if you change someone’s beliefs, attitudes, or expectations, behavior change will follow. The evidence says this is partly right, but with a ceiling that most popular accounts ignore.
Theory of Planned Behavior (Ajzen, 1991)
The Theory of Planned Behavior is the most tested model in behavioral science. Psychologist Icek Ajzen developed it in 1991 as an extension of his earlier Theory of Reasoned Action (1975, with Martin Fishbein). The core idea: behavior is driven by intentions, and intentions are shaped by three factors.
Core components:
- Attitudes: Do I evaluate this behavior positively or negatively?
- Subjective norms: Do the people around me think I should do this?
- Perceived behavioral control (PBC): Do I believe I can actually do this?
These three feed into intentions, which drive behavior.
The evidence:
Christopher Armitage and Mark Conner published the definitive meta-analysis in 2001, covering 185 independent studies. The TPB accounted for 27% of variance in behavior and 39% of variance in intentions. When behavior was self-reported, the model explained 31% of variance. When measured objectively? Just 21%.
Rana McEachan and colleagues published a stricter meta-analysis in 2011, limited to prospective studies (where behavior was measured after the TPB survey, not simultaneously). The numbers dropped. The intention-behavior correlation was 0.43. The model explained 21-24% of variance in dietary and exercise behaviors. For follow-ups longer than 5 weeks, prediction dropped to 16-18%.
The largest meta-analytic dataset on intentions (422 studies, 82,107 participants) found that intentions explain about 28% of behavioral variance. A medium-to-large change in intention (d=0.66) produces only a small-to-medium change in behavior (d=0.36).
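A quick note on how these two styles of number relate, because the literature switches between correlations and “variance explained.” The variance a single predictor explains is just its squared correlation:

```latex
r_{\text{intention} \rightarrow \text{behavior}} = 0.43
  \quad\Rightarrow\quad r^{2} \approx 0.18 \quad (\approx 18\%\ \text{of behavioral variance})

\text{Running Sheeran's 28\% figure in reverse:}\quad r = \sqrt{0.28} \approx 0.53
```

McEachan’s model-level 21-24% sits above intention’s ~18% largely because perceived behavioral control carries a direct path to behavior in the TPB, on top of its path through intentions.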
What it gets right: The TPB is genuinely useful for understanding what people intend to do. If you need to predict whether a population will adopt a new behavior, measuring attitudes, norms, and perceived control will get you reasonable predictions.
What it gets wrong: The model assumes that changing intentions changes behavior. The data says otherwise. This is called the intention-behavior gap, and it’s one of the most robust findings in the field. Roughly 47% of people who form a behavioral intention fail to act on it (Sheeran, 2002). Peter Gollwitzer’s implementation intentions (“if-then” plans that specify when, where, and how to act) were developed specifically to bridge this gap. Early meta-analyses reported d=0.65 (Gollwitzer & Sheeran, 2006), but recent meta-analyses with stricter inclusion criteria and bias correction put the effect closer to d=0.15-0.31, still useful but far more modest than the original claims. The TPB also ignores habits, emotions, impulsive behavior, and environmental constraints. Subjective norms are consistently the weakest predictor across studies.
When to use it: Predicting behavioral intentions. Understanding what a population thinks about a target behavior. Not for designing interventions to change complex behavior.
Health Belief Model (Rosenstock, 1966)
The Health Belief Model is the oldest formal model of health behavior. It grew out of 1950s work at the U.S. Public Health Service on why people weren’t using free tuberculosis screening; Irwin Rosenstock formalized it in his 1966 paper. Marshall Becker expanded it in 1974. Self-efficacy was added later.
Core components:
- Perceived susceptibility: How likely am I to get this condition?
- Perceived severity: How bad would it be?
- Perceived benefits: Will the recommended action help?
- Perceived barriers: What will it cost me (time, money, pain, effort)?
- Cues to action: What triggers me to act? (symptoms, doctor’s advice, news)
- Self-efficacy: Can I actually do this? (added later)
The evidence:
Christopher Carpenter published a meta-analysis in 2010 covering 18 longitudinal studies with 2,702 subjects. The results were not encouraging. Effect sizes were generally low across all HBM variables. Barriers were the strongest predictor, with benefits second. Susceptibility was near zero in most contexts. Severity was weak.
Nancy Janz and Marshall Becker’s 1984 review of 46 studies reached the same conclusion: barriers are the most powerful predictor, and the model’s overall predictive power is modest.
What it gets right: The focus on perceived barriers is genuinely useful. If you want to know why someone isn’t doing a health behavior, the barriers they perceive are the single best place to look.
What it gets wrong: Susceptibility and severity, the two “threat” components, are consistently weak predictors. This undermines the model’s central premise that threat perception drives behavior. The model also assumes rational cost-benefit analysis, ignores social influence and habits, and provides no mechanism for how the constructs interact with each other.
When to use it: Identifying perceived barriers to a specific health behavior. Designing health communications about new or unfamiliar threats. Not as a comprehensive intervention design framework.
Social Cognitive Theory (Bandura, 1986)
Social Cognitive Theory is Albert Bandura’s comprehensive theory of human functioning, developed in his 1986 book “Social Foundations of Thought and Action.” At its center is a concept Bandura introduced in 1977 that has become the single most reliable predictor of behavior change across all models: self-efficacy.
Core components:
- Self-efficacy: Belief in one’s capability to perform a behavior (the star of the show)
- Outcome expectations: What do I expect to happen if I do this? (physical, social, self-evaluative consequences)
- Observational learning: Learning by watching others (modeling)
- Reciprocal determinism: Person, behavior, and environment all influence each other bidirectionally
- Self-regulation: Self-monitoring, goal-setting, self-reward
The evidence:
Alexander Stajkovic and Fred Luthans’s 1998 meta-analysis found that self-efficacy predicted work-related performance with correlations ranging from r=.30 to r=.45 depending on domain. A meta-meta-analysis synthesized 13 meta-analyses covering 536 effect sizes with a total sample of 421,880 participants. The average effect size was r=.38, a medium-to-large effect, and self-efficacy predicted behavior across health, work, and academic domains.
Self-efficacy is built from four sources, in order of potency: (1) mastery experience (actually succeeding at the behavior), (2) vicarious experience (watching someone similar succeed), (3) verbal persuasion (being told you can do it), and (4) physiological states (how you interpret arousal and stress).
What it gets right: Self-efficacy works. It’s the single most consistent predictor of behavior change across all models, all domains, and all populations. Any behavior change intervention that builds genuine competence and confidence is doing something right.
What it gets wrong: As a complete model, SCT is very broad and hard to operationalize. “Reciprocal determinism” sounds elegant but is essentially circular: everything affects everything. This makes it hard to test as a whole and hard to use as an intervention design guide. In practice, SCT is less a design framework and more a collection of useful constructs, especially self-efficacy, that other models borrow freely.
When to use it: Any time self-efficacy is the bottleneck. Training programs, skill-building interventions, coaching, mentoring. Use the self-efficacy construct even if you don’t use the full model.
Protection Motivation Theory (Rogers, 1975)
Protection Motivation Theory was developed by Ronald Rogers in 1975 to explain how fear appeals work. He revised it in 1983 to add coping appraisal. The model is essentially a structured version of two questions: “How bad is the threat?” and “Can I do anything about it?”
Core components:
Threat appraisal:
- Perceived severity (how bad is it?)
- Perceived vulnerability (how likely am I to be affected?)
Coping appraisal:
- Response efficacy (will the recommended action work?)
- Self-efficacy (can I do it?)
- Response costs (what will it cost me?)
The evidence:
Donna Floyd, Steven Prentice-Dunn, and Ronald Rogers published the first comprehensive meta-analysis in 2000, covering 65 studies with approximately 30,000 participants across 20+ health issues. The overall effect size was d=0.52 (moderate). All PMT components were significant: increases in severity, vulnerability, response efficacy, and self-efficacy all facilitated protective behavior. Self-efficacy and response efficacy were the strongest predictors in the coping appraisal pathway.
What it gets right: PMT provides a clear, structured way to design health communications. If you’re creating a message about a health threat, the model tells you exactly what to address: make the threat feel real, show that the recommended response works, and build confidence that people can do it.
What it gets wrong: The model is limited to threat-based motivation. It can’t explain positive behavior change that isn’t fear-driven (exercise for enjoyment, healthy eating for taste). It also shares the rationality assumption with HBM and TPB. The evidence for actual behavior change (vs. intentions) is weaker.
When to use it: Health risk communication. Designing messages about specific health threats. Understanding why people do or don’t protect themselves. Not for positive motivation or complex behavior change.
Part 2: Motivation and Stage Models
These models focus on why people change, not just what they think. They address the quality and dynamics of motivation itself.
Transtheoretical Model / Stages of Change (Prochaska & DiClemente, 1983)
The Transtheoretical Model is the most widely used behavior change model in clinical practice and the most heavily criticized in the academic literature. James Prochaska and Carlo DiClemente developed it in 1983 by studying how smokers quit on their own, aiming to integrate change processes across therapeutic orientations (hence “transtheoretical”).
Core components:
Five stages of readiness:
- Precontemplation: Not thinking about changing
- Contemplation: Thinking about it but not ready
- Preparation: Planning to act soon
- Action: Currently making changes
- Maintenance: Sustaining change for 6+ months
Plus: 10 processes of change, decisional balance (pros vs. cons), self-efficacy, and temptation.
The evidence:
The TTM is the most cited model in the field, appearing in 33% of all behavior change research articles (Davis et al., 2015). In Thomas Webb’s 2010 meta-analysis of internet health interventions, TTM-based interventions produced an effect of d=0.20, compared to d=0.36 for TPB-based interventions and d=0.15 for SCT-based ones.
The criticism (and it’s substantial):
Robert West published “Time for a change: putting the Transtheoretical Model to rest” in Addiction in 2005. His argument: the stages are arbitrary and unstable, the model ignores the biology of motivation (reward, habit, associative learning), and stage-matched interventions show no consistent advantage. West explicitly called for the model’s abandonment.
Julia Littell and Heather Girvin reviewed 87 studies in 2002 and found that the stages are not mutually exclusive, there is “scant evidence of sequential movement through discrete stages,” and “practical utility is limited by concerns about the validity of stage assessments.”
Colin Bridle and colleagues published a systematic review in 2005 finding no consistent evidence that stage-matched interventions outperform non-stage-matched interventions.
DiClemente responded in the same issue of Addiction, calling West’s critique “a premature obituary.” The debate continues, but the weight of evidence is not favorable to the TTM as an intervention framework.
What it gets right: The insight that change isn’t all-or-nothing is genuinely useful. People are at different levels of readiness, and acknowledging this in clinical conversations helps build rapport.
What it gets wrong: Almost everything about the formal model. The stages aren’t discrete categories. People don’t move through them sequentially. Stage-matched interventions don’t outperform alternatives. The model ignores automatic behavior, habits, and environmental context. It overemphasizes conscious decision-making. Robert West called the model’s central predictions “not supported” by evidence.
When to use it: As a clinical conversation tool to gauge readiness. Not as a formal intervention design framework. If someone insists on “stage-matching,” know that the evidence doesn’t support it.
Self-Determination Theory (Deci & Ryan, 1985)
Self-Determination Theory, developed by Edward Deci and Richard Ryan at the University of Rochester, asks a question the other models skip: not just “will this person change?” but “what kind of motivation is driving the change?” The answer matters enormously for whether behavior lasts.
Core components:
Three basic psychological needs:
- Autonomy: Feeling that your behavior is self-chosen
- Competence: Feeling effective and capable
- Relatedness: Feeling connected to others
The motivation continuum (from least to most self-determined):
- Amotivation → External regulation → Introjected regulation → Identified regulation → Integrated regulation → Intrinsic motivation
The critical distinction: autonomous motivation (identified, integrated, intrinsic) versus controlled motivation (external, introjected). Autonomous motivation predicts lasting change. Controlled motivation predicts short-term compliance that evaporates when the external pressure disappears.
The evidence:
Johan Ng and colleagues published a meta-analysis in 2012 covering 184 independent health datasets. The overall correlation between autonomous motivation and health behaviors was r=.26, a small-to-medium effect. Autonomy support from practitioners predicted satisfaction of all three needs: autonomy (β=.41), competence (β=.33), relatedness (β=.47). Competence was the strongest predictor of autonomous motivation (β=.35). The effects on actual health outcomes were small: autonomous motivation → physical health (β=.11) and psychological health (β=.06).
Nikos Ntoumanis and colleagues meta-analyzed 73 SDT-informed experimental studies in health in 2021. Effect sizes were small to medium. One surprising finding: competence-support techniques (like identifying barriers and developing plans) sometimes yielded smaller effects when offered before autonomous motivation was established, possibly because they felt externally controlling.
What it gets right: The autonomous vs. controlled motivation distinction is one of the most practically important insights in the field. External incentives (rewards, punishments, social pressure) can produce compliance, but the behavior often stops when the incentive stops. Worse, external rewards can actively undermine intrinsic motivation, a phenomenon called the overjustification effect. Edward Deci, Richard Koestner, and Richard Ryan’s 1999 meta-analysis found that tangible rewards reduced intrinsic motivation with an effect size of d=-0.36. Pay someone to do something they already enjoy, and they enjoy it less once the payment stops. Building autonomous motivation produces more durable change. This has major implications for coaching, healthcare, parenting, and management.
What it gets wrong: The path from motivation to actual health outcomes is weaker than the path from motivation to intentions. Effect sizes for physical health outcomes are small (β=.11). The theory is also relatively complex, with six types of motivation and three basic needs, which makes it harder to apply than simpler models.
When to use it: Understanding why someone is (or isn’t) motivated. Training healthcare providers, coaches, or managers in autonomy-supportive communication. Designing interventions where motivation quality matters more than motivation quantity. Long-term behavior maintenance.
Information-Motivation-Behavioral Skills Model (Fisher & Fisher, 1992)
The IMB model was developed by Jeffrey and William Fisher in 1992, originally to explain and improve HIV prevention behavior. It’s the simplest of the major models: three constructs, clear causal pathways, and a direct translation to intervention design.
Core components:
- Information: Knowledge about the behavior and the condition
- Motivation: Personal attitudes and social norms about the behavior
- Behavioral skills: Objective ability and perceived self-efficacy to perform the behavior
Information and motivation work primarily through behavioral skills to produce behavior change, though information and motivation can also directly affect behavior.
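Because the pathways are the model’s whole content, a toy simulation makes “works primarily through behavioral skills” concrete. Every coefficient below is invented for illustration; the IMB model specifies the paths, not their magnitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

information = rng.normal(size=n)
motivation = rng.normal(size=n)

# Information and motivation build behavioral skills...
skills = 0.3 * information + 0.4 * motivation + rng.normal(scale=0.8, size=n)

# ...and behavior runs mostly through skills, with weak direct paths.
behavior = (0.6 * skills + 0.1 * information + 0.1 * motivation
            + rng.normal(scale=0.8, size=n))

print("indirect effect of motivation via skills:", 0.4 * 0.6)  # 0.24
print("direct effect of motivation:", 0.1)
print("corr(skills, behavior):", round(float(np.corrcoef(skills, behavior)[0, 1]), 2))
```

If reality looks anything like this, information-only interventions push on the weakest path in the model, which is exactly what the evidence below suggests.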
The evidence:
The IMB accounts for 7% of behavior change research articles (Davis et al., 2015), primarily in HIV prevention, medication adherence, and diabetes management. Structural equation modeling studies generally support the model’s pathways, though the “information” component is often the weakest predictor, with behavioral skills (particularly self-efficacy) being the strongest.
What it gets right: Radical simplicity. Three constructs, three intervention components. If people don’t know what to do, educate them. If they aren’t motivated, address attitudes and norms. If they can’t do it, build skills and self-efficacy. The direct mapping from assessment to intervention is elegant.
What it gets wrong: It’s probably too simple. The model doesn’t account for environmental constraints, habit, emotional regulation, or systemic barriers. Information is consistently the weakest predictor, which suggests knowledge gaps are rarely the actual bottleneck. The model was built for a specific problem (HIV prevention) and may not transfer well to complex, sustained behavior changes.
When to use it: Health education, medication adherence, sexual health, any domain where a clean assessment-to-intervention mapping is useful. Best for behaviors where information and skills really are the bottleneck.
Part 3: Intervention Design Frameworks
These aren’t just models of behavior. They’re tools for building interventions. They answer the practitioner’s question: “I understand the problem. Now what do I actually do?”
COM-B + Behaviour Change Wheel (Michie, van Stralen & West, 2011)
The COM-B model and the Behaviour Change Wheel represent the most ambitious attempt to unify the field. Susan Michie, Maartje van Stralen, and Robert West developed them by systematically reviewing 19 existing frameworks and finding that none covered the full range of intervention types and policy options. Published in Implementation Science in 2011, the BCW is essentially a meta-framework built on top of all the others.
Core components:
COM-B (the behavioral diagnosis):
- Capability: Physical (skills, stamina) + Psychological (knowledge, cognitive skills)
- Opportunity: Physical (environment, resources, time) + Social (norms, social influence, cultural expectations)
- Motivation: Reflective (conscious beliefs, plans, intentions) + Automatic (habits, emotions, impulses)
All six components interact to produce Behavior.
The Behaviour Change Wheel (the intervention toolkit):
- 9 intervention functions: Education, Persuasion, Incentivisation, Coercion, Training, Restriction, Environmental restructuring, Modelling, Enablement
- 7 policy categories: Communication/marketing, Guidelines, Fiscal measures, Regulation, Legislation, Environmental/social planning, Service provision
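To make the diagnosis-to-toolkit step concrete, here is a minimal sketch of how a COM-B assessment narrows the nine intervention functions. The mapping is my approximate rendering of the linkage matrix in Michie et al. (2011); treat it as illustrative and check the original before using it:

```python
# Approximate COM-B -> intervention-function links (illustrative only;
# verify against the matrix in Michie, van Stralen & West, 2011).
BCW_MAP = {
    "physical_capability": ["Training", "Enablement"],
    "psychological_capability": ["Education", "Training", "Enablement"],
    "physical_opportunity": ["Restriction", "Environmental restructuring",
                             "Enablement"],
    "social_opportunity": ["Restriction", "Environmental restructuring",
                           "Modelling", "Enablement"],
    "reflective_motivation": ["Education", "Persuasion", "Incentivisation",
                              "Coercion"],
    "automatic_motivation": ["Persuasion", "Incentivisation", "Coercion",
                             "Training", "Environmental restructuring",
                             "Modelling", "Enablement"],
}

def candidate_functions(deficits: list[str]) -> set[str]:
    """Union of intervention functions linked to the diagnosed deficits."""
    return set().union(*(BCW_MAP[d] for d in deficits))

# A behavior blocked by skill gaps and unsupportive peers:
print(candidate_functions(["psychological_capability", "social_opportunity"]))
```

The point isn’t the code; it’s that the BCW turns “design an intervention” into “diagnose the deficit, then choose from the functions linked to it.”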
The evidence:
The BCW doesn’t have a single effect size because it’s a meta-framework, not a predictive model. Its evidence base is the evidence base of the 19 frameworks it synthesizes. It has been adopted by the UK NHS, Public Health England, the WHO, and numerous national governments. It was used extensively to design COVID-19 behavior change interventions globally.
What it gets right: Comprehensiveness. COM-B forces you to consider capability, opportunity, AND motivation, not just one piece. The BCW then links each COM-B deficit to specific intervention types and policy options. This is the closest thing the field has to a systematic intervention design process.
What it gets wrong: Complexity and circularity. Using the full BCW process properly requires training and time. The link from COM-B assessment to specific intervention functions involves professional judgment, not a mechanical algorithm. Some practitioners find it overwhelming. Critics note that the categories can overlap (is “social modeling” an intervention function or a motivation source?) and that the framework tells you what type of intervention to use but not the specific content. There’s a deeper problem: COM-B is arguably tautological. Saying behavior requires capability, opportunity, and motivation is true by definition, but it doesn’t specify which capabilities, which opportunities, or which motivational processes matter for any given behavior. It identifies categories but doesn’t predict effect sizes or specify mechanisms.
When to use it: Complex, multi-level behavior change challenges. Intervention design at organizational or policy level. When you need to be systematically comprehensive. Pair it with the BCT Taxonomy for specific technique selection.
Fogg Behavior Model / B=MAP (BJ Fogg, 2009)
BJ Fogg, a Stanford communication researcher, introduced his behavior model in 2009 at a Persuasive Technology conference. He updated it in his 2019 book “Tiny Habits.” The model has been enormously influential in Silicon Valley, product design, and the tech industry, while generating skepticism in academic behavioral science.
Core components:
B = MAP: Behavior happens when Motivation, Ability, and a Prompt converge at the same moment. (Fogg writes it like an equation, but he is explicit that it describes convergence, not literal addition.)
- Motivation: How much do you want to do it? (pleasure/pain, hope/fear, social acceptance/rejection)
- Ability: How easy is it? (time, money, physical effort, mental effort, routine disruption)
- Prompt: What triggers the behavior at the right moment?
All three must converge simultaneously. If any one is missing, the behavior doesn’t occur. The “action line” represents the threshold where motivation × ability is sufficient. The Tiny Habits method reduces ability demands to near-zero (start with 2 pushups, not 50) and anchors new behaviors to existing prompts.
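A minimal sketch of that convergence logic, using the article’s motivation × ability shorthand (the 0-10 scales and the single threshold are my simplifications; Fogg draws the action line as a curve, not a formula):

```python
def behavior_occurs(motivation: float, ability: float, prompt: bool,
                    action_line: float = 10.0) -> bool:
    """Fogg-style convergence check (illustrative simplification).

    motivation and ability are treated as 0-10 scales (my assumption).
    A prompt only works if the person sits above the action line, where
    low ability can be offset by high motivation and vice versa.
    """
    if not prompt:
        return False  # no prompt, no behavior, regardless of motivation
    return motivation * ability >= action_line

print(behavior_occurs(motivation=2, ability=2, prompt=True))   # False: too hard
print(behavior_occurs(motivation=2, ability=9, prompt=True))   # True: made tiny
print(behavior_occurs(motivation=9, ability=9, prompt=False))  # False: no prompt
```

The Tiny Habits move is visible in the second call: motivation never changed, only ability did.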
The evidence:
A 2025 scoping review in BMC Public Health systematically reviewed the Fogg model in health behavior change interventions, finding that the “ability” component was the most commonly addressed. The model has been primarily validated in product design and commercial contexts rather than academic health behavior research. Peer-reviewed intervention studies using the full model are limited.
What it gets right: The focus on reducing friction (ability) is genuinely useful. Most behavior change models overemphasize motivation and underemphasize how hard the behavior is to perform. Fogg correctly argues that making the behavior easier is often more effective than trying to increase motivation. The Tiny Habits approach (start absurdly small) has practical value for initiation.
What it gets wrong: The model assumes a single moment of behavior, which makes it poorly suited for sustained, complex behaviors like exercise routines or dietary changes. It ignores social and environmental context, identity, and the distinction between initiating a behavior and maintaining it. As one 2025 review noted, it “underplays unconscious and environmental factors.” The limited peer-reviewed evidence is a real weakness compared to models like the TPB or COM-B.
When to use it: Product design and UX. Simple, discrete behaviors. Getting someone started (first steps). Not for sustained, complex behavior change programs.
BCT Taxonomy v1 (Michie et al., 2013)
The BCT Taxonomy isn’t a model or theory. It’s a classification system: a standardized vocabulary for describing exactly what an intervention does. Susan Michie and colleagues developed it through a Delphi process with 14 international experts, drawing from 124 techniques across 6 existing classification systems.
Core components:
93 distinct behavior change techniques organized into 16 groupings, including:
- Goals and planning (goal setting, action planning, problem solving)
- Feedback and monitoring (self-monitoring, feedback on behavior/outcomes)
- Social support (practical, emotional, unspecified)
- Shaping knowledge (instruction, information about consequences)
- Natural consequences (information about health, emotional, social consequences)
- Comparison of behavior (demonstration, social comparison)
- Associations (prompts/cues, habit formation/reversal)
- Repetition and substitution (behavioral practice, habit formation)
- Reward and threat (material incentive, social reward, self-reward)
- Regulation (pharmacological support, reducing negative emotions)
- Self-belief (verbal persuasion about capability)
The evidence:
Meta-analyses using the BCT Taxonomy to identify effective components have found:
- Physical activity: Goal setting, graded tasks, self-monitoring of behavior, and social incentives show significant effects
- Healthy eating: Self-monitoring combined with feedback and goal setting works best
- Interventions combining self-monitoring with control theory BCTs (goal setting + feedback) are more effective than either alone
What it gets right: Precision. Before the BCT Taxonomy, researchers described interventions vaguely (“counseling,” “health education,” “motivational interviewing”) with no way to compare what they actually contained. The taxonomy enables precise description, which enables precise comparison, which enables meta-analyses to identify what actually works.
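Part of that precision is mechanical: once interventions are coded as sets of numbered techniques, comparing their contents is trivial. A minimal sketch (the codes follow BCTTv1 as best I recall them; verify against the published taxonomy before relying on them):

```python
# Two interventions described as sets of BCTTv1 codes (verify codes
# against Michie et al., 2013 before reuse).
walking_app = {
    "1.1 Goal setting (behavior)",
    "2.2 Feedback on behaviour",
    "2.3 Self-monitoring of behaviour",
    "7.1 Prompts/cues",
}
group_program = {
    "1.1 Goal setting (behavior)",
    "2.3 Self-monitoring of behaviour",
    "3.2 Social support (practical)",
    "6.2 Social comparison",
}

# Shared active ingredients -- the comparison that vague labels like
# "counseling" made impossible. (Set ordering may vary when printed.)
print(walking_app & group_program)
```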
What it gets wrong: Ninety-three techniques is overwhelming. The taxonomy tells you what techniques exist but not which to use for your specific problem. It’s a catalogue, not a decision tool. It requires training to code reliably. And it’s atheoretical: it describes intervention content without specifying the causal mechanisms through which techniques produce change.
When to use it: Alongside COM-B/BCW for selecting specific techniques. When designing intervention protocols. When reporting intervention content in research. Not as a standalone design guide.
PRIME Theory (Robert West, 2006)
PRIME Theory is Robert West’s attempt to build a general theory of motivation that accounts for what the classical models ignore: impulse, habit, and the fact that people frequently act against their own stated plans. West developed it in his 2006 book “Theory of Addiction” as an alternative to the TTM, which he had publicly critiqued.
Core components (hierarchical):
- Plans: Conscious representations of future actions plus commitment
- Responses: Starting, stopping, or modifying actions
- Impulses/Inhibitory forces: Experienced as urges
- Motives: Experienced as desires (wants and needs)
- Evaluations: Evaluative beliefs about what is good or bad
The hierarchy matters: plans can generate motives, but impulses can override plans. This explains why someone can plan to quit smoking, genuinely believe smoking is harmful, and still light a cigarette when stressed. The lower-level system (impulses) overrides the higher-level system (plans).
The evidence:
PRIME Theory underpins England’s national stop-smoking services and has influenced UK addiction treatment policy. West’s broader work on smoking cessation is extensively cited. The theory has less standalone meta-analytic evidence compared to TPB or SCT, but it integrates findings from neuroscience, behavioral economics, and learning theory that the classical models ignore.
What it gets right: It takes seriously the fact that human motivation is not purely rational. By including impulses and automatic processes in the formal model, PRIME Theory can explain phenomena (relapse, impulsive behavior, action against stated intentions) that the TPB and HBM cannot.
What it gets wrong: Less empirical testing than established models. Primarily applied in addiction and smoking cessation, with less evidence in other behavior change domains. The hierarchical structure, while conceptually elegant, can be difficult to translate into specific intervention design steps.
When to use it: Addiction and substance use. Understanding why people act against their own plans. When automatic/impulsive processes are clearly important.
Part 4: Environment and Policy Frameworks
These frameworks shift focus from what’s happening inside the person to what’s happening around them. They argue, with good evidence, that the easiest way to change behavior is often to change the context.
Nudge Theory / Choice Architecture (Thaler & Sunstein, 2008)
Economist Richard Thaler (Nobel Prize, 2017) and legal scholar Cass Sunstein published “Nudge” in 2008, introducing the idea of “libertarian paternalism”: you can steer people toward better choices while preserving their freedom to choose. The tool is choice architecture, changing how options are presented rather than changing people’s minds.
Core components:
- Defaults: Pre-selected options (opt-in vs. opt-out)
- Framing: How information is presented
- Social norms: Highlighting what others do
- Salience: Making key information prominent
- Simplification: Reducing friction in choice processes
The evidence:
Dennis Hummel and Alexander Maedche reviewed 100 nudging publications with 317 effect sizes in 2019. Effectiveness varied substantially by nudge type and domain.
Stephanie Mertens and colleagues published a larger meta-analysis in PNAS in 2022: 200+ studies, 450+ effect sizes, N=2,149,683. The overall effect was d=0.43 (small to medium).
But here’s the catch. The Mertens meta-analysis was immediately challenged for severe publication bias. Maximilian Maier and colleagues, also in PNAS, applied bias-correction methods and found the corrected effect dropped to d=0.01-0.02, effectively zero. Mertens had to issue corrections for retracted studies (Shu et al., 2012), coding errors, and erroneous values. This doesn’t mean nudges don’t work in specific contexts. It means the average effect across the published literature is almost certainly inflated, and the true average effect size may be negligible.
Real-world applications have produced dramatic results in specific contexts: pension auto-enrollment increased participation from roughly 40% to over 90% (Thaler and Benartzi’s Save More Tomorrow program extends the same default logic to automatic contribution increases). Behavioral Insights Team tax letter redesigns produced 17% higher response rates.
One commonly cited example deserves scrutiny. The Johnson and Goldstein (2003) organ donation study is presented in nearly every nudge summary as proof that opt-out defaults dramatically increase donation rates. The reality is more complex. A comprehensive analysis found no significant difference in actual organ donation rates between opt-in and opt-out countries. Spain, the world leader in organ donation, achieved its results through transplant coordinator infrastructure and hospital protocols, not through its opt-out default. The default changed registration rates on paper, not donation rates in practice.
What it gets right: Defaults work. When you change the default option, behavior changes dramatically, with minimal effort and cost. For simple, one-time decisions (enrollment, registration, consent), choice architecture is the most efficient intervention available.
What it gets wrong: The publication bias problem is real and serious. Beyond defaults, the evidence for other nudges is weaker than commonly claimed. Nudges also don’t address root causes. They work by steering choices at the point of decision, which means they’re best for one-time or infrequent decisions, not sustained behavior change. The “libertarian paternalism” framing has also drawn philosophical criticism about who gets to decide what’s “better.”
When to use it: Policy design at scale. Default setting for enrollment and consent. Simplifying complex decision environments. Not for sustained, complex behavior change.
EAST Framework (BIT, 2014)
EAST is the Behavioural Insights Team’s practitioner-friendly distillation of behavioral science. Released in 2014, it was designed to be memorable, actionable, and usable by policymakers who aren’t behavioral scientists.
Core components:
- Easy: Reduce friction. Simplify. Use defaults. Pre-fill forms.
- Attractive: Draw attention. Design rewards well. Use images and color.
- Social: Show what others do. Use commitments. Leverage networks.
- Timely: Prompt at the right moment. Consider present bias. Help people plan.
The evidence:
BIT’s flagship results include tax letter redesigns that produced 17% higher response rates and one of the largest randomized controlled trials ever run in the UK: testing organ donation messages on a high-traffic webpage. A reciprocity-based message (“If you needed an organ transplant, would you have one? If so, please help others”) increased registrations significantly. Though as noted in the Nudge section above, registration rates and actual donation rates are different things.
What it gets right: Simplicity and memorability. Four principles that anyone can apply. The emphasis on “Easy” first is well-supported by evidence. Reducing friction is almost always the highest-return intervention.
What it gets wrong: EAST is atheoretical. It doesn’t explain why these principles work, which limits its ability to generate novel predictions or guide complex intervention design. It’s also limited to nudge-type interventions and can’t address deep motivational or systemic challenges.
When to use it: Quick wins in service design. Government communications. When you need a simple framework for a non-specialist audience. As a complement to deeper models like COM-B, not as a replacement.
MINDSPACE Framework (Dolan et al., 2012)
MINDSPACE is a more granular version of the nudge approach, developed by Paul Dolan and colleagues for the UK Cabinet Office. The mnemonic captures nine behavioral influences that operate largely automatically.
Core components:
- Messenger: We’re influenced by who communicates information
- Incentives: We respond more to losses than gains (loss aversion)
- Norms: We do what others do
- Defaults: We go with the pre-set option
- Salience: We notice what’s novel and relevant
- Priming: Subconscious cues influence behavior
- Affect: Emotions shape decisions
- Commitments: We follow through on public promises
- Ego: We protect our self-image
What it gets right: More specific than EAST. Each component is individually supported by research (mostly). Useful for policy brainstorming.
What it gets wrong: The Priming component is now on shaky ground. The psychological priming literature has been heavily criticized in the replication crisis, with many landmark studies failing to replicate. Including priming as one of nine “robust” effects looks outdated. The framework as a whole has not been tested as an integrated system. It’s more a checklist than a model.
When to use it: Policy brainstorming. As a more detailed alternative to EAST. Treat priming claims with skepticism.
Ecological / Socio-Ecological Model (Bronfenbrenner, 1979; McLeroy et al., 1988)
The Socio-Ecological Model isn’t a behavior change model in the same sense as the others. It’s a framing that prevents a common error: assuming behavior change is entirely an individual problem.
Core components (five levels):
- Individual: Knowledge, attitudes, beliefs, skills
- Interpersonal: Family, friends, social networks
- Organizational: Workplace policies, institutional rules
- Community: Relationships between organizations, community norms
- Policy: Laws, regulations at local to national level
The evidence:
The CDC uses the Socio-Ecological Model as its organizing framework for violence prevention. It has been applied in public health, sports medicine, mental health, and adolescent pregnancy prevention. It doesn’t generate effect sizes because it’s a conceptual framework, not a predictive model.
What it gets right: It’s the only framework that systematically forces you to look beyond the individual. If someone isn’t exercising, the ecological model asks: Is the problem individual (knowledge, motivation)? Interpersonal (no workout partner)? Organizational (no gym at work, no time off)? Community (unsafe neighborhoods, no parks)? Policy (no physical education requirements, car-dependent infrastructure)?
What it gets wrong: It’s too broad to guide specific interventions. It tells you to “think about all levels” but doesn’t tell you what to do at any of them. It needs to be combined with a more specific model (like COM-B) to be actionable.
When to use it: Framing complex behavior change challenges. Ensuring you don’t fall into the “individual blame” trap. Program planning and evaluation. Always use alongside a more specific model.
Part 5: Integrative Frameworks
The models above share a common problem: each captures some drivers of behavior while ignoring others. The following framework attempts to fix that by integrating the pieces into a single comprehensive system.
The Behavioral State Model (Hreha, 2024)
The Behavioral State Model is my attempt to build a comprehensive framework that includes what the existing models leave out. I developed it after years of applied work at Walmart, Google, and other organizations, where I kept running into the same problem: the standard models were missing critical drivers of behavior, and the missing pieces were often the ones that mattered most.
The core idea: at any moment, a person exists in a particular behavioral state that makes certain actions more likely and others less likely. That state is determined by eight components, six internal (“Identity”) and two external (“Context”).
Identity components (internal):
- Personality: Does this behavior align with the person’s interests, values, and dispositional preferences? Personality is one of the strongest predictors of behavior across all domains, yet COM-B, Fogg, and most other models don’t include it explicitly.
- Perception: Does the person believe they can do this, and believe it’s worth doing? This is distinct from actual ability. Someone physically capable of running a mile who believes they can’t will behave identically to someone who actually can’t.
- Emotions: Is the person in an emotional state compatible with the behavior? Most models lump emotions into “motivation,” but emotions are not motivations. They are evolved mechanisms that solve specific adaptive problems (fear triggers avoidance, anger triggers approach, disgust triggers rejection). They need to be addressed on their own terms.
- Abilities: Can the person actually perform the behavior? This covers physical abilities (strength, coordination, stamina) and cognitive abilities (comprehension, attention, memory). Unlike perception, this is about actual capacity.
- Social status/situation: Will performing this behavior raise or lower the person’s standing in their immediate social group? People are exquisitely sensitive to status implications. A behavior that signals low status will be avoided regardless of its health benefits.
- Motivations: Does the person have sufficient incentive, intrinsic or extrinsic, to act? This is the component that most models overemphasize at the expense of everything else.
Context components (external):
- Physical environment: Does the physical setting permit or prevent the behavior? (Access to equipment, time, space, infrastructure.)
- Social environment: Do the immediate social norms and peer behaviors support or undermine the behavior?
The Behavioral State Score:
For any target behavior, you can score each component (0-10) to produce an overall Behavioral State Score. If any component scores near zero, the behavior will not occur regardless of how strong the other components are. A person with high motivation, good abilities, and a supportive environment who perceives the behavior as impossible (perception = 0) will not act.
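The article specifies the 0-10 component scales and the veto property, not an aggregation formula, so here’s one way to operationalize it. The geometric mean is my assumption, chosen precisely because any near-zero component drags the whole score toward zero:

```python
import math

COMPONENTS = [
    "personality", "perception", "emotions", "abilities",
    "social_status", "motivations",                 # identity (internal)
    "physical_environment", "social_environment",   # context (external)
]

def behavioral_state_score(scores: dict[str, float]) -> float:
    """Geometric mean of the eight 0-10 component scores (my assumed
    aggregation; it reproduces the 'any component near zero vetoes the
    behavior' property described above)."""
    values = [scores[c] for c in COMPONENTS]
    return math.prod(values) ** (1 / len(values))

person = dict.fromkeys(COMPONENTS, 8.0)
print(behavioral_state_score(person))   # 8.0 -- uniformly strong state

person["perception"] = 0.0              # believes the behavior is impossible
print(behavioral_state_score(person))   # 0.0 -- one veto zeroes the state
```

A simple average would miss the veto: seven 8s and one 0 average to 7, which wrongly suggests a behavior that will mostly happen.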
Why identity drives behavior more than context:
This is the model’s most contrarian claim. Most behavior change models, especially nudge-based and environmental frameworks, emphasize context. The Behavioral State Model argues that identity components are typically the larger determinant of behavior, for two reasons:
- Self-selection: People choose environments that match their existing personality, abilities, and motivations. The person at the high-end gym didn’t become fit because of the gym. They chose the gym because they were already conscientious and goal-driven. Environmental causation is often illusory.
- Environmental modification: Once people enter an environment, they reshape it to match their identity. A collaborative manager rearranges the workspace. A focused student eliminates distractions. People shape their surroundings just as much as their surroundings shape them.
The practical implication: choose the right behavior, not the right intervention.
If a target behavior scores poorly across the identity components, the answer isn’t to design a better intervention. It’s to choose a different behavior. This is the concept of Behavior Market Fit: a behavior has strong market fit when it aligns with the target population’s personality, abilities, perceptions, emotions, social status concerns, and motivations. Mismatched behaviors fail regardless of context optimization.
This is the single biggest mistake in applied behavior change: spending months designing an intervention for a behavior that was never a good match for the target population in the first place.
What it gets right: Comprehensiveness. The BSM is the only model that explicitly includes personality, distinguishes perception from ability, and treats emotions as separate from motivation. The identity-first framing corrects the field’s overemphasis on environmental interventions. The Behavioral State Score provides a structured diagnostic that reveals where the actual bottleneck is.
What it gets wrong: The BSM has no independent meta-analysis and no published RCTs testing the full eight-component model. That’s a real limitation. The TPB has 185+ studies. COM-B has been adopted by national health systems. The BSM has applied work at named organizations and a published article series. That is a different tier of evidence, and I won’t pretend otherwise.
The component scoring system relies on practitioner judgment, which means two practitioners could score the same person differently on “perception” or “social status.” This is the same subjectivity problem I flagged in COM-B, and it applies here too.
The identity-first claim is supported by personality research showing trait-behavior correlations of r=.40-.60 when aggregated across contexts, but most of that evidence is correlational, not experimental. Situationist critics will argue that the apparent dominance of identity over context reflects measurement artifacts rather than genuine causal priority. But there’s a problem with that objection: situationism lives in social psychology, and social psychology has a roughly 25% reproducibility rate. Personality science, by contrast, reproduces at over 80% (Soto, 2019). The field generating the critique is far less reliable than the field generating the evidence. That doesn’t settle the debate, but it shifts the burden of proof considerably.
A fair critic could also argue the BSM repackages existing constructs under new labels: “perception” overlaps with self-efficacy, “motivations” appears in every model, “abilities” maps to COM-B’s capability. The value-add is the integrative scoring and the identity-first framing, not entirely new constructs.
When to use it: When you need to understand why a behavior isn’t happening and whether you’re targeting the right behavior in the first place. Before designing any intervention. When existing models have failed and you suspect the problem isn’t execution but fit. Product design, program design, coaching, and organizational behavior change.
The Master Comparison Table
| Model | Year | Type | Core Focus | Key Evidence | Biggest Limitation |
|---|---|---|---|---|---|
| Theory of Planned Behavior | 1991 | Cognitive | Attitudes → Intentions → Behavior | R²=27% behavior, 39% intentions (185 studies) | Intention-behavior gap; ignores habits |
| Health Belief Model | 1966 | Cognitive | Threat perception → Health behavior | Low effect sizes across all constructs (18 studies) | Weak predictive validity |
| Social Cognitive Theory | 1986 | Cognitive | Self-efficacy → Behavior | Self-efficacy r=.30-.45 (536 effect sizes, N=421K) | Too broad; hard to operationalize |
| Protection Motivation Theory | 1975 | Cognitive | Threat + Coping → Protection | d=0.52 overall (65 studies, N=30K) | Limited to threat-based motivation |
| Transtheoretical Model | 1983 | Stage | Stages of readiness → Change | Most cited (33% of articles); d=0.20 in internet trials | Stage-matching doesn’t outperform alternatives |
| Self-Determination Theory | 1985 | Motivational | Autonomous motivation → Lasting change | 184 health datasets; competence β=.35 → motivation | Small effects on actual health outcomes |
| IMB Model | 1992 | Motivational | Info + Motivation + Skills → Behavior | 7% of articles; HIV/adherence focus | Too simple for complex behaviors |
| COM-B / BCW | 2011 | Design | Capability + Opportunity + Motivation | Synthesized 19 frameworks; adopted by NHS, WHO | Complex; requires training |
| Fogg B=MAP | 2009 | Design | Motivation + Ability + Prompt | Limited peer-reviewed evidence | Ignores sustained/complex behaviors |
| BCT Taxonomy | 2013 | Design | 93 techniques in 16 groups | Multiple meta-analyses of effective BCTs | 93 techniques is overwhelming; atheoretical |
| PRIME Theory | 2006 | Design | Plans → Motives → Impulses → Responses | Underpins UK smoking cessation services | Less empirical testing; addiction-focused |
| Nudge / Choice Architecture | 2008 | Policy | Change context, not minds | d=0.43 (200+ studies); d=0.01-0.02 after bias correction | Evidence likely inflated; one-time decisions only |
| EAST | 2014 | Policy | Easy, Attractive, Social, Timely | BIT tax letters +17%; organ donation RCTs | Atheoretical; limited to nudge-type |
| MINDSPACE | 2012 | Policy | 9 automatic behavioral influences | Individual components evidence-based (mostly) | Priming on shaky ground; not tested as system |
| Ecological Model | 1988 | Framing | 5 levels: individual → policy | CDC organizing framework | Too broad for specific intervention design |
| Behavioral State Model | 2024 | Integrative | 8 components (6 identity + 2 context) → Behavioral State | Applied at Walmart, Google; identity-first framing | Newer; less independent empirical testing |
Which Model Should You Use?
The honest answer: it depends on what you’re trying to do.
If you’re designing a health communication about a specific threat: Use Protection Motivation Theory. It tells you exactly what to address: make the threat feel real, show the recommended action works, and build confidence.
If you need to diagnose why a behavior isn’t happening: Start with COM-B. It forces you to consider capability, opportunity, AND motivation. Most other models only address one or two of these.
If you’re designing a sustained behavior change intervention: Use COM-B for diagnosis, the BCW for selecting intervention functions, and the BCT Taxonomy for choosing specific techniques. Add SDT principles to ensure you’re building autonomous motivation, not just compliance.
If you’re a coach, therapist, or healthcare provider: Use SDT’s autonomy-supportive approach for how you communicate. Use TTM stages as a conversational gauge of readiness (but don’t rely on formal stage-matching). Focus on building self-efficacy (from SCT).
If you’re designing a product or app: Fogg’s B=MAP is a reasonable starting point for feature-level design (reduce friction, add prompts). But for the overall behavior change strategy, use COM-B or the ecological model to make sure you’re not missing the bigger picture.
If you’re making policy: Start with the ecological model to ensure you’re addressing multiple levels. Use EAST or MINDSPACE for specific policy design. Use nudge/choice architecture for default and enrollment decisions. Use COM-B/BCW if you’re designing a comprehensive policy package.
If you’re an academic trying to predict behavior: The TPB still explains the most intention variance. But know its limits: intentions explain at most 28% of behavioral variance, and that drops with objective measures and longer follow-ups.
If you suspect you’re targeting the wrong behavior entirely: Use the Behavioral State Model. Score the target behavior across all eight components. If the identity components (personality, perception, emotions, abilities, social status, motivations) score poorly, the problem isn’t your intervention. It’s your behavior choice. Find a better-fitting behavior before designing anything.
If you want the single highest-return construct across all models: Self-efficacy. It’s the most consistent predictor of behavior change in every meta-analysis, across every domain, in every model that includes it. Build genuine competence and confidence, and you’re doing the most evidence-backed thing available.
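And if you want this whole section as a cheat sheet, it compresses into a lookup. This is an illustrative distillation of the recommendations above, not an official decision tool:

```python
# The section above, compressed (illustrative, not exhaustive).
MODEL_FOR_GOAL = {
    "communicate a health threat":    "Protection Motivation Theory",
    "diagnose a missing behavior":    "COM-B",
    "design sustained change":        "COM-B + BCW + BCT Taxonomy, with SDT",
    "coach, treat, or manage":        "SDT communication + SCT self-efficacy",
    "build a product or app":         "Fogg B=MAP, sanity-checked with COM-B",
    "make policy":                    "Ecological framing + EAST/MINDSPACE",
    "predict intentions":             "Theory of Planned Behavior",
    "check behavior-population fit":  "Behavioral State Model",
}

def recommend(goal: str) -> str:
    return MODEL_FOR_GOAL.get(goal, "Start with COM-B and build self-efficacy.")

print(recommend("build a product or app"))
```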
The Uncomfortable Truth About Behavior Change Models
After reviewing the evidence across all 16 models, three patterns emerge that the field doesn’t talk about enough.
First, no model explains most behavioral variance. The best-performing model (TPB) explains 31% of behavioral variance with self-reports and just 21% with objective measures. That means 69-79% of why people do what they do is not captured by the best model we have. This isn’t a failure of any single model. It reflects the genuine complexity of human behavior.
And the gap gets worse outside the lab. Stefano DellaVigna and Elizabeth Linos analyzed 126 RCTs from two major U.S. nudge units (BIT North America and the federal Office of Evaluation Sciences) and found that the average academic nudge trial produces an 8.7 percentage point effect, while the average government trial produces just 1.4 percentage points, roughly a sixth of the academic effect. The interventions informed by these models work far less well in messy real-world conditions than in controlled studies.
Second, theory doesn’t reliably improve interventions. Andrew Prestwich and colleagues found that 56% of behavior change interventions report a theoretical base, but 90% of those don’t properly link their techniques to theoretical constructs. More troublingly, the evidence that theory-based interventions outperform atheoretical ones is mixed. Webb and colleagues found that extensive theory use improves internet intervention effects, but the differences between theories were modest (d=0.15 to d=0.36).
Third, the most practically useful insights cut across models. Self-efficacy appears in SCT, PMT, TTM, IMB, and COM-B. Reducing barriers appears in HBM, Fogg, EAST, and choice architecture. Environmental restructuring appears in COM-B, ecological models, and nudge theory. The most powerful constructs aren’t owned by any single model.
The field’s fragmentation into 82+ competing theories may actually be the problem. As Susan Michie’s team demonstrated with COM-B, the way forward is probably integration, not competition. Find the core constructs that reliably predict and change behavior across contexts, regardless of which theoretical tent they live in.
But even more important than picking the right model is picking the right behavior. The biggest failure mode in applied behavior change isn’t using the wrong framework. It’s spending months optimizing an intervention for a behavior that was never a good match for the target population. That’s person-behavior fit. It’s the reason I built the Behavioral State Model, and it’s the question I’d ask before reaching for any of the 15 other frameworks in this article.
For a deeper dive into what actually works for lasting behavior change, see our complete guide to behavior change.
Frequently Asked Questions
What is the most effective behavior change model?
No single model is “most effective” because effectiveness depends on context. The COM-B model (Michie et al., 2011) is the most comprehensive for intervention design, as it was built by synthesizing 19 existing frameworks. For predicting behavior, the Theory of Planned Behavior explains the most variance (27% of behavior across 185 studies). For individual-level predictors, self-efficacy from Social Cognitive Theory has the largest average effect size (r=.30-.45 across 421,880 participants).
How many behavior change models are there?
Rachel Davis and colleagues identified 82 distinct theories of behavior and behavior change in a 2015 scoping review published in Health Psychology Review. Of those 82, just four theories accounted for 63% of published research: the Transtheoretical Model (33%), Theory of Planned Behavior (13%), Social Cognitive Theory (11%), and the Information-Motivation-Behavioral Skills Model (7%).
What is the difference between COM-B and the Behaviour Change Wheel?
COM-B is the behavioral diagnosis tool at the center of the Behaviour Change Wheel. It identifies whether a behavior problem stems from Capability, Opportunity, or Motivation deficits. The Behaviour Change Wheel surrounds COM-B with 9 intervention functions (education, persuasion, training, etc.) and 7 policy categories that address those deficits. COM-B tells you what’s wrong. The BCW tells you what to do about it.
Does it take 66 days to form a habit?
This number comes from Lally et al. (2010), who found a median of 66 days among the 48% of participants who showed the expected automaticity pattern. The range was 18 to 254 days, exercise took 1.5x longer (median 91 days), and the researchers measured automaticity, not habit in the scientific sense. For a detailed breakdown of what this study actually found, see our analysis of the Lally study.
What is the best behavior change model for exercise?
No model has strong evidence specifically for exercise habit formation. The COM-B model provides the best diagnostic framework (identifying whether the barrier is capability, opportunity, or motivation). Self-Determination Theory helps design interventions that build autonomous motivation rather than reliance on external pressure. For timelines, exercise behaviors took a median of 91 days to reach automaticity in the Lally study, and more than half of exercise participants hadn’t reached the asymptote when the study ended at 84 days.
Are behavior change models evidence-based?
Some are, some aren’t. The Theory of Planned Behavior has 185+ studies in its primary meta-analysis. Social Cognitive Theory’s self-efficacy construct has been validated across 536 effect sizes. The Transtheoretical Model has been heavily criticized, with systematic reviews finding no evidence that stage-matched interventions outperform alternatives. The Fogg Behavior Model has limited peer-reviewed evidence despite wide commercial adoption. The nudge literature faces serious publication bias concerns that may inflate reported effect sizes.
References
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211.
Armitage, C. J., & Conner, M. (2001). Efficacy of the Theory of Planned Behaviour: A meta-analytic review. British Journal of Social Psychology, 40, 471-499.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Prentice-Hall.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191-215.
Bridle, C., Riemsma, R. P., Pattenden, J., Sowden, A. J., Mather, L., Watt, I. S., & Walker, A. (2005). Systematic review of the effectiveness of health behavior interventions based on the transtheoretical model. Psychology & Health, 20(3), 283-301.
Carpenter, C. J. (2010). A meta-analysis of the effectiveness of health belief model variables in predicting behavior. Health Communication, 25(8), 661-669.
Davis, R., Campbell, R., Hildon, Z., Hobbs, L., & Michie, S. (2015). Theories of behaviour and behaviour change across the social and behavioural sciences: A scoping review. Health Psychology Review, 9(3), 323-344.
Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation. Psychological Bulletin, 125(6), 627-668.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. Plenum.
DellaVigna, S., & Linos, E. (2022). RCTs to scale: Comprehensive evidence from two nudge units. Econometrica, 90(1), 81-116.
Dolan, P., Hallsworth, M., Halpern, D., King, D., Metcalfe, R., & Vlaev, I. (2012). Influencing behaviour: The mindspace way. Journal of Economic Psychology, 33(1), 264-277.
Fisher, J. D., & Fisher, W. A. (1992). Changing AIDS-risk behavior. Psychological Bulletin, 111(3), 455-474.
Floyd, D. L., Prentice-Dunn, S., & Rogers, R. W. (2000). A meta-analysis of research on protection motivation theory. Journal of Applied Social Psychology, 30(2), 407-429.
Gollwitzer, P. M., & Sheeran, P. (2006). Implementation intentions and goal achievement: A meta-analysis of effects and processes. Advances in Experimental Social Psychology, 38, 69-119.
Hreha, J. (2024). The Behavioral State Model. The Behavioral Scientist. https://www.thebehavioralscientist.com/articles/the-behavioral-state-model
Hummel, D., & Maedche, A. (2019). How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies. Journal of Behavioral and Experimental Economics, 80, 47-58.
Lally, P., van Jaarsveld, C. H. M., Potts, H. W. W., & Wardle, J. (2010). How are habits formed: Modelling habit formation in the real world. European Journal of Social Psychology, 40(6), 998-1009.
Littell, J. H., & Girvin, H. (2002). Stages of change: A critique. Behavior Modification, 26(2), 223-273.
McEachan, R. R. C., Conner, M., Taylor, N. J., & Lawton, R. J. (2011). Prospective prediction of health-related behaviours with the Theory of Planned Behaviour: A meta-analysis. Health Psychology Review, 5(2), 97-144.
McLeroy, K. R., Bibeau, D., Steckler, A., & Glanz, K. (1988). An ecological perspective on health promotion programs. Health Education Quarterly, 15(4), 351-377.
Mertens, S., Herberz, M., Hahnel, U. J. J., & Brosch, T. (2022). The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. Proceedings of the National Academy of Sciences, 119(1), e2107346118.
Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., … & Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques. Annals of Behavioral Medicine, 46(1), 81-95.
Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6, 42.
Ng, J. Y. Y., Ntoumanis, N., Thøgersen-Ntoumani, C., Deci, E. L., Ryan, R. M., Duda, J. L., & Williams, G. C. (2012). Self-determination theory applied to health contexts: A meta-analysis. Perspectives on Psychological Science, 7(4), 325-340.
Ntoumanis, N., Ng, J. Y. Y., Prestwich, A., Quested, E., Hancox, J. E., Thøgersen-Ntoumani, C., … & Williams, G. C. (2021). A meta-analysis of self-determination theory-informed intervention studies in the health domain. Health Psychology Review, 15(2), 214-244.
Prochaska, J. O., & DiClemente, C. C. (1983). Stages and processes of self-change of smoking: Toward an integrative model of change. Journal of Consulting and Clinical Psychology, 51(3), 390-395.
Rogers, R. W. (1975). A protection motivation theory of fear appeals and attitude change. The Journal of Psychology, 91(1), 93-114.
Rosenstock, I. M. (1966). Why people use health services. Milbank Memorial Fund Quarterly, 44(3), 94-127.
Sheeran, P. (2002). Intention-behaviour relations: A conceptual and empirical review. European Review of Social Psychology, 12(1), 1-36.
Stajkovic, A. D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124(2), 240-261.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
Webb, T. L., Joseph, J., Yardley, L., & Michie, S. (2010). Using the internet to promote health behavior change. Journal of Medical Internet Research, 12(1), e4.
West, R. (2005). Time for a change: Putting the Transtheoretical (Stages of Change) Model to rest. Addiction, 100(8), 1036-1039.
West, R. (2006). Theory of addiction. Wiley-Blackwell.
Wood, W., & Rünger, D. (2016). Psychology of habit. Annual Review of Psychology, 67, 289-314.