How change happens in business: nudge better

Nudges can help guide people along the path to change, but why do they sometimes give way to bans and mandates? Harvard Professor and author of How Change Happens, Cass Sunstein, explains

Choice architects might have started with a hypothesis: that a nudge – say, disclosure of information – will change behaviour in a desirable direction. Perhaps the hypothesis was plausible but turned out to be wrong. Once people are given more information, they keep doing exactly what they have been doing.

Again, the failure of the hypothesis does not, by itself, tell us whether something else should be done. Perhaps people are given a warning about the risks associated with some anti-cancer drug; perhaps they continue to use the drug in spite of the warning. If so, there might be no problem at all. The goal of the warning is to ensure that choices are informed, not that they are different. If people’s behaviour does not change after they receive information about the risks associated with certain activities (say, American football or boxing), nothing might have gone wrong.

Ineffective nudges and the introduction of default rules

Suppose, however, that the underlying problem is significant, and that once people are informed, we have every reason to think at least some of them should do something different. Perhaps people are not applying for benefits for which an application would almost certainly be in their interest. Perhaps they are failing to engage in behaviour that would much improve their economic situation or their health (say, taking prescription medicine or seeing a doctor). If so, then other nudges might be tried and tested instead – for example, a clearer warning, uses of social norms, or a default rule. By itself, information might not capture people’s attention, and some other approach might be superior. And if a default rule fails, it might make sense to accompany it with information or with warnings. There may well be no alternative but to experiment and to test – to identify a way to nudge better.

Consider a few possibilities. We have seen that if the goal is to change behaviour, choice architects should ‘make it easy’; in the case of an ineffective nudge, the next step might be to ‘make it even easier’. Information disclosure might be ineffective if it is complex but succeed if it is simple. A reminder might fail if it is long and poorly worded but succeed if it is short and vivid. We have also seen that people’s acts often depend on their social meaning, which can work like a subsidy or a tax; if a nudge affects meaning, it can change a subsidy into a tax or vice versa. For example, certain kinds of information disclosure, and certain kinds of warnings, can make risk-taking behaviour seem silly, stupid or uncool. A nudge might be altered so that it affects social meaning in the desired direction. Publicising social norms might move behaviour, but only if the norms are those of a particular community rather than the nation as a whole. If publicising national norms does not work, it might make sense to focus on norms that have sway in the relevant community.

Freedom failed

In some cases, freedom of choice itself might be an ambiguous good. For behavioural or other reasons, an apparently welcome and highly effective ‘counternudge’, leading consumers or employees in certain directions, might turn out to be welfare-reducing. In extreme cases, it might ruin their lives. People might suffer from present bias, optimistic bias, or a problem of self-control. The counternudge might exploit a behavioural bias of some kind. What might be necessary is some kind of counter-counternudge – for example, a reminder or a warning to discourage people from opting into a programme that generally is not in their interest.

In the case of overdraft protection programmes, some of those who opt in and who end up receiving that protection are probably worse off as a result. Perhaps they do not understand the programme and its costs; perhaps they were duped by a behaviourally informed messaging campaign. Perhaps they are at risk of overdrawing their accounts not because they need a loan, but because they have not focused on those accounts and have not noticed that they are about to overdraw them. Perhaps they are insufficiently informed or attentive. To evaluate the existing situation, we need to know a great deal about the population of people who opt in. In fact, this is often the key question, and it is an empirical one. The fact that they have opted in is not decisive.

Ineffective defaults and manipulative counternudges

The example can be taken as illustrative. If a default rule or some other nudge is well-designed to protect people from their own mistakes and it does not stick, then its failure is nothing to celebrate. The fact of its ineffectiveness is a tribute to the success of a self-interested actor seeking to exploit behavioural biases. The counternudge is a form of manipulation or exploitation, something to counteract rather than to celebrate. Perhaps the counternudge encourages people to smoke or to drink excessively; perhaps it undoes the effect of the nudge, causing premature deaths in the process.

The same point applies to strong antecedent preferences, which might be based on mistakes of one or another kind. A GPS device is a defining example of a nudge, and if people reject the indicated route on the ground that they know better, they might end up lost. The general point is that if the decision to opt out is a blunder for many or most, then there is an argument for a more aggressive approach. The overdraft example demonstrates the importance of focusing not only on default rules but on two other kinds of rules as well, operating as counter-counternudges: altering rules and framing rules.

Altering rules and framing rules

Altering rules establish how people can change the default. If choice architects want to simplify people’s decisions, and if they lack confidence about whether a default is suitable for everyone, they might say that consumers can opt in or opt out by making an easy phone call (good) or by sending a quick email (even better).

Alternatively, choice architects, confident that the default is right for the overwhelming majority of people, might increase the costs of departing from it. For example, they might require people to fill out complex forms or impose a cooling-off period. They might also say that even if people make a change, the outcome will ‘revert’ to the default after a certain period (say, a year), requiring repeated steps. Or they might require some form of education or training, insisting on a measure of learning before people depart from the default.

Framing rules establish and regulate the kinds of ‘frames’ that can be used when trying to convince people to opt in or opt out. We have seen that financial institutions enlisted loss aversion in support of opting in. Behaviourally informed strategies of this kind could turn out to be highly effective. But that is a potential problem. Even if they are not technically deceptive, they might count as manipulative, and they might prove harmful. Those who believe in freedom of choice but seek to avoid manipulation or harm might want to constrain the permissible set of frames – subject, of course, to existing safeguards for freedom of speech. Framing rules might be used to reduce the risk of manipulation.

Consider an analogy. If a company says that its product is ‘90% fat free’, people are likely to be drawn to it, far more so than if the company says that its product is ‘10% fat’. The two phrases mean the same thing, and the 90% fat-free frame is legitimately seen as a form of manipulation. In 2011, the American government allowed companies to say that their products are 90% fat free – but only if they also say that they are 10% fat. We could imagine similar constraints on misleading or manipulative frames aimed at getting people to opt out of the default. Alternatively, choice architects might use behaviourally informed strategies of their own, supplementing a default rule with, say, uses of loss aversion or social norms to magnify its impact.

Difficult choices for choice architects

To the extent that choice architects are in the business of choosing among altering rules and framing rules, they can take steps to make default rules more likely to stick, even if they do not impose mandates. They might conclude that mandates and prohibitions would be a terrible idea, but that it makes sense to make it harder for people to depart from default rules. Sometimes that is the right conclusion. The problem is that when choice architects move in this direction, they lose some of the advantages of default rules, which have the virtue of easy reversibility, at least in principle. If the altering rules are made sufficiently onerous, the default rule might not be all that different from a mandate.

There is another possibility: choice architects might venture a more personalised approach. They might learn that one default rule suits one group of people and that another suits a different group; by tailoring default rules to diverse situations, they might have a larger effect than they would with a mass default rule. Or they might learn that an identifiable subgroup is opting out, either for good reasons or for bad ones. If the reasons do not seem good, choice architects might adopt altering rules or framing rules as safeguards, or they might enlist, say, information and warnings. If they can be made to work well, more personalised approaches have the promise of preserving freedom of choice while simultaneously increasing effectiveness.

When freedom of choice isn’t necessarily a good idea

But preserving freedom of choice might not be a good idea. Indeed, we can easily imagine cases in which a mandate or ban might be justified on behavioural or other grounds. Most democratic nations have mandatory social security systems, based in part on a belief that ‘present bias’ is a potent force and a conclusion that some level of compulsory savings is justified on welfare grounds. Food-safety regulations forbid people from buying goods that pose risks that reasonable people would not run. Such regulations might be rooted in a belief that consumers lack relevant information (and it is too difficult or costly to provide it to them), or they might be rooted in a belief that people suffer from limited attention or optimistic bias. Some medicines are not allowed to enter the market, and for many others a prescription is required; people are not permitted to purchase them on their own.

Many occupational safety and health regulations ultimately have a paternalistic justification, and they take the form of mandates and bans, not nudges. Consider, for example, the domains of fuel economy and energy efficiency. To be sure, fuel-economy and energy-efficiency requirements reduce externalities in the form of conventional air pollutants, greenhouse gases, and energy insecurity. But if we consider only those externalities, the benefits of the requirements are usually lower than the costs. The vast majority of the monetised benefits accrue to consumers, in the form of reduced costs of gasoline and energy.

On standard economic grounds, those benefits should not count in the analysis of costs and benefits, because consumers can obtain them through their own free choices; if they are not doing so, it must be because the relevant goods (automobiles, refrigerators) are inferior along some dimension. The US government’s current response is behavioural: in the domain of fuel economy and energy efficiency, consumers are making some kind of mistake, perhaps because of present bias, perhaps because of a lack of sufficient salience. Some people contest this argument. But if it is correct, the argument for some kind of mandate is secure on welfare grounds.

Making the case for incentives, mandates and bans

The same analysis holds, and more simply, if the interests of third parties are involved. Default rules are typically meant to protect choosers, but in some cases, third parties are the real concern. For example, a green default rule, designed to prevent environmental harm, is meant to reduce externalities and to solve a collective action problem, not to protect choosers as such. A nudge, in the form of information disclosure or a default rule, is not the preferred approach to pollution (including carbon emissions). If a nudge is to be used, it is because it is a complement to more aggressive approaches or because such approaches are not feasible (perhaps for political reasons). But if a default rule proves ineffective – for one or another of the reasons sketched here – there will be a strong argument for economic incentives, mandates and bans.

This is an edited extract from How Change Happens by Cass Sunstein. Courtesy of the MIT Press.

Cass Sunstein is a Professor at Harvard, and the founder and director of the Program on Behavioral Economics and Public Policy at Harvard Law School. From 2009 to 2012, he was Administrator of the White House Office of Information and Regulatory Affairs.
