Uber recently made headlines with this feature by the New York Times on how it uses behavioural insights to get drivers to work longer hours and go to areas of high passenger demand. As Uber drivers are not employees, Uber has very little formal influence over their behaviour – it cannot mandate how much drivers drive or what areas they cover. Behavioural nudges are a relatively costless way of getting drivers to do what Uber wants, compared with monetary incentives. But is Uber particularly guilty?
A Brief Overview of Behavioural Nudges
The use of behavioural nudges to shape customer, constituent and employee behaviour is certainly not unique. Examples abound, including:
- Businesses and customers. Positive reinforcement is the bedrock of modern advertising. Amazon’s Jeff Bezos famously said, “through our Selling Coach program, we generate a steady stream of automated machine-learned ‘nudges’ (more than 70 million in a typical week).” Games like Candy Crush turn us into addicts by providing mini rewards that release the neurochemical dopamine, tapping into the same neural circuitry involved in addiction.
- Governments and constituents. The UK Government has its own Behavioural Insights Team, which has helped sign up an extra 100,000 organ donors a year and doubled the number of army applicants simply by changing default options and rewording emails.
- Employers and employees. One of the reasons Google will fix your car and take care of your health and food needs all in one place is that it knows this will get you to stay longer and work harder.
Now that you’ve seen some examples, what exactly are “behavioural nudges”? Definitions vary, but I’ll boil it down to two things:
- Applying an insight about a person’s decision-making calculus (one the person might not even know about himself!) to get him to make the decision you want.
- The person is likely unaware that this tool is being used (unlike a law or policy, which he actively shapes his behaviour to comply with).
For example, the principle of “loss aversion” suggests that humans are more likely to respond to a potential loss than to a potential gain. When you want drivers to drive for two more hours, tell them they’d lose out on $200 if they didn’t – don’t tell them they’ll gain $200. We are also a lot more vulnerable to peer pressure than we think: the UK Government nudged forward the payment of £30m a year in income tax by introducing reminder letters that informed recipients that most of their neighbours had already paid. And never underestimate the power of inertia – which is why companies adopt “opt-out” rather than “opt-in” clauses.
An in-depth article on leading thinkers in the field of behavioural science (Kahneman, Tversky, Thaler, Lewis), can be found here.
The use of behavioural nudges is not new, but data has made it an increasingly powerful tool.
The potential for behavioural nudges is increasing with the proliferation of data about individuals. The more you understand how people make decisions – to work longer hours, to buy your product, to pay their taxes, to brush their teeth, to play a game – the more effectively you can nudge them towards your desired behaviour. Facebook knows more about me than I do. Uber knows more about its drivers than the drivers know about themselves.
As a result, these companies can push buttons I didn’t even know existed. They have the potential to hack my operating system and change my behaviour.
Hence the ethical question of when a “nudge” becomes outright manipulation is more pertinent than ever.
Here are several ways to think about whether a “nudge” is being used ethically. (By the way, some people argue that it’s never OK to curb someone’s “moral freedom” through nudges, but I find that too idealistic – nudges have been used since time immemorial. It has to be a matter of degree.)
First, what is the inherent goodness of the outcome for the target population?
On the positive extreme, behaviours such as showing up at a doctor’s appointment, attending school or paying bills on time can be seen as actions that are positive for the individual. On the negative extreme, you could have outcomes such as an alcoholic purchasing more alcohol, or a suicidal person being nudged off the ledge.
There is huge scope for debate in between the extremes. Uber could argue that getting drivers to work longer hours during peak period is good for their earnings. Facebook would argue that repeatedly pushing advertisements that users are more likely to click helps them find what they need and like faster.
But here are two sub-questions to consider, in Uber’s case:
- What is the distribution of benefits accruing to Uber vs the driver if the driver changes his behaviour? In this case, there seems to be a direct trade-off between Uber’s and drivers’ interests. As more drivers come onto the platform in response to the nudges, surge pricing dissipates and drivers lose that premium, while Uber gets the benefit of more rides and hence more earnings.
- Is there an intention to deceive? The author suggests that some of Uber’s methods nudged drivers towards geographical areas on the pretext of a surge, but when drivers got there, they found there was none. Even if this was not the intention, the asymmetry of information is unfair to drivers. More transparency is needed, perhaps by providing drivers a live feed of surge rates in various areas, including when surge is dropping.
Second, how easy is it to “opt-out”?
The ‘opt-out’ technique is one of the most commonly used “nudges”: always set your preferred option as the default, and count on human inertia (or ignorance) to keep people there. If you are a Netflix user, you’ve experienced this: once your episode ends, the next one comes on automatically in ten seconds. It is a nudge to keep you watching, but you can turn off this feature permanently. Google and Facebook will send you personalized ads, but you can opt-out and get those replaced by randomised advertisements instead.
If you are an Uber driver, you can also temporarily turn off the forward-dispatch feature, which dispatches a new ride to you before the current one ends (keeping you constantly driving, just as Netflix keeps you constantly watching). However, there is no way to turn it off permanently: it keeps popping back on when you take a new ride, so you have to be constantly proactive about stopping it if you don’t want to overwork. Does the lack of a permanent opt-out make Uber more guilty? Perhaps. But I would like to find out more about the design considerations of both Uber and Lyft before giving a definitive view (hit me up if you have further insight!).
Generally, how proactively institutions educate their users/employees about the opt-out function matters, as does how easy it is to opt-out.
A More Important Question
So is Uber particularly guilty? On the surface, it seems so. But want to hear my real answer? I have no idea, simply because much of the nudging that institutions do today is invisible, making comparison impossible. We – as users, employees, constituents – do not even know that it is happening, and there is no legal obligation to tell us.
Hence, rather than ask whether Uber is guiltier than other institutions that deploy “nudges”, I believe the more important question is: is self-regulation by these institutions sufficient? If not, does anyone have the moral high ground to arbitrate? Should there be a system where institutions report their use of “nudges” and hold each other accountable? Would love to hear your thoughts.