How Uber and other digital platforms could trick us using behavioral science – unless we act fast


The biggest digital platforms – Uber, Airbnb, Facebook, eBay, and others – are collecting so much data on how we live that they already have the capability to manipulate their users on a grand scale; they can predict behavior and influence our decisions on where to click, share and spend

Abbey Stemler, Joshua E. Perry, and Todd Haugh

Uber’s business model is incredibly simple: It’s a platform that facilitates exchanges between people. And Uber’s been incredibly successful at it, almost eliminating the transaction costs of doing business in everything from shuttling people around town to delivering food.

This is one of the reasons Uber is now among the most valuable companies in the world after its shares began trading on the New York Stock Exchange on May 10.

Yet its US$82.4 billion market capitalization may pale in comparison to the wealth of user data it’s accumulating. If you use Uber – or perhaps even if you don’t – it has amassed a treasure trove of data about you, including your location, gender, spending history, contacts, phone battery level and even whether you’re on the way home from a one-night stand. It may soon know whether you’re drunk or not.

While that’s scary enough, combine all that data with Uber’s expertise at analyzing it through the lens of behavioral science and you have a dangerous potential to exploit users for profit.

Uber’s hardly alone. Our research shows the biggest digital platforms – Airbnb, Facebook, eBay, and others – are collecting so much data on how we live that they already have the capability to manipulate their users on a grand scale. They can predict behavior and influence our decisions on where to click, share and spend.

While most platforms aren’t using all these capabilities yet, manipulation through behavioral psychology techniques can occur quietly and leave little trace. If we don’t establish rules of the road now, it’ll be much harder to detect and stop later.

‘Choice architecture’

A platform can be any space that facilitates transactions between buyers and sellers. Traditional examples include flea markets and trading floors.

A digital platform serves the same purpose but gives the owner the ability to “mediate” its users while they’re using it – and often when they’re not. By that we mean it can observe and learn an incredible amount of information about user behavior in order to perfect what behavioral scientists call “choice architectures,” inconspicuous design elements intended to influence human behavior through how decisions are presented.

For example, Uber has experimented with its drivers to determine the most effective strategies for keeping them on the road as long as possible. These strategies include playing into cognitive biases such as loss aversion and the tendency to overestimate low-probability events, even if a driver is barely earning enough money to make it worth her while. Drivers end up like gamblers at a casino, urged to play just a little longer despite the odds.

Uber didn’t immediately respond to a request for comment.
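
To make the mechanics concrete, here is a minimal sketch, in Python, of how an earnings-target prompt exploiting loss aversion might work. Every name, threshold and message below is invented for illustration; Uber’s actual systems are proprietary and undoubtedly far more sophisticated.

```python
# Hypothetical sketch of a loss-aversion "earnings target" nudge.
# All names, thresholds and message templates are invented for
# illustration; nothing here reflects Uber's actual code.
from typing import Optional


def earnings_nudge(session_earnings: float, target: float) -> Optional[str]:
    """Return a prompt urging a driver to stay online, or None.

    Framing the remaining gap as a near-miss ("only $6 away") plays on
    loss aversion: people work harder to avoid losing an almost-reached
    goal than to gain the same amount framed neutrally.
    """
    gap = target - session_earnings
    if gap <= 0:
        return None  # Target already met; no nudge needed.
    if gap <= 0.10 * target:  # Close to the goal: the persuasive "hot zone".
        return (f"You're only ${gap:.2f} away from making ${target:.0f}. "
                "Are you sure you want to go offline?")
    return None


# A driver who has earned $324 toward a $330 target gets the prompt.
print(earnings_nudge(324.0, 330.0))
```

A few lines like these, delivered at the right moment, are all it takes to turn a rational stopping decision into a near-miss that feels like a loss.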

Airbnb also experiments with its users. It has used behavioral science to get hosts to lower their rates and accept bookings without screening guests – which creates real risks for hosts, particularly when they are sharing their own apartment.

While these examples seem relatively benign, they demonstrate how digital platforms are able to quietly design systems to direct users’ actions in potentially manipulative ways.

And as platforms grow, they only become better choice architects. With the huge influx of investor money from its IPO to fund more data collection and behavioral science research, Uber could move into dangerously unethical territory – easy to imagine given its past practices.

For example, if the app recognizes that you are drunk or in a neighborhood you rarely travel to – and one that its data show is high in crime – it could charge you a higher rate, knowing you’re unlikely to refuse.
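
A deliberately simplified sketch shows how quietly such a rule could operate. Every signal and multiplier below is invented for illustration; this is emphatically not Uber’s pricing code, only a demonstration of how few lines such a rule would take.

```python
# Hypothetical sketch of "vulnerability pricing" -- invented for
# illustration; this does not reflect any real platform's code.
from dataclasses import dataclass


@dataclass
class RiderContext:
    base_fare: float
    likely_impaired: bool   # e.g., inferred from erratic typing or swiping
    unfamiliar_area: bool   # the rider rarely requests trips from here
    high_crime_area: bool   # neighborhood-level crime statistics
    battery_level: float    # phone battery, from 0.0 to 1.0


def quoted_fare(ctx: RiderContext) -> float:
    """Quote a fare that rises with the rider's inferred desperation.

    Each condition raises the price exactly when the rider is least
    able or willing to refuse, and nothing in the final number tells
    the user which signals produced it.
    """
    multiplier = 1.0
    if ctx.likely_impaired:
        multiplier += 0.30
    if ctx.unfamiliar_area and ctx.high_crime_area:
        multiplier += 0.40
    if ctx.battery_level < 0.10:  # about to lose the ability to rebook
        multiplier += 0.15
    return round(ctx.base_fare * multiplier, 2)


# A $20 ride quoted to a tipsy rider stranded in an unfamiliar,
# high-crime neighborhood with a dying phone.
ride = RiderContext(20.0, likely_impaired=True, unfamiliar_area=True,
                    high_crime_area=True, battery_level=0.05)
print(quoted_fare(ride))  # 37.0
```

The danger is precisely this opacity: the rider sees one number, with no trace of the behavioral inferences behind it.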

Legal challenges

And it’s not all speculation.

In an effort to deceive law enforcement trying to investigate the company, Uber actually built a tool – known internally as Greyball – to identify government regulators trying to use its app and then prevented them from getting rides.

That’s one reason lawmakers and regulators have been discussing the difficult, interrelated roles of behavioral science and tech for years. And some companies, Uber in particular, have been investigated for a host of bad business practices, from discrimination to misusing user data.

But most of the manipulation we’ve identified and worry about is not expressly illegal. And because regulators are often unable to keep pace with the ever-evolving use of technology and choice architecture, that’s likely to remain the case.

Given the absence of well-defined and enforceable legal guardrails, platform companies’ propensity to exploit behavioral science at users’ expense will remain largely unchecked.

An ethical code

One solution, in our view, is establishing an ethical code for platform companies to follow. And if they don’t adopt it willingly, investors, employees and users could demand it.

Since the mid-20th century, written codes of ethical conduct have been a staple of U.S. companies. The legal and medical professions have relied on them for millennia. And research suggests they are effective at encouraging ethical behavior at companies.

We reviewed hundreds of ethical codes, including ones targeted at tech and computing companies. Based on our research, we urge digital platforms to adopt five ethical guidelines:

  1. All choice architecture employed on a platform should be fully transparent. Platforms should disclose when they are using the tools of behavioral science to influence user behavior.
  2. Users should be able to make choices on the platform freely and easily, and choice architects should limit behavioral interventions to reminders or prompts that are the least harmful to user autonomy.
  3. Platforms should avoid “nudging” users in ways that exploit unconscious and irrational decision making based on impulse and emotion. New research shows that transparent choice architecture can work just as well.
  4. Platforms should recognize the power they possess and take care not to exploit the markets they’ve created, including by abusing information asymmetries between themselves and users or by opposing reasonable regulations.
  5. Platforms should avoid using choice architecture that discourages users from acting in their own best interests. As Nobel Prize-winning behavioral economist Richard Thaler put it, we should only “nudge for good.”

Big tech and behavioral science are now integrated in ways that are making companies wildly successful, from buzzing toothbrushes that make cleaning your teeth seem rewarding to text messages that nudge low-income mothers to use health care.

While the results can significantly enhance our lives, this integration also makes it easier than ever for companies to manipulate users to boost their bottom lines.

Abbey Stemler is an assistant professor in the Department of Business Law and Ethics at the Kelley School of Business. She is a leading scholar on the sharing economy and her research broadly explores the interesting spaces where law has yet to catch up with technology. In particular, her aim is to expose the evolving realities of Internet-based innovations and find ways to effectively regulate them without hindering their beneficial uses. As she sees it, many modern firms inhabit a world that operates under alien physics—where free is often costly and “smart” is not always better. She therefore employs tools and insights from economics, behavioral science, regulatory theory, and rhetoric to understand how we, as a society, can better protect consumers, privacy, and democracy.  

Professor Stemler is also a practicing attorney, consultant, entrepreneur, and avid traveler.

Joshua E. Perry, J.D., M.T.S., is the co-author of two textbooks exploring issues at the intersection of law and ethics and author or co-author of over thirty published articles, essays, and book chapters that have appeared in a variety of leading law reviews and peer-reviewed journals across the fields of business, medicine, law, and ethics. At the Kelley School of Business, Indiana University Bloomington, he is Faculty Chair of the Undergraduate Program, Glaubinger Chair for Undergraduate Leadership, and Associate Professor in the Department of Business Law & Ethics, where he teaches courses on business ethics, critical thinking, and the legal environment of business to undergraduates and MBA students in both the residential and online programs. He also serves as a Section Editor for the Journal of Business Ethics and a staff editor for the American Business Law Journal.

A graduate of the joint law-divinity program at Vanderbilt University, Professor Perry was previously on faculty at the Center for Biomedical Ethics and Society at Vanderbilt University where he taught bioethics in the medical school and legal ethics in the law school. He is a frequent speaker to local, national, and international audiences on a variety of challenges and issues relevant to leadership, moral discourse and ethical decision-making in business, legal, medical, government and academic environments.

Todd Haugh is an Assistant Professor of Business Law and Ethics at Indiana University’s Kelley School of Business. His scholarship focuses on white collar and corporate crime, business and behavioral ethics, and federal sentencing policy, exploring the decision-making processes of the players most central to the commission and adjudication of economic crime and unethical business conduct. His work has appeared in top law and business journals, including the Northwestern University Law Review, Notre Dame Law Review, Vanderbilt Law Review, and the MIT Sloan Management Review. Prof. Haugh’s expertise in the burgeoning field of behavioral compliance has led to frequent speaking and consulting engagements with major U.S. companies and ethics organizations. He is also regularly quoted in national news publications such as the New York Times, Wall Street Journal, Bloomberg News, and USA Today, as well as various legal, business, and popular blogs.

A graduate of the University of Illinois College of Law and Brown University, Prof. Haugh has extensive professional experience as a white-collar criminal defense attorney, a federal law clerk, and a member of the general counsel’s office of the United States Sentencing Commission. In 2011, he was chosen as one of four Supreme Court Fellows of the Supreme Court of the United States to study the administrative machinery of the federal judiciary.

Prior to joining the Kelley School, where he teaches courses on business ethics, white collar crime, and critical thinking, Prof. Haugh taught criminal procedure and advanced legal writing and advocacy at DePaul University College of Law and Chicago-Kent College of Law. He is a recipient of numerous teaching awards, including a Trustees Teaching Award and multiple Innovative Teaching Awards, and a Jesse Fine Fellowship from the Poynter Center for the Study of Ethics and American Institutions, where he now serves as a board member. In both his scholarship and teaching, Prof. Haugh takes a unique look at how ethics, law, business, and psychology interact in today’s complex world.

Disclosure statement

Abbey Stemler receives funding from The World Bank Group for her research.

Todd Haugh is affiliated with The Poynter Center for the Study of Ethics and American Institutions. 

Joshua E. Perry does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

Note: originally published at theconversation.com; re-published with permission.
