Dan Ariely Interview — A Primer on Behavioral Economics
By TBS Staff
Dan Ariely is one of the most insightful researchers in the emerging field of behavioral economics. Sidestepping the conventional wisdom of standard economics about what humans, conceived as rational utility maximizers, ought to do, Ariely employs ingenious social psychological experiments to uncover what humans actually do. He finds that human behavior departs from standard economic theory in systematic and predictable ways, a finding he explores at length in several popular-level books (see below).
An incredibly prolific researcher, Ariely is the James B. Duke Professor of Behavioral Economics at the Fuqua School of Business at Duke University. Ariely was born in New York City but spent much of his youth in Israel, growing up in the coastal city of Ramat Hasharon. During his senior year in high school, he was involved in an explosion and received third-degree burns over 70 percent of his body. It was during this time, as he was recovering in the hospital, that he first began to consider some of the principles behind behavioral economics. After his ordeal, he went on to research ways in which treatment could be delivered to patients that would be less painful and improve their overall recovery and healing experience. (His article "Painful Lessons" chronicles his ordeal.)
Ariely founded the Center for Advanced Hindsight in 2007. The center works to develop "behavioral interventions that help people be happier, healthier and wealthier." The team of postdoctoral fellows and research assistants at the center works to further academic research in the field of behavioral economics. To facilitate this research, the Center runs The Start Up Lab, which partners with young companies to develop solutions in the areas of health and finance that specifically improve consumers' well-being and financial outcomes. The Center also runs the Common Cents Lab, supported by the MetLife Foundation, which works to improve the financial well-being of low- and middle-income Americans.
Ariely also co-founded BEworks with Nina Mazar in 2010. The BEworks team helps Global 1000 business leaders who are experiencing problems in their operations or marketing strategies. Ariely and his colleagues use behavioral economics to recommend and implement solutions to those problems.
Ever prolific, Ariely is also a frequent TED speaker and the author of three highly respected popular-level books on behavioral economics: Predictably Irrational, The Upside of Irrationality, and The Honest Truth about Dishonesty. The latter book was also turned into a documentary: The (Dis)Honesty Project.
Ariely earned a bachelor's degree in psychology from Tel Aviv University, and his master's and doctorate in cognitive psychology from the University of North Carolina at Chapel Hill. He then went on to complete a second doctorate in business administration from Duke University. He has used his combination of research in psychology and business as the foundation for many of the principles he employs in behavioral economics.
SuperScholar, which conducted an interview with Ariely in 2011, has graciously allowed TheBestSchools.org to reprint it here. In commenting on this interview, Prof. Ariely remarked, "These are some of the best questions I ever got—thanks."
Dan Ariely Interview
Professor Ariely, you are one of the leading lights in the recently emerging field known as behavioral economics. Can you, in plain English, explain what behavioral economics is?
Yes, the best way to think about behavioral economics is in contrast to standard economics. In standard economics, we think — we assume — that people are perfectly rational, which means that they always behave in the best way for them. They can compute everything, they can calculate everything and they can make, always, consistently, the right decisions. In contrast, behavioral economics doesn't assume much about people. Instead of starting from the idea that people are perfectly rational, we say we just don't know, but let's check it out. So, what we do is we put people in different situations to check how they actually make decisions. And what we find in those experiments is that people often don't behave as you would expect from a perfectly rational perspective. So, in essence, it's an empirical and non-idealistic way to start looking at human behavior. And because we find that people behave differently than expected, often irrationally, it also leads often to different conclusions about how companies should be created, what the government should do, and, of course, what individuals should be doing.
How did you get interested in this field of study?
My starting point came from a personal experience. Many years ago, I was badly injured in an explosion, and I spent a long time in the hospital afterward. Hospitals are places where you can observe lots and lots of irrational behaviors. In my particular case, the one that was most difficult for me was the question of bandage removal. Imagine that you are a burn patient and the nurses have to take the bandages off you. What's the right strategy? Do you rip the bandages off slowly — it takes a long time, but each second is not as painful — or do you rip them off quickly — fewer seconds, but each second is much more painful? Which of those is the right approach?
The nurses in my department believed in the ripping approach. They felt this was the one that would lead to the least amount of pain, and I would often argue and plead with them to try something different. They told me that they were correct, that they knew what they were doing, and that their approach would cause me the least overall amount of pain. When I got out of the hospital, many years later, I tried it out. I created some experiments to try out whether this is correct. I brought people to the lab. I gave them pain in different profiles and intensities over time. I checked what would get people to have more or less pain. And what I found was that the nurses were actually wrong. In fact, when you look at painful experiences over time, the duration of the pain doesn't matter as much, but the intensity of the pain matters a great deal.
And from that point on I started looking at all kinds of behaviors in which we had a sense of what's rational. We think we know what is the right thing to do: what's good for ourselves, what's good for our clients, what's good for our patients. But we actually get things wrong, and we get things wrong in a systematic and predictable way. And that's basically what my research has been about.
What has been the reaction of conventional economists to behavioral economics?
They don't really like this stuff too much. For a long time, their line of defense was that these are just small decisions made by little people. They would say, “There's nothing you can say about the data — the data is the data. Oh yes, we can see how this happened, but these are just regular people making regular decisions. If you only took this and gave it to professionals making big, important decisions with a lot of money, all the mistakes would go away.” That has been kind of the defense. But since the financial recession, it's much harder to say this. Even so, the Chicago economists still say, “Oh, this will never happen in the marketplaces.” Most people, though, now recognize that these behaviors, though irrational, not only happen but happen commonly.
What is the history of behavioral economics? How did the field get started?
Actually, if you think about it, economics had its history very much in behavior — just look at The Wealth of Nations or The Theory of Moral Sentiments by Adam Smith. These are writings with great sensitivity to human motivation and desire, going well beyond cold calculated facts. So economics historically was much more connected to psychology, but then, as economics became more and more mathematical, mostly for reasons of tractability, it started assuming perfect rationality. And in the beginning, it was just so convenient. People said, “Oh, it's just an 'as if' — if people were like this, let's compute what would happen.” The notion was that this is not really how people are, but we can pretend that they are, and maybe we can get some insight into some ideas. It was just for mathematical tractability. Over time, I think, people forgot that it was just for mathematical tractability, and they started treating it as if it were real. Behavioral economics, in some sense, is trying to reverse this trend and say, let's go back and think about a broader set of human motivators.
In the early 1970s, Daniel Kahneman and Amos Tversky showed that our intuitive understanding of probability often departs systematically from what the mathematical theory of probability and statistics dictates.
Is their work the start of behavioral economics, or do you place it elsewhere?
Again, in a broader historical perspective, economics started more behaviorally. But the modern version of behavioral economics definitely started, more or less, with Kahneman and Tversky.
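The kind of systematic deviation Kahneman and Tversky documented can be made concrete with their well-known "taxicab problem," in which intuition neglects base rates. The short sketch below is an illustration added here, not part of the interview; the numbers are the standard textbook version of the problem.

```python
# Tversky and Kahneman's "taxicab problem": a cab was involved in a hit-and-run.
# 85% of the city's cabs are Green and 15% are Blue. A witness identifies the
# cab as Blue, and witnesses are right 80% of the time. Intuition says the cab
# is probably Blue; Bayes' rule says otherwise, because the base rate dominates.

def posterior_blue(p_blue=0.15, witness_accuracy=0.80):
    """P(cab is Blue | witness says Blue), by Bayes' rule."""
    p_green = 1 - p_blue
    # P(witness says Blue) = true identifications + false identifications
    p_says_blue = witness_accuracy * p_blue + (1 - witness_accuracy) * p_green
    return (witness_accuracy * p_blue) / p_says_blue

print(round(posterior_blue(), 2))  # 0.41 — most people intuitively guess near 0.80
```

This is the "paper and pencil" correction in miniature: the intuitive answer tracks the witness's accuracy, while the explicit calculation weights it by how rare Blue cabs are in the first place.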
Would you say that is the defining moment, the locus classicus of the field, as we know it today?
For most people one Ph.D. is plenty. You obtained two doctorates, the first in cognitive psychology, the second in business. How did you decide on this academic path?
The reason I got my second Ph.D. was very much due to Danny Kahneman, who, when I was getting my first Ph.D. in psychology, encouraged me strongly to get a Ph.D. in business. His reason was that he saw the field progressing more in business schools than in traditional psychology departments.
How has possessing two doctorates benefited your research in behavioral economics?
I think of behavioral economics as a mixture of psychology and economics. I got some psychology when I earned the Ph.D. in cognitive psychology and some economics when I earned my degree in business, so it's been a good mix. I also got to spend more time in school, right? I spent six and a half years getting two Ph.D.s — more than most people spend on one — so it just gave me more time to take classes.
Which graduate programs would put one in a position to do cutting edge research in behavioral economics?
My favorite researcher in this field is George Loewenstein at Carnegie Mellon. I think there's no training program in behavioral economics right now. It's about mentorship and working with someone. And I think George is probably the most creative, thoughtful, innovative researcher in the field. So, if it were me, I would go and study with him. I'll say one more thing. The Harvard economics program has lots of people in the field. They probably have a very high concentration of people who do behavioral economics, so from the perspective of just being exposed to people, that program is very, very good as well.
In your research, you sometimes correlate subjects' behaviors with their neural activity, using brains scans such as fMRIs. How much of behavioral economics research uses brain scans?
A very small part right now. I think the field in general is trying to figure out what these neural techniques would actually teach us. And some people believe in it more, some people believe in it less. But the field is just trying to figure out what to do with this new kind of data.
Does such research fall under what is now being called neural economics?
Yeah, that's the current title. People who call it neural economics want to bypass psychology. But, I think in general … in principle, it could be a very fruitful direction.
Would you say then that the difference between neural economics and behavioral economics is that behavioral economics still looks primarily to psychology?
Behavioral economics — you know, there are quite a few different versions of it — but basically it's a method. Behavioral economics is mostly experiments and field observation: you observe what people do. In neural economics, you try to observe what happens in people's brains. And there are all kinds of restrictions, because you can't take somebody to a restaurant and observe what's going on in their head. So it deals with some of the same phenomena, but at a different level of measurement.
In your research, you found that people often behave in ways that are completely at odds with how they would behave if they were rational utility maximizers. What are some of the most striking examples in your experience of how people “misbehave” in this way?
I think the most common example has to do with the effect of emotion on our behavior. Anybody who's ever gone to a restaurant saying they will stick to their diet, and then changed their mind when the waiter arrived with a chocolate soufflé, has experienced it. And it's not just in restaurants — it's about exercising and dieting, it's about saving money; all of those behaviors are irrational. Another very timely example is texting while driving, which many people do. Some are afraid to admit to doing it, but lots of people do. That's a truly devastating type of behavior.
Your book Predictably Irrational and its sequel The Upside of Irrationality are both enormously entertaining and informative. How does the one book build upon the other?
So, the two books have slightly different focuses. The first one is mostly about the psychology of money: the mistakes people make when they think and make decisions about money. How do we shop? What happens when something is free? How do we fall prey to our habits? That's the focus of Predictably Irrational. The Upside of Irrationality takes a kind of side view of this and says, let's examine not how we behave in terms of money, but two other classes of behaviors: how we behave in the workplace, and how we behave in our more personal lives. The workplace part involves, for example, what happens when people get big bonuses — how do big bonuses change their performance — and how we take pride and credit in what we've created. Sometimes we take too much pride and credit.
And then the part about personal life has to do with how we adapt, how we get used to things. In many of the examples, I use my own personal experiences as a starting point. So, if somebody has a serious injury, I ask: How do you get over it? How do you adjust to your new life? You know, before I was injured, I had some ideas about who I was, what my capacities were, who would date me, and what kind of life I would have. All of that went by the wayside when I got injured. All of a sudden I had to develop a new idea of who I might be and how I might function — and how does this process happen? So, this is an example of a topic that is much more personal. Although not everybody experiences this kind of tragedy, the principles seem to be quite general.
With the incredibly active research schedule that you keep, how did you find time to write these books and why did you write them?
Actually, when I started, I was hoping to write a cookbook. I had an idea for a cookbook called Dining Without Crumbs: The Art of Eating Over the Sink. It was going to look at life through the behavioral economics perspective, but through the lens of the kitchen. Nobody wanted to publish this book. I tried one publisher after another, and eventually somebody said that if I wanted to publish my cookbook, I first needed to write something people would read — a book about my research. I said, okay, if I have to, I have to. That's when I started writing, and that was the motivation for it. But of course, once I started, I discovered how much fun it was to write for a general audience about my research. So I loved it and kept on doing it. I really enjoy this type of writing, but I try to restrict myself to writing like this only on weekends, because otherwise, I feel, it would take over.
How have these books been received by colleagues in your field?
Mostly positively, I think. You know, everybody has their own complaints and their own particular version of it, but mostly I've been getting good responses from people.
In Predictably Irrational, you write “If I were to distill one main lesson from the research described in this book, it is that we are pawns in a game whose forces we largely fail to comprehend. We usually think of ourselves as sitting in the driver's seat, with ultimate control over the decisions we make and the direction our life takes; but, alas, this perception has more to do with our desires — with how we want to view ourselves — than with reality.” Do you think this claim might need to be nuanced? Your research certainly does a wonderful job showing how humans depart from rationality — and systematically so. And yet doesn't ascribing irrationality presume a known standard of rationality that is being violated?
Yeah, that's actually a very important issue: not for every behavior do we know what the standard is. When we do know the standard, we can examine whether people are rational or not, but, as you said, it's not always possible. When we don't know the right standard, the question is left unanswered. Take something simple, like what color you prefer. I don't know what to say if you like red or green — there's no rational standard for that. Or, in a more extreme example, imagine that you enjoy taking risks. It's very hard to tell you, in an absolute way, whether this is the right or wrong way to behave in your life. So there's a whole domain of human behavior in which we don't yet have the tools to figure out whether a particular behavior is rational or irrational. We just don't know how to do it.
So in some instances we don't know the standard.
And yet in some instances we do know the standard. Take Kahneman and Tversky again, who showed that humans, when not looking explicitly to statistical theory, deviate from it in predictable ways. And yet, by taking pencil and paper in hand and consciously applying statistical theory, rationality can be restored. Could you comment on how your research might actually invite a renewed, albeit moderated, rationalism that acknowledges our systematic biases and irrationalities and, as much as possible, corrects them?
I think that's exactly our goal. The goal of the research is not to make fun of people, not to say, Hey, you're wrong here or you're wrong there. The goal is to systematically take the kind of things that people might be doing wrong and think about how we might fix them. So, for example, people often, as we said, eat too much for lunch. They go to lunch and they eat all these very fattening things. We've tried, in the U.S. and particularly in New York, to test this by telling people how many calories each particular piece of food has. It turns out, empirically, that people don't eat less because of this information. The results of the calorie experiments in New York are not encouraging.
Okay, so that's on the sad side, right? It seems the problem with overeating is not a barrier of information. We've therefore tried some other things. We asked, what if we tell people when they come to the fast food place, Hey, do you know that if we give you half a portion of fries you could save 250 calories? Would you like that? It turns out about half the people like it and are willing to take it. Now, what you've basically created is a mechanism that allows people to realize — a bit like the paper and pencil example you gave — that they might fail, which then gets them not to fail to the same degree. So we're constantly thinking about market mechanisms that would make people aware of what they're doing and hopefully change their behavior.
Some videos of you appear on YouTube. In one that's received quite a few views, you describe how people behave very differently when they operate on the basis of financial as opposed to social norms. You described this difference also at length in your books. Are there practical ways this insight could be implemented in formulating public and corporate policies? And if so, how?
Yes, it's actually a very important point that often escapes people. The notion of people as just wanting money, that this is the only real motivator that we have for our lives, I think it's overly simplistic and, frankly, a little offensive to the human spirit. I think that if you understood that people have a range of motivation — that includes money but also includes other things — that would provide a good starting point. Think about what else we might want to create, what other behaviors we might want to create. That I think is the first realization: human behaviors have a whole range of motivation and we could care about lots and lots of things. And then, where could this apply?
For example, how would you create a different financial market if you understood the whole range of human motivation? What would you pay people to do, and how would you pay them? Would you create a system in which people care only about money and we tell them that money is the only thing? What would you do in terms of benefits for your employees? How would you truly motivate them? It also says something about how we think about taxes. Do we want to think of them as a financial exchange between a person and the society they live in — the government — or as part of a larger social norm, a social commitment they are part of? The reason the second book is called The Upside of Irrationality is that the evidence shows people have a tremendous capacity to care about each other and about the society they live in. If we could only tap into that, I think we could get much better outcomes.
In their book Nudge, behavioral economists Richard Thaler and Cass Sunstein propose what they call “libertarian paternalism,” which essentially argues that policy makers should channel our predictable irrationalities for our own benefit. An example they give is arranging food in school cafeterias so that students will tend to choose the healthier foods. All the foods would still be there and students could eat as they always had, but by understanding our behavioral predispositions (predictable irrationalities), Thaler and Sunstein argue that policy makers (what they call “choice architects”) can act for our benefit, getting us unconsciously to choose healthier foods. This sounds innocuous, but should such policy makers be required to make full disclosure, as in, “Behavioral economists have arranged the present situation so that you will tend to make decisions in ways that benefit you where we define benefit thus-and-so.” The broader question is, Should behavioral economists, when they propose to improve our lives, be obligated to tell us what they are really doing? Is there any chance that behavioral economists may unconsciously resurrect B. F. Skinner's utopian vision of society as outlined in his book Beyond Freedom and Dignity?
Multiple good questions. First of all, I think that Thaler and Sunstein's notions are very important, but I don't think they encompass the full range of what behavioral economics can do. Sure, you can rearrange things in a cafeteria and people will behave slightly differently, but there are cases where, no matter how you rearrange the environment, people are not going to make the right choice.
Such as the calorie experiment in New York?
For example. But also think about something like drinking and driving. People know it's not a good thing to do, but nevertheless we need a very strict rule about it; it's not enough to make a suggestion. Or take saving for retirement: it turns out people really need help saving for retirement — it's not something people naturally know how to do. Not to mention deciding who you want to operate on you in a very delicate medical situation, right? In all of those decisions, nudging people can help on the margins in some cases, but it's not going to be the full solution. I like what they're doing, but the question you're raising is actually much broader and more difficult in cases that are not just nudges but purely paternalistic.
For example, I prohibit people from texting and driving. Or, I prohibit people from eating too much trans fat. Those cases are very difficult. And now the question is, if people know that somebody is rearranging their environment to trick them, will they be resentful? I think the answer is, most likely, yes. Under those conditions, should we hide the designer of the policies? I think not. So, I think that people have a right to know about who is organizing their environment and for what purpose. Companies can do it any way they want, but when the government is doing it, I think the government needs to be upfront about it. And you know people do get upset, and that's perfectly fine. It's part of the story. I think it's okay to redesign people's environment to drive different behaviors. I don't like the idea that we will hide it from people.
In The Upside of Irrationality, you describe the power of apologies to restore broken relationships. An apology can be a catalyst for forgiveness. And yet, in many cases, apologies are never offered. The person doing the wrong refuses to admit it or make amends. Does your research offer any insight into granting forgiveness in the absence of apologies?
We found that time heals a bit even without an apology. Just the passage of time seems to help a little bit, but we haven't found the remedy that's as good as an apology.
Those who don't offer apologies may not deserve forgiveness, but when a wrong is done and unapologized for, the victim will often bear the resentment and pain from the offense. Unforgiveness can enslave the victim to the perpetrator even after the offense is long past. How can people get beyond such resentment and pain in the absence of apologies?
We have some data suggesting that part of the issue has to do with loss of control. When somebody has mistreated you, you feel that your control over the world has gone down and, to the extent that you can regain your sense of control, that can help a little bit. Again, not easy to do.
Much of your work is highly statistical. You look at samples from various experimental conditions and infer that one group, say, is being more dishonest than a control group when it claims to score, on average, higher on an exam than those whose examination answers could be verified (i.e., the control group). No doubt, your statistical techniques do identify real effects — as groups, one is acting more honestly than another. But that raises the question, Are there individuals — outliers or anomalies, we might call them — who behave irrationally according to conventional economics but also unpredictably?
The question of individual differences is very important in many ways. Are some individuals more rational than others? Are some individuals irrational in different ways? So far, in every experiment we run, not everybody behaves exactly the same way. But what we haven't been able to do is find ways in which the same individuals behave consistently across different experiments. For example, we haven't found the über-rational people; we haven't found people who violate our predictions in similar ways, but differently from other people, across experiments. Partly it's hard to do, and we haven't done a lot of it, but so far, when we've searched for these people, we haven't been successful. That doesn't mean it's the end of the road, though — this is clearly an important direction.
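The group-level comparisons discussed here are typically made with standard significance tests. As a rough illustration only — this is not Ariely's actual procedure, and the scores below are invented — a simple permutation test can show whether one group's average self-reported score exceeds another's by more than random relabeling of subjects would produce:

```python
import random

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """One-sided permutation test: how often does a random relabeling of the
    pooled scores produce a mean difference at least as large as observed?"""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = group_a + group_b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a = pooled[:len(group_a)]
        perm_b = pooled[len(group_a):]
        diff = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
        if diff >= observed:
            count += 1
    return observed, count / n_iter

# Hypothetical self-reported scores: a condition where answers are destroyed
# (so over-reporting is possible) vs. a control group whose answers are verified.
unverified = [7, 8, 6, 9, 7, 8, 7, 9, 8, 7]
verified   = [5, 6, 4, 7, 5, 6, 5, 6, 5, 6]
obs, p_value = permutation_test(unverified, verified)
```

A small p-value says the groups differ on average — which, as the question notes, identifies a real effect at the group level while saying nothing about which individuals produced it.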
If you think, for instance, of the Stanley Milgram experiments on obedience to authority from the 1960s, he showed that a vast majority of experimental subjects, when subjected to certain authoritarian modes of control, would act in ways that, they believed, inflicted unimaginable pain and even death on others. At the same time, Milgram found that about 10 to 15 percent of subjects refused to be coerced into violating their conscience. Have you found anything similar in your research?
You know what, we find it in each particular situation — some people behave rationally. But the question from the Milgram experiment is what would happen if we repeated the same experiment a month later. Would it be the same 15 percent of people who refused, or a different 15 percent? The fact that some people behave well — or don't behave irrationally, or don't succumb to pressure, or whatever it is — doesn't necessarily mean that the same people would do it over and over. And we haven't found the people who resist repeatedly. But Milgram didn't find them either.
In one of your apology studies, you tempt subjects to act deceitfully by having an experimenter behave rudely, without apologizing. Moreover, you then find that such subjects exhibit a greater tendency to cheat this experimenter out of money. Yet do any of these subjects, even when treated rudely and not given apologies, still refuse to cheat?
Some people didn't. Funny, I hadn't thought about the Milgram comparison, but about 15 percent did not cheat. Again, though, we don't know if it's the same people who would refuse to cheat every time — for example, were these people who simply had a particularly good morning, or something else very specific going on? We also don't know whether they would have cheated in another situation. In each experiment, we find people who behave rationally, but we don't yet know what the causes are.
Did the 15 percent give their reason for not cheating?
This is something we don't do. One of the principles of our research program in general is that we don't believe people can actually tell you the reason they are acting a certain way, so we usually don't even ask them. The other thing is that, particularly with cheating, we try not to ask people. We try not to tempt people to cheat or steal or act dishonestly and then come to them and say, “Why did you do it?” That's something the human studies committee doesn't let us do.
Might a book titled Unpredictably Irrational, which focuses on such outliers, have merit in your eyes?
Down the road, I think so. But I think at this point we should focus on the similarity between humans for two reasons. One is that the similarity is larger than the differences, so we should focus on the big effect. The other thing is once we find the similarity, we can think about what to do in terms of policy and help and mechanism and so on. But as the next step, I think it would be very useful to move to a situation where we look at the outliers, where we look at the people who have not been part of the masses that are behaving the same way.
In one of your experiments examining honesty, you prime subjects with the Ten Commandments, thus getting them to think in moral terms. This, you find, when compared to a control group, leads to greater honesty. In discussing this experiment, you raise the question of whether it might not be a bad idea to teach the Bible in schools in order to improve trust in society, which you see as essential to its smooth functioning. In raising this possibility, and leaving aside First Amendment concerns, do you see such moral prompting purely in pragmatic terms — as in, we need to get people acting better and, if this works, let's use it — or do you see any deeper truth underlying it — as in, there's a moral structure to the universe, possibly given by a God who really exists, and we need to pay attention? In short, would you be using the Ten Commandments as a Noble Lie or as Truth with a capital T?
I haven't thought about this question before. The pragmatic part of me is just saying, It works, let's use it. And by the way, I'm not restricting myself to the Ten Commandments. I'm happy to use any kind of symbol to get people to be more honest. For example, if doctors wearing stethoscopes become more honest, that's great. Some engineer told me that when he has an engineering ring from his school on his hand, and he's using his hand to measure something, it reminds him about the moral standards of engineering. So I'm happy to use whatever it is. As for the question whether there is any kind of moral nature to the universe, I'm not sure yet. But I don't feel that we need to wait until we figure this out before we can use these tools in order to decrease immoral behavior.
Some of your research focuses on the placebo effect. Here you point out that expectations can have a powerful effect on people, which can be manifested even in their health and physical well-being. In this light, would you comment on the following recent study on placebos (reported in The Week):
A new take on placebos. The “placebo effect” — pills with no active ingredients causing patients' conditions to improve — has always been thought to rely on subjects thinking they're getting the real thing. But that belief may be misplaced, a Harvard Medical School study suggests. Researchers recruited 80 people with irritable bowel syndrome and told them that they'd receive either no pills or placebos as part of a study into a “novel mind-body” therapy. Some of the subjects were instructed to take pills twice a day from bottles labeled “placebo pills” and were repeatedly reminded that the pills were inactive. “They were told so many times, they had it coming out their ears,” lead author Ted Kaptchuk tells ScienceMag.org. Yet after three weeks, 59 percent of people who took the placebos said their symptoms had improved — far more than the 35 percent who'd taken nothing. This suggests, Kaptchuk says, that the body's own healing mechanisms can be triggered by simple attention from another person; placebos serve as an acknowledgment that a person is sick and wants to be well. Beyond “mere positive thinking, there may be significant benefit to the very performance of medical ritual,” he says. “My personal hypothesis is this would not happen without a positive doctor-patient relationship.”
How does your own research make sense of such findings in which the placebo's role, far from being hidden, is even made explicit?
One of the things we found is that the effects of placebos were not about deliberative thinking; they were about implicit associations being raised. So, if somebody gave you a medication and it was at a steep discount, even if you didn't pay for it, the idea of a discount brought about notions of low quality that made you believe less in the medication and so on. It's all a question of what inferences you're making. So I think that for people who have bad opinions about placebos, if you tell them, Hey, these are placebos, this would decrease their expectations and get them to experience something worse. But I think that more and more people now are actually kind of enamored of and have a positive attitude toward placebos. In that case, mentioning to people that these are placebos would not necessarily make the effect worse. But again, the real issue is all about automatic expectations and what gets people to start thinking one way or another.
If someone wanted to make a career in behavioral economics, what undergraduate majors would best prepare them for it? In your view, what are the top ten or so graduate programs in behavioral economics?
I think you want to do a mixture of psychology and economics, and you want also to make sure you have enough background in experimental method and statistics. And finally, it's one of those areas where you need hands on experience. So it's important to actually find a lab and start working with somebody on research. In terms of good programs, they would be Carnegie Mellon, Harvard, Cornell, Stanford, MIT, Yale has some very good people, NYU … and I think that would be my list.
You have been an incredibly fruitful researcher examining a wide variety of human behaviors. No doubt you will continue to find plenty of interesting problems to work on. Are there any broader themes that tie together your research?
No, I don't really have a theme for my research. I'm very motivated by things that happen to me around the world. So it's about what's happening out there and where I think we could have an influence. For example, these days I think the questions of income inequality, taxation, and how we get out of this national debt are very important. I think we'll start adding those. Healthcare is a big important issue that has a lot of implications. I think we are going to study this. It's not so much a theme from a theoretical perspective, but I try to look at problems that I think are important for society, using social science as a tool to make some progress on them.
The next question was going to be whether there was some one thing, some unifying thread, you were trying to understand when you look at such diverse phenomena as the power of decoys in marketing, the effect of apologies in restoring relationships, and the clash between financial and social norms, but it seems you have answered that question.
I think the answer is that I would be very happy if many years from now, one answer would emerge, one kind of theme, or underlying mechanism. But I'm not holding my breath for this, and for the time being I'm trying just to make progress in a pragmatic way.
What do you see as the future of behavioral economics?
I think its role is basically applied social science. The world presents human beings with lots of challenges, and as we invent more technology, sometimes these challenges just become bigger and bigger. And again, think about texting while driving. It wasn't really an issue twenty years ago. As we create these challenges, we need to think carefully about what we are doing to ourselves — how do we build better technology and how do we create situations that don't endanger ourselves. I think social science is basically the way to figure this out.
Will behavioral economics revolutionize economics as a whole?
I don't think so. Economics is a beautiful, wonderful study and people are making lots of interesting progress. In the same way that sociologists are not perfectly right, anthropologists are not perfectly right, and psychologists are not perfectly right, economists are not perfectly right. But this doesn't mean that they don't have the right to study whatever they are studying in their own way and gain better understandings from their perspective. The danger is not in studying economics. The danger is in overusing economics to inform our decisions. What I'm hoping is that we'll keep on studying economics but not overstretch its usefulness.
Will any aspects of conventional economics, conceived in terms of rational utility maximization, remain immune to behavioral economics?
Yes. It's outside of the picture of behavioral economics.
What do you see as the broader impact of behavioral economics on society and culture in the coming years?
My hope is that behavioral economics will have a direct influence on society. For example, imagine that the government is considering a new health plan, or considering a new way to regulate banking behavior, or how to handle risk, or what kind of protection consumers need in their credit cards and mortgages. What I'm hoping is that instead of just relying on standard economics for advice, they would broaden their perspective and include people from other parts of social science, including psychologists and behavioral economists, maybe sociologists. Basically what I'm saying is that before we go ahead and implement these policies, let's have the modesty to realize how little we know. Let's consult multiple perspectives, and ideally let's do some experiments to figure out what really works. One of the lessons for me, from behavioral economics, is how little we know and how often we're wrong. And if this is the case, then we just need to be a bit more modest and we need to test things more explicitly.