
Interview with Prof. Peter Ayton

Peter Ayton is a Professor of Psychology, Associate Dean of Research and Deputy Dean of Social Sciences at City University of London. His research interests cover behavioural decision theory, risk, uncertainty, affect and well-being.

In May, he visited Bocconi University as part of a seminar series co-organised by B.BIAS and BELSS (Bocconi Experimental Lab for Social Sciences), and he was kind enough to give us an interview.

BB: A cliché but necessary question: what got you interested in BE?
Peter Ayton: It was a bit of an accident. After graduating in Psychology (which itself was a lucky outcome, as I went to university from school not having much idea what Psychology was), I went on to do a PhD on the psychology of metaphorical language comprehension. At that time, there was almost no research that could explain how people understood metaphors, and I found myself completely intrigued by it. However, due to a lack of opportunities in this field, I applied for a job as a postdoctoral research assistant on a project investigating subjective confidence in forecasts, was introduced to the world of decision research, and have never looked back.

I became a Behavioural Economist the day that people decided that Psychologists who studied decision making could be called Behavioural Economists. In this way, I am a victim (or beneficiary) of a rebranding exercise. The term Behavioural Economics has been around for a long time but gained real momentum after Kahneman’s Nobel prize. I notice lots of my Psychologist colleagues describing themselves as Behavioural Economists and suspect that one reason they do this is because there is no Nobel prize in Psychology. Of course the use of this term also invites Economists to join in with the investigation of those behaviours that are not anticipated by classical Economics – and that is a tremendous benefit to the research. Before this time Economists and Psychologists viewed each other with suspicion. While governments around the world used to be advised by Economists – and no Psychologists at all – now we see both Economists and Behavioural Economists (aka Psychologists) in a position to influence policy.

BB: Could you tell us a little about your areas of research and the work you’ve done?
PA:
After my PhD research on metaphors I did some work on memory retrieval, before working on judgment and decision making. I started out looking into subjective confidence in forecasts and then looked at probability judgment, the “calibration” of uncertainty judgments and decision making under uncertainty. I have also done work on risk perception and some cognitive illusions, e.g. the sunk cost effect and the hot hand fallacy.

More recently I have been studying human well-being, in particular people’s predictions of how happy they would be under certain circumstances, e.g. if they had a chronic illness, or suffered an amputation. These judgements can be compared with the experience of people under these circumstances. The comparison reveals that people appear to mis-predict the likely effects of these conditions on their own well-being. This has some implications for public policy – specifically how we determine how much money should be devoted to medical research or care for people suffering from particular health conditions. If the predictions of people without the conditions are used as a guide, the spending priorities will be different from the case where the evaluations of the people with the conditions are used.

I am also interested in the impact of computerised advice on decision making. Despite society’s increasing dependence on computerised tools which alert people to risks (e.g. cancers on X-ray images, weapons in air passenger luggage, spell checkers), the understanding of their potential harm is very limited. Sometimes decision aids cause decision errors: one example of this we have found is that when a computer alerting tool misses a “target” (e.g. cancer on an X-ray, a bomb in luggage, a spelling error in your dissertation), people can be less likely to spot the unprompted target than they would be if they weren’t using the decision support tool in the first place. This phenomenon, called “automation bias”, occurs when people become dependent on the computerised tool. The dependence goes unnoticed because quite often it is easy to demonstrate that people detect more targets when they use the computer than when they don’t, and unfortunately the aggregate improvement conceals the particular errors. This kind of issue is at the junction between Computer Science and Cognitive Psychology, and I have been collaborating with some Computer Scientists to try to understand how we can improve the influence of computers on people.
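(Note by BB: a toy arithmetic sketch of how that masking works. All the numbers below are invented for illustration – they are not from Prof. Ayton’s studies:)

```python
# Hypothetical detection rates (invented for illustration only).
aid_flags_target = 0.90          # how often the tool flags a real target
p_detect_flagged = 0.95          # reader finds a target the tool has flagged
p_detect_missed_with_aid = 0.40  # reader finds a target the tool missed, while relying on the aid
p_detect_unaided = 0.70          # reader finds a target with no aid at all

# Overall detection rate when using the aid: a weighted average over
# trials where the tool flags the target and trials where it misses.
with_aid = (aid_flags_target * p_detect_flagged
            + (1 - aid_flags_target) * p_detect_missed_with_aid)

print(f"overall detection with aid:    {with_aid:.2f}")          # 0.90 -> the aid looks better overall
print(f"overall detection without aid: {p_detect_unaided:.2f}")  # 0.70
print(f"detection when the aid misses: {p_detect_missed_with_aid:.2f}")  # 0.40 -> worse than unaided
```

With these (made-up) numbers the aid raises overall detection from 70% to about 90%, yet on exactly the trials where the tool stays silent, detection falls from 70% to 40% – the aggregate improvement hides the new errors.)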

BB: Have you ever had a “professional failure” that was a turning point in your career?
PA:
There are some who seriously propose a CV of failures as an endeavour (see this article), and mine would be much more extensive than my CV of successes. It’s unfortunate that failures are buried, because when you are starting out as a student, you tend to look at successful role models and think “How could I be as good as one of these guys?” – but actually they were pretty bad as well; they just don’t tell you.

Most of the things that I started doing, I didn’t finish. We just stopped because we realised we weren’t going anywhere, or it wasn’t interesting anymore. But sometimes those decisions can be rather questionable. I will give you one good example.

I did some research with a student a few years ago about how one can use the compromise effect and the attraction effect in moral reasoning. The attraction effect occurs when you change the relative attractiveness of one option by introducing a new option that is clearly inferior to it. For example, more people prefer a nice pen to $6 if you add the option of a bad pen. The compromise effect is similar: when making a choice between, say, two cameras – a basic cheap one and a more elaborate expensive one – you may favour the cheaper one. But upon the introduction of a third highly advanced but extremely expensive camera, you are likely to change your preference to the one in the middle as a compromise. As for the moral choices, take the trolley problem, where you have a runaway train coming down a track where five people are working. You could press a button to divert the train onto another track and save the five people, but that would kill one person working on the other track. We tried to see if the answers people give to these sorts of problems would be malleable in the same way preferences are – maybe the attractiveness of a moral option would vary if you placed something really bad close to it. But it didn’t “work”; it didn’t change people’s decisions. I remember being disappointed because I wanted to write a paper saying that people’s moral decisions are really manipulable – that is, people like to think they have moral sense, but actually they can be manipulated. I realised only much later that I should have kept on with this, because if I had clearly established that there was no effect of context on moral choices, I could have written a more interesting paper about how context affects consumer preferences but not moral choices.

BB: What would you say is your favourite nudge?
PA:
I’m not sure I have a favourite nudge; I’m a bit suspicious of the idea of identifying behaviours as “nudges”. Many “nudges” referred to even in the Nudge book are actually behavioural phenomena discovered by social psychologists many years ago, long before anyone referred to them as nudges! But one that makes me smile is the one with the stairs and the escalator, where a thin matchstick man points to the stairs and a fat matchstick man points up to the escalator. You need a bit of nerve to get on the escalator after seeing that.

(Note by BB: image of the matchstick-man stairs and escalator nudge)

BB: Is there any finding from behavioural research that surprised you? As in, where you found results contrary to what you expected or to what is accepted as intuitive?
PA:
When I read Joshua Miller’s paper on the hot hand, I was so excited that I couldn’t sleep for about three days.

(Note by BB: The “hot hand” is the belief that a person who has just experienced success in a task, such as making shots in basketball, has a greater probability of success in the upcoming attempts. The hot hand fallacy refers to the finding that such a belief is wrong – for basketball at any rate – and it has been cited as a prominent example of a cognitive illusion by many researchers. However, the paper by Joshua Miller and Adam Sanjurjo shows that the original statistical analyses were biased and that the hot hand may indeed exist, and so there may be no fallacy.)
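(Further note by BB: a minimal simulation sketch of the selection bias Miller and Sanjurjo describe. Assuming independent fair “shots”, the average within-sequence proportion of hits immediately following a hit comes out below 50% – the benchmark the original hot-hand analyses implicitly compared against:)

```python
import random

def hit_rate_after_streak(seq, k=1):
    """Within one sequence, the proportion of hits among attempts that
    immediately follow k consecutive hits (None if no such attempts)."""
    hits = total = 0
    for i in range(k, len(seq)):
        if all(seq[i - j] == 1 for j in range(1, k + 1)):
            total += 1
            hits += seq[i]
    return hits / total if total else None

random.seed(1)
rates = []
for _ in range(100_000):
    shots = [random.randint(0, 1) for _ in range(10)]  # 10 independent fair "shots"
    r = hit_rate_after_streak(shots, k=1)
    if r is not None:          # sequences with no hit-followed attempt are dropped
        rates.append(r)

# Averaging the per-sequence rates gives a value below 0.5, even though
# every shot is an independent fair coin flip: short sequences make the
# conditional proportion a biased estimator.
print(sum(rates) / len(rates))
```

So finding that shooters hit “only” 50% after a hit, in data analysed this way, is actually evidence for a hot hand, not against one.)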

This development is quite fantastic because the hot hand fallacy has been around since 1985, when it was originally reported by a group including Tom Gilovich and Amos Tversky (and, in decision research, you don’t get any higher than that – they are royalty). Famously, basketball coaches reacted by saying: “It’s all rubbish, I know that there is a hot-hand effect.” Some academics too have crashed and burned while trying to contest this phenomenon. Until I understood the Miller and Sanjurjo paper, I was quite certain that the case was rock solid. People have found that there are sequential dependencies in other areas, even in other sports. However, the case for a hot hand fallacy in basketball has been scrutinised so much that it’s truly astonishing that somebody’s come up with such a game-changing analysis of the statistics. I got into trouble a few years ago, when I gave a talk called “The cognitive illusion illusion”, which somewhat audaciously argued that while there are cognitive illusions, they are mainly suffered by Cognitive Psychologists who think that their subjects suffer from cognitive illusions, when they don’t. Feeling rather pleased with myself, I had the nerve to give this talk at Princeton University with Daniel Kahneman in the room. He made it very clear he wasn’t very impressed with my argument, which admittedly was a little overstated. If only Josh and Adam had got their paper out earlier, I might have been spared the admonishment from Kahneman!

The discovery of cognitive illusions is of particular interest for the agenda of business schools. The idea that there is a problem with the way people think is popular for two reasons. Firstly, people need to learn how to run businesses rationally – you don’t want business personnel making mistakes. But also, and more disturbingly, maybe you could identify the irrationalities of your competitors or of consumers and exploit their vulnerability.

I find it very exciting that maybe people have more competence than has been assumed, do know what they’re doing after all, and that perhaps some cognitive illusions have been slightly overplayed or misinterpreted. Take, for instance, the well-known sunk cost fallacy: while there is an enormous amount of evidence that humans commit the fallacy, several studies have demonstrated that animals appear not to be susceptible to it. There is evidence that animals do violate some rational principles – for example, bees’ preferences for flowers violate transitivity – but animals live in a tough world, and if they behave in a markedly irrational way, evolutionary pressures will probably pick them off. So why, especially if animals don’t, do humans commit the sunk cost fallacy? Aristotle is remembered for claiming that what distinguishes humans from other animals is rationality. That may be true, but perhaps he got it the wrong way around!

By bbiasblog

The official blog of B.BIAS – Bocconi Behavioural Insights Association of Students
