
This article will not change your mind

Imagine you are presented with four cards that have a number on one side and a letter on the other (as in Figure 1). You are also given a rule: if a card shows a vowel on one face, then its opposite face shows an even number. Which card(s) would you turn over to check whether the rule holds? This apparently simple experiment turns out to be tricky, and most subjects are unable to provide the correct answer (which, by the way, is to turn over the cards showing A and 7).

The mistake that most of us commit is to build an incorrect hypothesis-testing architecture and consequently end up with wrong conclusions. For example, you might be induced to turn over only the A card to check whether there is an even number behind it, forgetting that the 7 card is also a potential violation of the rule; even worse, you might also turn over the 4 card, even though this is useless for testing the rule, since it cannot falsify it. This phenomenon was first studied in the 1960s by the psychologist Wason [1], who developed the idea of the so-called confirmation bias, defined as the tendency to (unconsciously) build a hypothesis-testing architecture that favours the confirmation of our prior beliefs [2]. Before diving into an overview of this bias in our daily lives, it is worth saying that we will present only its downsides; there are, of course, situations in which we are not subject to it (for example, tasks that we experience frequently or that are concrete rather than abstract).
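
To make the falsification logic explicit, here is a minimal sketch in Python that checks which cards could possibly break the rule. The faces A, 4 and 7 come from the article; the K card and the helper names (is_vowel, is_even, can_falsify) are assumptions added purely for illustration, with K standing in for the unnamed consonant card in the classic version of the task.

```python
# Minimal sketch of the logic behind the Wason selection task.
# The rule under test: "if a card shows a vowel, its other face shows an even number."

def is_vowel(letter: str) -> bool:
    return letter.upper() in "AEIOU"

def is_even(number: int) -> bool:
    return number % 2 == 0

# Visible faces of the four cards (the hidden face is unknown).
# "K" is an assumed stand-in for the consonant card not named in the article.
visible_faces = ["A", "K", 4, 7]

def can_falsify(face) -> bool:
    """A card can falsify the rule only if its hidden face might
    combine a vowel with an odd number."""
    if isinstance(face, str):
        # A letter card matters only if it shows a vowel:
        # its hidden number could then turn out to be odd.
        return is_vowel(face)
    # A number card matters only if it shows an odd number:
    # its hidden letter could then turn out to be a vowel.
    return not is_even(face)

cards_to_turn = [face for face in visible_faces if can_falsify(face)]
print(cards_to_turn)  # ['A', 7]
```

The point of the sketch is that only a vowel card or an odd-number card can hide a counter-example; turning over K or 4 can only ever confirm the rule, never falsify it.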

Figure 1

The confirmation bias is present in our daily lives and is potentially harmful. The existence of fake news, and most importantly the fact that some people believe it to be reliable, is proof of this. If a fake news story is in accordance with our prior beliefs, we are going to interpret it as true, since it would confirm our ideas [3], even if there is no evidence of its truthfulness. In the internet era, this effect is further strengthened by the way we search for news: online algorithms “suggest” news that “we might be interested in”. In this way, we build a bubble (the so-called filter bubble [4]) and it becomes more and more difficult to compare discordant views on particular topics. This brings us to another important effect, called attitude polarisation. Since people tend to stick to their initial beliefs (and possibly strengthen them), it is easy to see that, in the long term, this phenomenon results in further polarisation of ideas and attitudes (think about how many times you have concluded a discussion with the sentence: “I certainly see your point, but it just does not seem right”). An experiment on the death penalty shows this phenomenon very precisely. Proponents and opponents were presented with the same articles regarding capital punishment (one supportive and one critical). These articles should have influenced the two groups in the same way; instead, the experiment ended up strengthening their initial positions [5].

Consider the potential consequences for consumers: if they are excessively reluctant to change their minds, their behaviour might be strongly affected. Whenever you need to choose between two products, you will find one better than the other (for whatever reason) and opt for it. Rational consumers, once they receive evidence contradicting their beliefs, should update those beliefs and switch to the other product. However, the confirmation bias explains why we instead interpret that piece of evidence as supporting (or at least not too adverse to) our first choice rather than undermining it. It seems that we are unable to judge rationally from an external point of view. Analysing this behaviour at a mass level might give us some insight into the lock-in effect: firms build an important customer base that is never going to switch to a different brand. This is partly due to familiarity and other psychological effects, but the presence of a confirmation bias amplifies it.

Finally, if justice is a concern of yours, you might be interested in knowing what happens to judges when they are subject to confirmation bias. They usually form an idea in the early stages of a trial and then tend to interpret the evidence presented in accordance with that initial idea. Moreover, if they hold preconceptions about a certain issue, they will most likely interpret evidence in a manner consistent with those assumptions [6]. Of course this does not mean that we should not rely on judges, but it is important to underline the general weaknesses of our judging power (inside and outside courts) once we acknowledge our biases. A similar process takes place in job interviews, where the interviewer forms ideas about the candidate in the first seconds of the meeting and tends to confirm them afterwards, ending up with a biased opinion of the interviewee.

Figure 2

By now it should be clear that, because of this bias, prior beliefs are likely to be confirmed. An intriguing question is how we form these beliefs in the first place, which is essential to understand since they will influence the subsequent development of our ideas. Of course, these beliefs are grounded in values, in what we have studied and in many other rational factors. However, the primacy effect also plays an important role. For example, if before meeting a person you are told that he or she is “intelligent, industrious, impulsive, critical, stubborn, envious”, you are going to think that he or she is a much better person than if you were presented with the same adjectives in the reverse order [7]. This is just one example of a framing effect that might influence our first beliefs and consequently the development of a whole idea and line of thought.

To sum up, we have drawn a general picture of the way we form ideas: we get a first impression (which might already be biased) and we update it in a not entirely rational way. This is anything but an appealing scenario, and there is no way to be fully free of this particular bias, because it is grounded in a very deep psychological sphere (think about how you feel when a hypothesis of yours is not confirmed). However, as always, awareness of our cognitive limitations is one step towards minimising the problems created by psychological biases. As a rule of thumb, we should always look for counter-ideas and account for different, even opposite, points of view: this is very important, since ideas have a strong impact on the social world we live in.

Emiliano Sandri

Sources:

[1] https://en.wikipedia.org/wiki/Wason_selection_task

[2] https://en.wikipedia.org/wiki/Confirmation_bias

[3] https://eu.usatoday.com/story/money/columnist/2018/05/15/fake-news-social-media-confirmation-bias-echo-chambers/533857002/

[4] https://en.wikipedia.org/wiki/Filter_bubble

[5] C. G. Lord, L. Ross and M. R. Lepper, Biased Assimilation and Attitude Polarization: the effects of prior theories on subsequently considered evidence (1979), Journal of Personality and Social Psychology

[6] E. Peer and E. Gamliel, Heuristics and Biases in Judicial Decisions (2013), Court Review

[7] https://en.wikipedia.org/wiki/Confirmation_bias#Preference_for_early_information

If you are interested in the “historical development” of the confirmation bias, you might want to consult these articles, which cover it in detail:

https://www.psychologyinaction.org/psychology-in-action-1/2012/10/07/classic-psychology-experiments-wason-selection-task-part-i

https://psychology-inaction.squarespace.com/psychology-in-action-1/2013/10/07/psychology-classics-wason-selection-task-part-ii

Featured Image source: PsycholoGenie
