“Above all, don’t lie to yourself. The man who lies to himself and listens to his own lie comes to a point where he cannot distinguish the truth within him, or around him…” (Fyodor Dostoevsky, The Brothers Karamazov)
We are constantly exposed to examples of dishonest behavior: we read about it in the news or experience it directly in everyday life. Philosophers such as Hobbes and Smith, as well as the standard economic model of rational human behavior (homo economicus), hold that people act dishonestly when, after weighing the beneficial and detrimental effects of a dishonest act, they expect the gains to outweigh the losses. This external cost-benefit view is central not only to economic models but also to the theory of crime and punishment. In general, this standard perspective identifies three key drivers expected to increase the frequency and magnitude of dishonesty: higher external rewards, a lower probability of being caught, and more lenient punishments.
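The cost-benefit view above can be sketched as a simple expected-value rule: a rational agent cheats whenever the expected gain exceeds the expected loss. The function names and all numbers below are illustrative assumptions, not estimates from the literature.

```python
# Minimal sketch of the standard (homo economicus) cost-benefit model of dishonesty.
# All parameter values are hypothetical, chosen only to illustrate the three drivers.

def expected_payoff_of_cheating(reward, p_caught, punishment):
    """Expected value of a dishonest act: gain if undetected, loss if caught."""
    return (1 - p_caught) * reward - p_caught * punishment

def would_cheat(reward, p_caught, punishment):
    """The model predicts dishonesty whenever the expected payoff is positive."""
    return expected_payoff_of_cheating(reward, p_caught, punishment) > 0

# High reward, low detection probability, mild punishment -> cheating predicted.
print(would_cheat(reward=100, p_caught=0.1, punishment=50))   # True

# Same reward, but high detection probability and harsh punishment -> honesty predicted.
print(would_cheat(reward=100, p_caught=0.9, punishment=500))  # False
```

On this account, raising the reward, lowering the detection probability, or softening the punishment each pushes the expected payoff upward, which is exactly why the standard perspective predicts more dishonesty under those conditions.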
However, the reasons why people cheat, and the magnitude of their cheating in real life, do not seem to derive from a mere cost-benefit analysis. We have to consider another set of important inputs to the decision-making process: those related to internal rewards. Psychologists have shown that people internalize social norms and values and use them as an internal benchmark against which to compare their own behavior. Compliance with this internal value system provides positive rewards, whereas noncompliance leads to negative ones. Evidence for such a mechanism comes from neuroimaging studies, which reveal that acts based on social norms, such as social cooperation, activate the same primary reward centers in the brain as external benefits like favorite foods or monetary gains. In particular, it has been shown that people typically value honesty: they like thinking of themselves as honest individuals and want to maintain this aspect of their self-concept. If people fail to comply with their internal standards for honesty, they must negatively update their self-concept, which is aversive. Consequently, in order to keep a positive self-view, we would expect people to comply with their internal standards even when doing so requires effort or the sacrifice of financial gains. From this perspective, people are often torn between two competing motivations: gaining from cheating and maintaining a positive self-view. The solution to this motivational dilemma consists in adaptively finding a balance between the two forces.
A set of experiments performed by the behavioral scientist Dan Ariely and his colleagues produced some interesting results. They found support for a theory of self-concept maintenance by demonstrating that when people have the opportunity to cheat, they do so, but the magnitude of dishonesty per person is low relative to the maximum possible amount. They also found that, in general, people are insensitive to the expected external costs and benefits associated with dishonest acts, whereas they are sensitive to contextual manipulations related to the self-concept. In particular, the level of dishonesty drops when people pay more attention to honesty standards (e.g. when they are asked to sign an honor code) and climbs with increased categorization malleability, i.e. when the situation allows them to rationalize their dishonest actions into categories more compatible with their internal standards.
The mechanism of categorization serves, among other things, as a self-deception tool, enabling people to be more lenient towards their own conduct without having to negatively update their self-concept. The same reasoning applies to situations in which one’s dishonest behavior also benefits other people in some way. In such cases, individuals are more likely to focus on the positive consequences of their actions and ignore the harm they may be causing, so as to convince themselves that they are behaving honestly. We can see this as a form of self-serving altruism.
The huge problem with self-deception mechanisms is that people are rarely aware of what is going on. In one study, participants had to complete a test, and some of them were given an answer key; these participants predictably cheated by checking their answers against it. Afterwards, the experimenters gave them another test to look through and asked them, before they began, to predict their performance on it (this time without any answer key). Since they had cheated on the previous test, one would expect their predictions to reflect their true, lower abilities. Remarkably, the outcome was the exact opposite: those who had seen the solutions overpredicted their scores on the future test, suggesting an underlying process of self-deception. One possible explanation is that the cheaters fell prey to a hindsight bias, convincing themselves that they had “known it all along”. In particular, they attributed their success on the first test more to their own intelligence or ability than to the presence of the answer key. Furthermore, even after receiving their actual scores, which reflected their true (lower than predicted) abilities, they continued to inflate their predictions when asked to repeat the same task (looking through a new test without solutions and forecasting their own performance). Predictions did become more accurate, however, as participants repeatedly faced evidence (i.e. the actual scores) contradicting their expectations. Yet if they were once again given the solutions, the presence of the answer key triggered self-deception in their predictions for the subsequent tests, returning them to the initial situation.
The result of the previous experiment implies that, when cheating is involved, people are very likely to engage in self-deception. Moreover, this mechanism decays only slowly, a process aided by repeated corrective feedback from the environment, yet it is also subject to a quick revival. This suggests that, even in the long run, people are not always able to update their expectations and may make the same error systematically.
Furthermore, this type of research raises some fundamental questions: can people actually learn from their mistakes? Are there specific circumstances that can help them do so? Do we systematically fool ourselves? Understanding why and how people fall victim to self-deception is relevant for many disciplines, such as economic and crime theory, psychology, and philosophy, but it is especially important for individuals in their everyday life, since self-awareness and the ability to assess one’s own performance are key factors for personal well-being, satisfaction, and success.
Dan Ariely et al. “The slow decay and quick revival of self-deception”
Dan Ariely et al. “Temporal view of the costs and benefits of self-deception”
Dan Ariely et al. “The dishonesty of honest people”