Politics and Public Policy

Risk perception and risk regulation: the public and the experts 

Governments have always had a critical role in managing the chances of adverse outcomes on behalf of the public, relying on regulation to protect citizens from social, environmental, and economic risks. 

Because risk analysis is a political as well as a scientific enterprise, public perception of risk greatly affects how public authorities assess and manage risk, especially in democracies, where policies depend strongly on electoral approval and popularity among the public.

Disagreements about the best course of action between technical experts, whom governments often consult for risk assessment, and members of the community stem from discrepancies in attitudes towards risk. On the one hand, the public is generally guided by emotion and strongly affected by the availability heuristic, judging the likelihood of an event by the ease with which instances of it come to mind; it is also inadequately sensitive to the difference between low and negligibly low probabilities. On the other hand, experts are less prone to these biases and better at handling data and quantities.

This divergence in judgements and preferences about risk has several implications for risk management, giving rise to two notable phenomena.

Firstly, controversies and polarized views on risk assessment are generating a growing lack of public trust in risk governance. Although the causes of this distrust lie partly in our form of participatory democracy, which makes room for multiple and often contradictory views on the same subject, its consequences can include an inefficient allocation of resources for risk management and an excessive influence of special interest groups that exploit the media to sway risk policy debates and decisions.

Secondly, the salience of certain events may be distorted by the formation of availability cascades: self-reinforcing belief-formation processes, rooted in the availability heuristic, in which an expressed perception triggers a chain reaction that increases its perceived plausibility through its growing availability in public discourse.

On the question of how much weight to give to the public versus the experts in risk regulation, two different positions have emerged in recent years, grounded in different conceptions of risk itself.

According to Paul Slovic, a renowned expert in the psychology of risk, the nature of risk is inherently subjective: the very notion of risk was invented by humans as an aid to understanding and coping with dangers and uncertainty. Because of this subjectivity, Slovic argues, the evaluation of risk depends on the choice of its measure and is therefore an exercise of power. While experts mainly measure risk by the number of lives lost, the public draws finer, legitimate distinctions, differentiating, for example, between the deaths of people who engage in an activity by choice and benefit from it and the deaths of people involuntarily exposed to a hazard without any benefit.

Therefore, since the public's reactions to risk are sensitive to technical, social, and psychological qualities of hazards that are not well modeled in technical risk assessment, citizens should not be excluded from the management of risk policy, and experts' sole control of risk regulation should be challenged.

By contrast, Cass R. Sunstein, a legal scholar who has investigated risk perception, holds that the notion of risk is essentially objective: since the effects of poor regulation can be measured objectively in resources wasted and lives lost, governmental intervention should be guided by a rational weighing of costs and benefits, and risk should be evaluated in terms of the number of lives saved and the dollar cost to the economy. Defending the role of experts against the excesses of populism, Sunstein analyses the consequences of availability cascades in public policy for risk management, focusing on two examples of availability errors: the Love Canal and Alar cases.

Love Canal was an abandoned waterway filled with tons of chemical waste that the local government later turned into a residential neighborhood. In subsequent years, the buried toxic waste was exposed during a particularly intense rainy season, contaminating the water and producing a foul smell. The residents of the community feared that the chemicals had made the neighborhood unlivable. Government officials ordered a series of tests that detected negligibly low levels of carcinogens in basements near the dump site, concluding that the observed toxicity was not threatening. Nevertheless, residents were not reassured by the results, and concern grew steadily as the local and national press continued to report the episode as an environmental disaster. In particular, Lois Gibbs, a housewife living in the area, played a fundamental role in reinforcing fear of adverse health effects and mobilizing public attention, giving rise to an availability cascade that led to government intervention. While scientists who claimed the dangers were overstated were shouted down, several householders were relocated at government expense, and the control of toxic waste became the major environmental issue of the period. In the aftermath of the Love Canal disaster, some argued that the money spent on cleaning up the site could have saved many more lives had it been redirected to the prevention of other risks, such as poor diet or cigarette smoking.

Another salient phenomenon in American environmental politics is the “pollutant of the month” syndrome, which in 1989 centered on Alar, a pesticide long used on apples. Alar’s manufacturer had begun testing the carcinogenic effects of the product, finding no change in the incidence of tumors in its preliminary results. However, fear began to spread through press stories claiming that the chemical, consumed in large doses, caused tumors in mice and rats. The result was a sharp drop in consumer demand for apples that greatly damaged the food industry and caused heavy losses of profits. Furthermore, the more the scare proliferated, the more the media were encouraged to run stories on the toxicity of Alar, fueling a self-reinforcing process of collective panic. The Alar case effectively summarizes a basic limitation in the mind's ability to deal with small risks: we either ignore them completely or give them far too much weight.

To conclude, even though such episodes led to relevant environmental regulation, they do not survive any form of cost-benefit analysis, and they call for a better allocation of public resources for risk management.


  • “Availability Cascades and Risk Regulation” by Timur Kuran and Cass R. Sunstein
  • “Perceived Risk, Trust, and Democracy” by Paul Slovic
  • “Perception of Risk Posed by Extreme Events” by Paul Slovic
