Origins of risk thinking in environmental health

2017-02-17

Where does the concept of environmental “risk” come from? Ask an environmental health scientist and you might get an answer like: risk = hazard × exposure; or perhaps: risk is a complex function of hazard, exposure, and vulnerability. Beyond just a formula and a four-step process outlined in textbooks, risk thinking is a deeply established set of approaches and techniques. But it’s not the only possible way to understand environmental problems, and it hasn’t always been the mainstream. Here I present a brief and visual summary of one source that traces the conceptual roots of risk in US environmental policy and regulation.
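
As a concrete (if cartoonish) illustration, here is that textbook formula in runnable form. This is my own sketch, not anything from the sources discussed below, and every name, number, and unit in it is hypothetical:

```python
# Toy illustration of the textbook formula: risk = hazard x exposure,
# plus the "complex function" variant with vulnerability added in.
# All values are hypothetical, chosen only to show the shape of the
# calculation, not to represent any real assessment.

def simple_risk(hazard: float, exposure: float) -> float:
    """Risk as the product of a hazard potency and an exposure level."""
    return hazard * exposure

def adjusted_risk(hazard: float, exposure: float, vulnerability: float) -> float:
    """A crude stand-in for 'risk as a function of hazard, exposure,
    and vulnerability': here, simply a three-way product."""
    return hazard * exposure * vulnerability

print(simple_risk(hazard=0.002, exposure=5.0))        # 0.01
print(adjusted_risk(0.002, 5.0, vulnerability=1.5))   # 0.015
```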

One way to understand the origins of today’s risk thinking is to examine the history of scientific and political debates, laws, and public discourse about environmental hazards. That’s what legal scholar William Boyd did in his paper, Genealogies of risk: Searching for safety, 1930s–1970s. [1] It’s a genealogy in the philosophical sense: Boyd examines the messy and path-dependent formation of ideas that don’t necessarily stem from a single origin. What follows is my summary of this article, as a graphic with some narration. I tried to visually trace the relationships between changing ideas by arranging them on a timeline together with some key historical conditions.

Genealogies of risk

[Diagram: concepts of risk from 1500 to 1980, arranged on a timeline]

Precaution and hazard avoidance

This may surprise us now, but for much of the 20th century, regulatory approaches to environmental problems in the US were characterized by precaution in the face of scientific uncertainty. The idea of precaution as an organizing principle of environmental policy actually predates the use of risk for this purpose. The “zero tolerance” standard for carcinogenic food additives established by the 1958 Delaney Clause is often cited as strongly precautionary. Although politically controversial, it rested on the scientific consensus of the time, which held that there is no safe level of exposure to a carcinogen.

Interestingly, the idea of an “acceptable risk” first came up around the issue of radioactive fallout. But when it was found that there was no inherently safe level of radiation exposure, policy-makers settled on a precautionary “as low as practicable” standard. As late as the mid-1970s, the US EPA successfully took what might today be called precautionary actions, including bans on chlordane and other organochlorine pesticides. Working with inherently uncertain and incomplete science, regulators established margins of safety and exercised expert judgment to prevent harm. At the time, this was legally defensible and in fact encouraged.

Redefining safety

During the 1960s, the strategy of absolute hazard avoidance was challenged by advances in analytical techniques, which enabled measurement of ever-lower concentrations of contaminants. Because a “zero” tolerance in practice meant “below the limit of detection,” each improvement in instrumentation silently tightened the standard, and constantly decreasing detection limits destabilized the very meaning of “zero.”
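
To see the mechanism, consider a minimal sketch (mine, with hypothetical numbers) of how a sample with an unchanged residue level flips from compliant to violative as the limit of detection falls:

```python
# Sketch of how a "zero" tolerance interacts with the limit of
# detection (LOD): the same sample passes under older, less sensitive
# instruments and fails under newer ones. Numbers are hypothetical.

def meets_zero_tolerance(residue_ppb: float, lod_ppb: float) -> bool:
    """In practice, 'zero' means: nothing detected above the LOD."""
    return residue_ppb < lod_ppb

residue = 0.5  # ppb, unchanged in the food itself
for lod in [10.0, 1.0, 0.1]:  # detection limits falling over the decades
    print(f"LOD {lod:>4} ppb -> compliant: {meets_zero_tolerance(residue, lod)}")
```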

Under immense pressure to carry out their duties in a 1970s political climate increasingly adversarial towards regulators, agencies like EPA and FDA began to cut themselves loose from absolute “zero” and to explore nonzero, quantitative interpretations of what constitutes an “ample” margin of safety. They developed new techniques for extrapolating dose-response relationships to very low doses (where effects are experimentally unobservable) as they sought ways of treating “safety” as something concrete and achievable. Boyd gives two examples that coincide with this turning point in the mid-1970s: FDA’s rule defining a 1 in 100 million “acceptable risk” of cancer from exposure to DES residues in food, and EPA’s decision not to set a “zero” air emission standard for vinyl chloride because doing so would have shut down the entire plastics industry. EPA argued that an alternative “as low as technically feasible” standard would be just as effective, and used risk-benefit analysis to justify it.
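
To make the extrapolation step concrete, here is a minimal sketch of linear no-threshold extrapolation, one common technique in that era’s toolkit. This is not the specific procedure FDA or EPA used, and the bioassay numbers are invented:

```python
# Minimal sketch of linear no-threshold (LNT) low-dose extrapolation:
# draw a straight line from the origin through one observed
# (dose, excess risk) point, then read off the dose corresponding to a
# chosen "acceptable" risk. All numbers below are hypothetical.

def linear_slope(observed_dose: float, observed_risk: float) -> float:
    """Slope of the line through the origin and the observed point."""
    return observed_risk / observed_dose

def virtually_safe_dose(slope: float, acceptable_risk: float) -> float:
    """Dose whose extrapolated excess risk equals the acceptable level."""
    return acceptable_risk / slope

# Hypothetical bioassay: 10% excess tumor incidence at 50 mg/kg/day.
slope = linear_slope(observed_dose=50.0, observed_risk=0.10)

# The 1 in 100 million benchmark mentioned above.
vsd = virtually_safe_dose(slope, acceptable_risk=1e-8)
print(f"Virtually safe dose: {vsd:.2e} mg/kg/day")  # 5.00e-06
```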

Risk thinking becomes possible

As Boyd tells it, this was a “moment of possibility”: a confrontation between competing understandings of safety and risk, and of the role of scientific knowledge in policy. What we now know as risk thinking emerged from this confrontation as the new dominant set of ideas, shifting the whole science/policy landscape in several ways, including:

  • from a language dominated by hazard and danger toward one dominated by risk, in the sense of “a future outcome whose probability can be calculated”;
  • from reliance on trust in the expertise of regulators, toward a system of expertise enshrined in administrative and technical protocols that could be defended against direct political attacks;
  • from a precautionary stance with respect to uncertain and incomplete knowledge, toward reliance on quantification and estimation as a way of overcoming uncertainties. [2]

As a result of all this, the concept of safety in several US environmental laws was literally redefined as acceptable risk. Going even further, the 1976 Toxic Substances Control Act required EPA to meet an evidentiary standard of unreasonable risk before it could take virtually any action. By 1980, the US Supreme Court’s “Benzene” decision made it clear that quantitative risk assessment and cost-benefit analysis were now essentially prerequisites to the regulation of environmental hazards.

Although, Boyd says, the undercurrent of precautionary thinking is still there beneath the surface (and, I would say, evidenced by new policy initiatives, research programs, and social movements), we will never know what environmental policy would look like today if that moment of possibility had concluded differently.

Still searching for safety

That’s all for my summary of Genealogies of risk. What follows here is commentary.

There’s no doubt that science has advanced through the study of risk. In retrospect, however, the turn towards quantitative risk assessment appears to have backfired on regulators in ways that are both obvious and subtle. Rather than make it easier to reach policy decisions, it has created an epidemic of “paralysis by analysis.” Rather than insulate risk assessors from politics, it has brought politics right into the science, so that we now have extremely technical rhetoric where we should have open public policy debates.

Reading Boyd’s article after already being familiar with critiques of risk thinking, [3] my one lingering question is: if the 1970s was a moment of possibility shaped by contingent historical conditions, then what possibilities are open to us today? Amidst calls for a new science of risk assessment in the 21st century, [4] is there also a “new precaution” for the 21st century—and is it “possible” now?

I’ve certainly overlooked some important points from Boyd’s 94-page article. I’m OK with that. Not to mention that there’s an even broader literature about risk as a product of social organization, [5] and of modern society in particular. [6] [7] Still, if you weren’t familiar with the “hazard versus risk” debate before you read this, you now have more of the back story than most people who are still debating.


  1. Boyd, W. (2012). Genealogies of risk: Searching for safety, 1930s–1970s. Ecology Law Quarterly, 39(4), 895–987.

  2. See also: Jasanoff, S. (2007). Technologies of humility. Nature, 450(7166), 33.

  3. For example: O’Brien, M. (2000). Making better environmental decisions: An alternative to risk assessment. Cambridge, MA: MIT Press.

  4. Birnbaum, L. S., Burke, T. A., & Jones, J. J. (2016). Informing 21st-century risk assessments with 21st-century science. Environmental Health Perspectives, 124(4), A60–A63.

  5. Douglas, M., & Wildavsky, A. (1982). Risk and culture: An essay on the selection of technological and environmental dangers. Berkeley: University of California Press.

  6. Beck, U. (1992). Risk society: Towards a new modernity. London; Newbury Park, CA: Sage Publications.

  7. Jasanoff, S. (1999). The songlines of risk. Environmental Values, 8(2), 135–152.