My most recently published article, co-authored with Christopher Ansell, tackles the question of how and why the ways in which we govern different kinds of risk change over time. We first describe these changes as a combination of five “trends”, then outline three possible perspectives through which to interpret and explain these trends. We conclude by comparing the three perspectives to see what new insights we can garner into regulatory design and implementation.
The paper is, I admit, highly technical in terms of jargon and theory. I’ll do my best here to summarize the main points as clearly as possible; note that this annotated summary does not necessarily reflect the views of my co-author.
Profound changes in risk regulation have been brewing over the last few decades. These changes include an explosion of new institutional forms and strategies that decenter risk regulation and introduce a role for meta‐regulation, a growing reliance on risk‐based analysis to organize decision making and management, an increasingly preventive approach to regulation that requires an expansion of surveillance to better characterize and monitor risks, and a sharpening of contestation over strategies for evaluating and responding to risk. We distill two perspectives from the existing literature on risk regulation that can plausibly provide overarching explanations for these trends. The first perspective explains these trends as a reflection of the refashioning of state, market, and society to privilege economic liberty—an explanatory framework we call “neoliberal governmentality.” The second perspective argues that these trends reflect practical demands for more efficient and effective risk regulation and management—an explanatory framework we refer to as “functional adaptation.” The central purpose of this paper is to advance a third explanation that we call a “problem definition and control” approach. It argues that trends in risk regulation reflect interactions between how society defines risks and how regulatory regimes seek to control those risks.
To better understand the purpose of this article, it is probably helpful to say a word or two about what a risk is, and why we should try to govern or regulate risks.
What is a risk, anyway?
If something bad happens, we tend to think of it as a harm that hurt people or caused damage to property or the environment. Generally speaking, most people would rather avoid such harms given the choice. Thus we are generally on the lookout for danger, anything around us that could be a source of harm. Many harms are produced by biophysical or social forces beyond the control of individuals, hence people generally have to work together – for example, through law, policy, and regulation – in order to control those dangers and reduce the potential for harm.
However, orienting policy and regulation toward the reduction of harms faces a substantial practical problem: how can regulators (the people trusted with managing the recognized danger) demonstrate successful control over that danger if, by definition, an avoided harm never occurs? Put differently, it is very difficult to tell whether an imagined harm never occurs because regulators successfully averted the danger or because the danger was not as severe as people thought.
To give a concrete example of the conundrum this poses, consider the case of a governor who gives an evacuation warning in advance of a forecasted hurricane. If the resulting storm fails to cause much damage, this could either be because people prepared and evacuated ahead of time or because the predictions of the storm overestimated the danger it posed to communities in its path, a “boy who cried wolf” scenario. The governor is therefore faced with a paradox: the more successful s/he is in averting harm caused by the hurricane, the less sure people may be as to whether the governor’s choices and actions made any difference to the end result.
To get around this central problem, regulators turn to risk to better understand and manage the uncertainty of future events. Ortwin Renn defines risk as “the possibility that an undesirable state of reality (adverse effects) may occur as a result of natural events or human activities”. Risk can therefore be considered a “mental model” made by assembling and weighing a set of hypothetical futures using powerful tools and concepts from probability theory. The point is to leverage information about what has happened in the past in order to render future contingencies into variables that can be factored into decision-making in the present. As we quote Sheila Jasanoff in the article, “[Risk] is an important and powerful method of organizing what is known, what is merely surmised, and how sure people are about what they think they know”. In short, risk is a technique to impose order under conditions of uncertainty.
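To make this concrete with a toy illustration of my own (not from the paper): in the simplest expected-value sense, this “assembling and weighing of hypothetical futures” amounts to multiplying each imagined outcome by its estimated probability and summing the results. The scenarios and numbers below are invented purely for illustration.

```python
# Toy illustration: risk as a probability-weighted "mental model" of
# hypothetical futures. Scenarios and numbers are invented for illustration.
futures = [
    # (scenario, estimated probability, estimated damage in $M)
    ("no storm",     0.70,   0),
    ("minor storm",  0.25,  10),
    ("severe storm", 0.05, 200),
]

# Expected harm: the sum of probability x damage across hypothetical futures.
expected_harm = sum(p * damage for _, p, damage in futures)
print(expected_harm)  # 0.25*10 + 0.05*200 = 12.5 ($M)
```

The single number that comes out is, of course, only as good as the probabilities fed in – which is exactly why, as the next section discusses, risk calculation carries normative weight.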
Why should we care about risks?
The purpose of calculating risks is to help us better choose among potential outcomes. Thus risk is also normative; that is, it encompasses not just what might be but also what should be. By taming the uncertainty of future events into probabilities, risk brings dangerous futures within the reach of people making decisions in the present. The flip side is that using risk as a tool to assist decision-making also means accepting a certain level of responsibility for those decisions and tends to narrow the range of acceptable outcomes. Almost by definition, risk carves out a role for human decision-making and action within the unfolding of events which might otherwise be attributed to nature or simple chance. Every act of calculating a risk also necessarily marks an assumption of responsibility by someone over the eventual outcome.
By adopting risk as a framework for knowing and acting in the world, we also lower our capacity to accept accidents. We tend to hold higher expectations for other people than we do for “nature” or “chance”. If people have power over outcomes, then those same people can be held accountable for those outcomes. In practice, then, the seemingly straightforward goal of preventing harms entails the more complicated process of calculating and managing risk, which in turn affects the assignment of responsibility and potential blame.
In this way, governing risks means not just managing the technical assessment and evaluation of uncertain dangers, but also managing social relationships: trust, duty, delegation, representation, credibility, legitimacy, and so on. In technical jargon, we would say that risk governance is co-produced through scientific activity (to put knowledge ‘in order’) and political activity (to put society ‘in order’). The point is that explaining how and why the ways in which we try to regulate different kinds of risk change over time requires looking at both the actual physical sources of harm out there in the world and at how we organize ourselves in response to our perceptions of those harms.
Trends in Risk Governance
We identified five trends in the ways in which we are organizing ourselves to regulate (that is, to control) various risks. Note that we refer specifically to regulatory regimes, by which we mean the collection of activities through which people collectively respond to a given type of risk. The two concrete examples of risk regulation regimes that we highlight in the article are the food safety regime – to control the risk of foodborne illness – and the child protection regime – to control the risk of child abuse.
1. Decentralization

Over time, regimes to regulate risk seem to be decentralizing. Whereas traditionally regulation was thought to be the responsibility of centralized governments, increasingly a wider variety of groups are participating in regulation. This includes multiple levels of government – multi-national, national, state/provincial, and local – as well as industry, civic groups, and even individual citizens. The resulting style has been described as “horizontal” rather than “vertical”, evoking a distributed network rather than a single chain of command.
2. Meta-regulation

As a corollary to decentralization, there has been a general shift in the role of central governments. Rather than directly regulating the activities of private industry or individuals through detailed rules of conduct, governments are increasingly taking a step back by crafting templates for industry and individuals to create their own rules. This regulation of regulation is referred to as “meta-regulation”.
3. Risk colonization
This trend is more difficult to explain succinctly, but the basic idea relates to the regulatory paradox I identified above when introducing risk. To recap, regulatory uncertainty results from both scientific uncertainty as to the severity of potential danger and political uncertainty as to the credibility and reliability of the people entrusted with averting that potential danger. Thus scientific efforts to assess and control physical risk bleed into political efforts to assess and control social risk (the most common example of the latter being the risk that the person or persons in charge could lose their position if they screw up, or are perceived to screw up). This dynamic is referred to as “risk colonization”. The upshot is that in the effort to reduce uncertainty about the future, we often introduce new uncertainty into present relationships of trust, accountability, and reliability – and so have to expend additional effort managing that uncertainty as well.
4. Prevention and Surveillance
As I implied above, one aspect of risk governance seems to be that it focuses attention, resources, and energy toward preventing harms from occurring, as opposed to mitigating their effects or ameliorating the damage after the fact. Prevention seems to build on itself, with apparent shifts from managing risks to managing the risk of risk, i.e. regulating risk factors in addition to risks. To give an example, food safety regulation seeks to control various environmental risk factors that contribute to the probability that a dangerous human pathogen will contaminate food. Prevention is something of a never-ending journey, however, as there are always more potential risk factors, not to mention risk factors for risk factors, that could be taken into account. Risk regulation is data-hungry – it relies on vast amounts of information as to what has happened and what is happening in order to control what might happen. The result has been a corresponding increase in surveillance, the routine and extensive collection of data of all kinds from all sources.
5. Contestation

The final trend relates to the increasing incidence of challenges to risk regulation, and a correspondingly more frequent inability of risk regulation regimes to stabilize around a common consensus. As a result, we see more disagreements playing out in legislatures, courtrooms, boardrooms, academic conferences, and popular media.
Explaining the Trends
We looked at the ways in which different scholars have explained these trends, both in the abstract and for particular cases of risk regulation. Based on this review, we found two overarching theories of change that “help scholars make sense of specific developments in risk governance.” We called them neoliberal governmentality and functional adaptation.
Each of these theories of change carries a certain normative commitment. This affects the tone of analysis – for example, optimistic versus pessimistic – as well as the purpose of analysis – that is, what is understood to be at stake and for whom.
We argued that neither paradigm captures the full range of possible explanations and interpretations for the trends I described above. So we mapped out a third theory of change, problem definition and control, that highlights different mechanisms and relationships driving changes in risk regulation.
Neoliberal Governmentality

This theory of change tends to read changes in risk regulation as “the triumph of the market over the state and a decline of publicness in favor of private provision.” It is critical in tone, and tends to resist change. Neoliberalism refers to a political-economic ideology in which the public good is best served when individuals are free to pursue their own self-serving ends. It aligns well with governmentality, a mode of government in which individuals and markets are encouraged to govern themselves. To give an example, governmentality refers to strategies that rely on “passive” mechanisms of social regulation such as standards, norms, indicators, and metrics. Overall, this perspective tends to emphasize the struggle between private self-interest and public altruism, implying a skeptical stance toward the capacity of people to cooperate toward common goals.
The first two trends can be explained as a result of applying that ideology in practice. “Freeing” individuals and markets necessarily flattens a centralized chain of command in a regulatory regime, and forces government agencies to take a more hands-off, “meta” role in regulation (Trends 1 and 2). Risk operates to shift responsibility away from public or collective decision-makers onto individuals (Trend 3). The lack of public capacity to pay for costly harms after the fact likewise leads to an emphasis on prevention, and government’s role shifts to reducing “transaction costs” through data-gathering, i.e. surveillance (Trend 4). However, there is a fundamental tension between the ideals of individual freedom and public security that leads to conflict and contestation (Trend 5).
Functional Adaptation

This perspective tends to view regulatory change as a series of “positive [and rational] responses to the limits or failures of prior strategies.” It is progressive in tone, and tends to celebrate change. We used the word functional to emphasize the pragmatic tenor of this theory of change: it is concerned with solving problems efficiently and effectively, and tends to downplay “politics.” Likewise, we used the term adaptation to stress how this perspective emphasizes the process of social learning – that is, how we (presumably) get better at solving problems over time. This implies a certain assumption that people act in good faith for the common welfare, and tend to cooperate more than they compete.
As human society grows and our technologies continue to develop, the inherent risks require very specific technical expertise to manage. It is inefficient (and likely ineffective) to attempt to duplicate this expertise, thus regulators must work with the communities they regulate more cooperatively as opposed to in a top-down manner. Decentralized networks are thought to address problems more cooperatively and flexibly, in which government agencies take on the meta role of “steering” the network rather than directing it (Trends 1 and 2). Framing problems as risks simply allows regulators a rational and more efficient means to allocate scarce resources to target the most pressing sources of danger (Trend 3). Likewise, it is expensive to fix harms after the fact – it is more efficient to prevent them from happening in the first place (Trend 4). Lastly, some risks are simply incalculable, and some have irreversible or catastrophic consequences, which leaves a great deal of uncertainty in how to control them – opening the door for contestation (Trend 5).
Problem Definition and Control
Both of the theories of change I just described place a special emphasis on deliberative decision-making and action, what we can think of as agency. Neoliberal governmentality tends to critique the pernicious agency of powerful groups that wield both economic and political clout, while functional adaptation tends to celebrate the benign agency of expert bureaucrats. Our theory of change, in contrast, starts from a different assumption: regulatory change may not result from intentional agency at all, but rather may emerge from the complex interaction of many independent interests.
In the paper, we try to take into account “the interplay of diverse political agendas” that drive problem definition (i.e. define risk) as well as “the practical difficulties risk regulators confront when they try to govern risks” (i.e. control risk).
Problem definition includes both the visible efforts of “politicians, advocacy groups, and the media [to] define public problems” as well as the behind-the-scenes work performed by ostensibly apolitical groups including government agencies, courts, scientists, and other experts to frame the conversation around risk. Risk control includes “the scientific, technological, institutional, and organizational strategies for preventing, managing, and responding to risk.” This aspect accounts for the day-to-day work of dealing with problems.
The ways in which we collectively define problems and the way we set about dealing with them (as risks) are closely linked. Thus “problem definitions often entail a conception of how they should be solved,” which implies that sometimes the problem definition might be massaged to fit a pre-determined solution. As the old adage goes, when you have a hammer, everything starts to look like a nail.
At the same time, failures or advances in risk control might open new opportunities to adjust or even fundamentally redefine a problem. As I discussed earlier, the more successful a regulator is in controlling risk, the less evidence there is that the danger is worthy of continued public vigilance – as a result, the problem definition may drift away from its original focus. Likewise, a massive regulatory failure – such as a large outbreak of foodborne illness or the revelation of a mass of hitherto unseen cases of child abuse – may open the door for a redefinition of the problem.
Based on this understanding of how problem definition and risk control relate to one another, we argue that the trends observed above result from two outcomes of this dynamic relationship. First, the iterations of problem definition and control tend toward finer granularity (increasing detail and specificity) and wider scope, broadening “the range of causes or consequences recognized for a given risk.”
This pattern of increasing scope and granularity drives risk regulation regimes toward more systemic, or holistic, approaches as opposed to reductionist, narrowly-bounded approaches: “regulatory regimes will shift toward more systemic control; at the same time, as systemic control becomes institutionalized, it will further reinforce the credibility and legitimacy of systemic problem definitions in a mutually reinforcing dynamic.”
Thus, “a shift toward systemic problem definition encourages more systemic control strategies,” which tend to enroll more people and organizations into the regulatory process (Trend 1). Introducing more people into the regulatory regime, however, raises significant challenges for coordination and accountability; government regulators have to step back into the “meta” role in order to manage these emergent concerns (Trend 2). The attempt to control a broader system, however, opens the door for imagining a wider variety of potential risks and risk factors, including the possibility that the control system itself might pose a novel threat (Trend 3). The search for hidden risk factors and novel systemic risks leads regulators to look “‘upstream’ to avoid future failures and preserve reputation” (Trend 4). Lastly, the scope of the problem definition may expand faster than the capacity to control it, leading to a conflict between further systemic expansion or back-tracking toward a narrower, reductionist focus (Trend 5).
We conclude by arguing that this theory of change implies several negative, and perhaps unexpected, outcomes from the trends in risk regulation. Specifically, “escalating interactions between problem definition and control may make it harder for risk regimes to achieve stable closure, may produce institutional ‘brittleness,’ and may intensify tension among competing social objectives.” In other words, interpreting regulatory regime change through this third framework predicts further instability and conflict in the way we collectively govern risks, and foresees an increasing likelihood of cascading and/or catastrophic failures of regulation.
Notes and References
Ansell, Christopher, and Patrick Baur. 2018. “Explaining Trends in Risk Governance: How Problem Definitions Underpin Risk Regimes.” Risk, Hazards & Crisis in Public Policy (early access online).
Renn, Ortwin. 2008. Risk Governance: Coping with Uncertainty in a Complex World. Earthscan.
Jasanoff, Sheila. 1999. “The Songlines of Risk.” Environmental Values 8 (2): 135–52.
Defined as “a theory of political economy that proposes that human well‐being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, and free trade.” Harvey, David. 2005. A Brief History of Neoliberalism. Oxford; New York: Oxford University Press. p. 2.
I wrote “passive” in quotes because there is actually quite a bit of political maneuvering and strategy that goes into setting standards and crafting indicators. See, for example, Busch, Lawrence. 2011. Standards: Recipes for Reality. Cambridge, Mass.: MIT Press.