Conflicts and Compromises

I am co-editing a research topic on Conflicts and Compromises between Food Safety Policies and Environmental Sustainability for the journal Frontiers in Sustainable Food Systems. My co-editors, Janne Lundén and Michele Jay-Russell, and I are actively seeking submissions of research articles to further develop this area, which has been central to my research for the past eight years.

This opportunity is very timely. Regulatory, market, and consumer pressures to improve food safety remain high, affecting food producers from farm to fork. Massive and sometimes deadly outbreaks of foodborne illness continue to sweep across domestic and international populations, as many Americans were reminded during the Romaine lettuce E. coli O157:H7 outbreak earlier this year. At the same time, emerging pathogens and global environmental change introduce unfamiliar threats and exacerbate existing vulnerabilities.

Government, industry, and research institutions are racing full-speed to keep up. The science of foodborne pathogens is rapidly developing, yielding new methods for evaluating risk and detecting contamination across the food system. Novel technologies offer the food industry an increasingly wide array of tools to combat dangerous germs. And the best management practices, production standards, and operational recommendations continue to evolve as food producers work to incorporate the latest developments.

The result appears to be a dynamic and high-stakes arms race between foodborne infectious diseases and the food industry. Amid this rush to get out ahead of pathogens such as E. coli, Salmonella, Listeria, and norovirus, we should not blind ourselves to the potential for collateral damage [1]. This open call for new research will help identify a wide range of potential environmental impacts (conflicts) of this arms race, and also point to strategies for minimizing these impacts or even finding synergies between food system sustainability and safety.

While this collection will focus on environmental impacts — to biodiversity, water conservation, pollution, energy efficiency, waste management, etc. — further work is needed on the socioeconomic impacts. In particular, we need to pay special attention to who pays and who benefits as food safety reforms unfold. One of my personal goals in all of this is to bring sustainability and equity into greater mutual focus for researchers, industry, government, and the public at large. The future of our shared food system depends on people and nature working together, and research and policy need to adapt to better reflect this interdependence.

[1]. There are many bacteria, viruses, and parasites that travel on food and are capable of making people sick. See the FDA’s Bad Bug Book for a comprehensive list of biological agents that cause foodborne illness.


New Article: Explaining Trends in Risk Governance

My most recently published article, co-authored with Christopher Ansell, tackles the question of how and why the ways in which we govern different kinds of risk change over time [1]. We first describe these changes as a combination of five “trends”, then outline three possible perspectives through which to interpret and explain these trends. We conclude by comparing the three perspectives to see what new insights we can garner into regulatory design and implementation.

The paper is, I admit, highly technical in terms of jargon and theory. I’ll do my best here to summarize the main points as clearly as possible; note that this annotated summary does not necessarily reflect the views of my co-author.

Article Abstract

Profound changes in risk regulation have been brewing over the last few decades. These changes include an explosion of new institutional forms and strategies that decenter risk regulation and introduce a role for meta‐regulation, a growing reliance on risk‐based analysis to organize decision making and management, an increasingly preventive approach to regulation that requires an expansion of surveillance to better characterize and monitor risks, and a sharpening of contestation over strategies for evaluating and responding to risk. We distill two perspectives from the existing literature on risk regulation that can plausibly provide overarching explanations for these trends. The first perspective explains these trends as a reflection of the refashioning of state, market, and society to privilege economic liberty—an explanatory framework we call “neoliberal governmentality.” The second perspective argues that these trends reflect practical demands for more efficient and effective risk regulation and management—an explanatory framework we refer to as “functional adaptation.” The central purpose of this paper is to advance a third explanation that we call a “problem definition and control” approach. It argues that trends in risk regulation reflect interactions between how society defines risks and how regulatory regimes seek to control those risks.

Risk Governance

To better understand the purpose of this article, it is probably helpful to say a word or two about what a risk is, and why we should try to govern or regulate risks.

What is a risk, anyway?

If something bad happens, we tend to think of it as a harm that hurt people or caused damage to property or the environment. Generally speaking, most people would rather avoid such harms given the choice. Thus we are generally on the lookout for danger, anything around us that could be a source of harm. Many harms are produced by biophysical or social forces beyond the control of any individual, so people generally have to work together – for example, through law, policy, and regulation – to control those dangers and reduce the potential for harm.

However, orienting policy and regulation toward the reduction of harms faces a substantial practical problem: how can regulators (the people trusted with managing the recognized danger) demonstrate successful control over that danger if, by definition, an avoided harm never occurs? Put differently, it is very difficult to tell whether an imagined harm never occurs because regulators successfully averted the danger or because the danger was not as severe as people thought.

To give a concrete example of the conundrum this poses, consider the case of a governor who gives an evacuation warning in advance of a forecasted hurricane. If the resulting storm fails to cause much damage, this could either be because people prepared and evacuated ahead of time or because the predictions of the storm overestimated the danger it posed to communities in its path, a “boy who cried wolf” scenario. The governor is therefore faced with a paradox: the more successful s/he is in averting harm caused by the hurricane, the less sure people may be as to whether the governor’s choices and actions made any difference to the end result.

To get around this central problem, regulators turn to risk to better understand and manage the uncertainty of future events. Ortwin Renn defines risk as “the possibility that an undesirable state of reality (adverse effects) may occur as a result of natural events or human activities” [2]. Risk can therefore be considered a “mental model” made by assembling and weighing a set of hypothetical futures using powerful tools and concepts from probability theory. The point is to leverage information about what has happened in the past in order to render future contingencies into variables that can be factored into decision-making in the present. As we quote Sheila Jasanoff in the article, “[Risk] is an important and powerful method of organizing what is known, what is merely surmised, and how sure people are about what they think they know” [3]. In short, risk is a technique to impose order under conditions of uncertainty.
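To make that probabilistic machinery a little more concrete, here is a generic textbook formalization of risk as expected harm (my own illustrative sketch, not a formula from the article): the hypothetical futures are weighed by their probabilities and summed,

$$ R = \sum_i p_i \, c_i $$

where $p_i$ is the estimated probability that scenario $i$ occurs and $c_i$ is the magnitude of harm if it does. Each term renders one imagined future into a quantity that can be compared and factored into present decisions.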

Why should we care about risks?

The purpose of calculating risks is to help us better choose among potential outcomes. Thus risk is also normative; that is, it encompasses not just what might be but also what should be. By taming the uncertainty of future events into probabilities, risk brings dangerous futures within the reach of people making decisions in the present. The flip side is that using risk as a tool to assist decision-making also means accepting a certain level of responsibility for those decisions and tends to narrow the range of acceptable outcomes. Almost by definition, risk carves out a role for human decision-making and action within the unfolding of events which might otherwise be attributed to nature or simple chance. Every act of calculating a risk also necessarily marks an assumption of responsibility by someone over the eventual outcome.

By adopting risk as a framework for knowing and acting in the world, we also lower our capacity to accept accidents. People tend to hold higher expectations for other people than they do for “nature” or “chance”. If people have power over outcomes, then those same people can be held accountable for those outcomes. In practice, then, the seemingly straightforward goal of preventing harms entails the more complicated process of calculating and managing risk, which in turn affects the assignment of responsibility and potential blame.

In this way, governing risks means not just managing the technical assessment and evaluation of uncertain dangers, but also managing social relationships: trust, duty, delegation, representation, credibility, legitimacy, and so on. In technical jargon, we would say that risk governance is co-produced through scientific activity (to put knowledge ‘in order’) and political activity (to put society ‘in order’). The point is that explaining how and why the ways in which we try to regulate different kinds of risk change over time requires looking at both the actual physical sources of harm out there in the world and at how we organize ourselves in response to our perceptions of those harms.

Trends in Risk Governance

We identified five trends in the ways in which we are organizing ourselves to regulate (that is, to control) various risks. Note that we refer specifically to regulatory regimes, by which we mean the collection of activities through which people collectively respond to a given type of risk. The two concrete examples of risk regulation regimes that we highlight in the article are the food safety regime – to control the risk of foodborne illness – and the child protection regime – to control the risk of child abuse.

1. Decentralization

Over time, regimes to regulate risk seem to be decentralizing. Whereas traditionally regulation was thought to be the responsibility of centralized governments, increasingly a wider variety of groups are participating in regulation. This includes multiple levels of government – multi-national, national, state/provincial, and local – as well as industry, civic groups, and even individual citizens. The resulting style has been described as “horizontal” rather than “vertical”, evoking a distributed network rather than a single chain of command.

2. Meta-regulation

As a corollary to decentralization, there has been a general shift in the role of central governments. Rather than directly regulating the activities of private industry or individuals through detailed rules of conduct, governments are increasingly taking a step back by crafting templates for industry and individuals to create their own rules. This regulation of regulation is referred to as “meta-regulation”.

3. Risk colonization

This trend is more difficult to explain succinctly, but the basic idea relates to the regulatory paradox I identified above when introducing risk. To recap, regulatory uncertainty results from both scientific uncertainty as to the severity of a potential danger and political uncertainty as to the credibility and reliability of the people entrusted with averting that danger. Thus scientific efforts to assess and control physical risk bleed into political efforts to assess and control social risk; the most common example of the latter is the risk that the person(s) in charge could lose their position if they screw up (or are perceived to screw up). This spillover is referred to as “risk colonization”. The upshot is that in the effort to reduce uncertainty about the future, we often introduce new uncertainty regarding present relationships of trust, accountability, and reliability. So we have to expend additional effort managing that uncertainty as well.

4. Prevention and Surveillance

As I implied above, one aspect of risk governance seems to be that it focuses attention, resources, and energy toward preventing harms from occurring, as opposed to mitigating their effects or ameliorating the damage after the fact. Prevention seems to build on itself, with apparent shifts from managing risks to managing the risk of risk, i.e. regulating risk factors in addition to risks. To give an example, food safety regulation seeks to control various environmental risk factors that contribute to the probability that a dangerous human pathogen will contaminate food. Prevention is something of a never-ending journey, however, as there are always more potential risk factors, not to mention risk factors for risk factors, that could be taken into account. Risk regulation is data-hungry – it relies on vast amounts of information as to what has happened and what is happening in order to control what might happen. The result has been a corresponding increase in surveillance, the routine and extensive collection of data of all kinds from all sources.

5. Contestation

The final trend relates to the increasing incidence of challenges to risk regulation, and a correspondingly more frequent inability for risk regulation regimes to stabilize around a common consensus. We see more disagreements playing out in legislatures, courtrooms, boardrooms, academic conferences, and popular media as a result.

Explaining the Trends

We looked at the ways in which different scholars have explained these trends, both in the abstract and for particular cases of risk regulation. Based on this review, we found two overarching theories of change that “help scholars make sense of specific developments in risk governance.” We called them neoliberal governmentality and functional adaptation.

Each of these theories of change carries a certain normative commitment. This affects the tone of analysis – for example, optimistic versus pessimistic – as well as the purpose of analysis – that is, what is understood to be at stake and for whom.

We argued that neither paradigm captures the full range of possible explanations and interpretations for the trends I described above. So we mapped out a third theory of change, problem definition and control, that highlights different mechanisms and relationships driving changes in risk regulation.

Neoliberal Governmentality

This theory of change tends to read changes in risk regulation as “the triumph of the market over the state and a decline of publicness in favor of private provision.” It is critical in tone, and tends to resist change. Neoliberalism refers to a political economic ideology in which the public good is best served when individuals are free to pursue their own self-serving ends [4]. It aligns well with governmentality, a mode of government in which individuals and markets are encouraged to govern themselves. For example, governmentality encompasses strategies that rely on “passive” mechanisms of social regulation such as standards, norms, indicators, and metrics [5]. Overall, this perspective tends to emphasize the struggle between private self-interest and public altruism, implying a skeptical stance toward the capacity of people to cooperate toward common goals.

The first two trends can be explained as a result of applying that ideology in practice. “Freeing” individuals and markets necessarily flattens a centralized chain of command in a regulatory regime, and forces government agencies to take a more hands-off, “meta” role in regulation (Trends 1 and 2). Risk operates to shift responsibility away from public or collective decision-makers onto individuals (Trend 3). The lack of public capacity to pay for costly harms after the fact likewise leads to an emphasis on prevention, and government’s role shifts to reducing “transaction costs” through data-gathering, i.e. surveillance (Trend 4). However, there is a fundamental tension between the ideals of individual freedom and public security that leads to conflict and contestation (Trend 5).

Functional Adaptation

This perspective tends to view regulatory change as a series of “positive [and rational] responses to the limits or failures of prior strategies.” It is progressive in tone, and tends to celebrate change. We used the word functional to emphasize the pragmatic tenor of this theory of change. It is concerned with solving problems efficiently and effectively, and tends to downplay “politics.” Likewise, we used the term adaptation to stress how this perspective emphasizes the process of social learning – that is, how we (presumably) get better at solving problems over time. This implies a certain assumption that people act in good faith for the common welfare, and tend to cooperate more than they tend to compete.

As human society grows and our technologies continue to develop, the inherent risks require very specific technical expertise to manage. It is inefficient (and likely ineffective) to attempt to duplicate this expertise, so regulators must work with the communities they regulate cooperatively rather than in a top-down manner. Decentralized networks are thought to address problems more cooperatively and flexibly, with government agencies taking on the meta role of “steering” the network rather than directing it (Trends 1 and 2). Framing problems as risks simply allows regulators a rational and more efficient means to allocate scarce resources to target the most pressing sources of danger (Trend 3). Likewise, it is expensive to fix harms after the fact – it is more efficient to prevent them from happening in the first place (Trend 4). Lastly, some risks are simply incalculable, and some have irreversible or catastrophic consequences, which leaves a great deal of uncertainty in how to control them – opening the door for contestation (Trend 5).

Problem Definition and Control

Both of the theories of change I just described place a special emphasis on deliberative decision-making and action, what we can think of as agency. Neoliberal governmentality tends to critique the pernicious agency of powerful groups that wield both economic and political clout, while functional adaptation tends to celebrate the benign agency of expert bureaucrats. Our theory of change, in contrast, starts from a different assumption: regulatory change may not result from intentional agency at all, but rather may emerge from the complex interaction of many independent interests.

In the paper, we try to take into account “the interplay of diverse political agendas” that drive problem definition (i.e. define risk) as well as “the practical difficulties risk regulators confront when they try to govern risks” (i.e. control risk).

Problem definition includes both the visible efforts of “politicians, advocacy groups, and the media [to] define public problems” as well as the behind-the-scenes work performed by ostensibly apolitical groups including government agencies, courts, scientists, and other experts to frame the conversation around risk. Risk control includes “the scientific, technological, institutional, and organizational strategies for preventing, managing, and responding to risk.” This aspect accounts for the day-to-day work of dealing with problems.

The ways in which we collectively define problems and the way we set about dealing with them (as risks) are closely linked. Thus “problem definitions often entail a conception of how they should be solved,” which implies that sometimes the problem definition might be massaged to fit a pre-determined solution. As the old adage goes, when you have a hammer, everything starts to look like a nail.

At the same time, failures or advances in risk control might open new opportunities to adjust or even fundamentally redefine a problem. As I discussed earlier, the more successful a regulator is in controlling risk, the less evidence there is that the danger is worthy of continued public vigilance – as a result, the problem definition may drift away from its original focus. Likewise, a massive regulatory failure – such as a large outbreak of foodborne illness or the revelation of a mass of hitherto unseen cases of child abuse – may open the door for a redefinition of the problem.

Based on this understanding of how problem definition and risk control relate to one another, we argue that the trends observed above result from two outcomes of this dynamic relationship. First, the iterations of problem definition and control tend toward finer granularity (increasing detail and specificity) and wider scope, broadening “the range of causes or consequences recognized for a given risk.”

Second, this pattern of increasing scope and granularity drives risk regulation regimes toward more systemic, or holistic, approaches as opposed to reductionist, narrowly-bounded approaches: “regulatory regimes will shift toward more systemic control; at the same time, as systemic control becomes institutionalized, it will further reinforce the credibility and legitimacy of systemic problem definitions in a mutually reinforcing dynamic.”

Thus, “a shift toward systemic problem definition encourages more systemic control strategies,” which tend to enroll more people and organizations into the regulatory process (Trend 1). Introducing more people into the regulatory regime, however, raises significant challenges for coordination and accountability; government regulators have to step back into the “meta” role in order to manage these emergent concerns (Trend 2). The attempt to control a broader system, in turn, opens the door for imagining a wider variety of potential risks and risk factors, including the possibility that the control system itself might pose a novel threat (Trend 3). The search for hidden risk factors and novel systemic risks leads regulators to look “‘upstream’ to avoid future failures and preserve reputation” (Trend 4). Lastly, the scope of the problem definition may expand faster than the capacity to control it, forcing a choice between further systemic expansion and back-tracking toward a narrower, reductionist focus (Trend 5).

We conclude by arguing that this theory of change implies several negative, and perhaps unexpected, outcomes from the trends in risk regulation. Specifically, “escalating interactions between problem definition and control may make it harder for risk regimes to achieve stable closure, may produce institutional ‘brittleness,’ and may intensify tension among competing social objectives.” In other words, interpreting regulatory regime change through this third framework predicts further instability and conflict in the way we collectively govern risks, and foresees an increasing likelihood of cascading and/or catastrophic failures of regulation.

Notes and References

[1] Ansell, Christopher, and Patrick Baur. 2018. “Explaining Trends in Risk Governance: How Problem Definitions Underpin Risk Regimes.” Risk, Hazards & Crisis in Public Policy (early access online).
[2] Renn, Ortwin. 2008. Risk Governance: Coping with Uncertainty in a Complex World. Earthscan.
[3] Jasanoff, Sheila. 1999. “The Songlines of Risk.” Environmental Values 8 (2): 135–52.
[4] Defined as “a theory of political economy that proposes that human well‐being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, and free trade.” Harvey, David. 2005. A Brief History of Neoliberalism. Oxford; New York: Oxford University Press. p. 2.
[5] I wrote “passive” in quotes because there is actually quite a bit of political maneuvering and strategy that goes into setting standards and crafting indicators. See, for example, Busch, Lawrence. 2011. Standards: Recipes for Reality. Cambridge, Mass.: MIT Press.


The Blue Skies Strollair and the Challenge to Collective Action

A good friend of mine, Jason Munster, has a PhD in Environmental Engineering. In his doctoral program, he researched the atmospheric chemistry related to measuring air pollutants, which is critical to understanding climate change. Or, in his own words, “I build instruments to measure processes related to climate change.”

Recently, Munster took what he learned about how water droplets in clouds absorb pollutants to design a personal, transportable air filter that filters out several hazardous criteria air pollutants – NO2, SO2, and PM.

He founded a company, Blue Skies, to manufacture, market, and sell this air filter primarily to parents of young children in the form of a device that attaches to strollers and car-seats, creating a pocket of relatively cleaner air around the child. Infants and toddlers are particularly vulnerable to the toxic effects of these pollutants, which have been linked to asthma among other illnesses, and Blue Skies markets itself as a champion of children’s health:

Our mission is to reduce asthma and deaths from air pollution worldwide. In developed countries, we plan to carry out the first-ever trials of the benefits of reducing ambient pollution exposure in children. In developing countries, we aim to save lives.

The Blue Skies filter, which he has named the Strollair, is now the subject of a fundraising campaign on Indiegogo, where backers can preorder the device for half of its eventual retail price of $300.

I’ve written this blog post both to acknowledge Jason’s efforts in developing this protective device, which I have every confidence works as advertised, and, as is my wont, to use the Strollair as a conversation starter. This personal air filter presents a tidy encapsulation of a momentous conundrum facing our society – from the local to the global scale – right now:

Do we fight for long-term, collective, and systematic economic and political change that will benefit all people eventually?

Or, do we take short-term action (if we can afford it) to individually protect ourselves and our families from the worst symptoms of a rapidly degrading global biosphere that industrial commodification has pushed past its limits?

The Strollair is an almost textbook example of an inverted quarantine, a concept from sociologist Andrew Szasz that I discussed in a post from several years ago. To recap, what Szasz is talking about is a widespread social phenomenon in which people, especially those who think they are affluent or self-reliant enough to handle every problem on their own, pursue “an individualized response to a collective threat”, which he notes is “the opposite of [a] social movement”:

There is awareness of hazard, a feeling of vulnerability, of being at risk. That feeling, however, does not lead to political action aimed at reducing the amounts or the variety of toxics present in the environment. It leads, instead, to individualized acts of self-protection, to just trying to keep those contaminants out of one’s body.[i]

I think now, in this political moment more than ever, it’s vitally important for Americans to be aware of and critically reflect on this tendency to stop at saving one’s self, because this reaction is increasingly pervasive. We see it everywhere from education to infrastructure to food, and that’s a potential problem. The good news is that we also see renewed calls for long-term solutions that depend on collective action and intense cooperation, such as the growing movement for a single-payer healthcare system or global attempts at cooperative agreements to rein in greenhouse gas emissions and slow climate change.

This post is not a criticism of people who want to take action to protect themselves or their families. That’s a perfectly rational response, Szasz notes, especially “if one feels that there is nothing to be done, that conditions will not change, cannot be changed”, or at least cannot be changed fast enough to make a difference. But we must acknowledge that this is a form of fatalism, which can sap the strength from our creative aspirations and enervate the political will to take control of our situation and build the world we want to live in.

There is also the problem of the cognitive blinders that practicing inverted quarantine can reinforce.

Inverted quarantine is implicitly based on denial of complexity and interdependence. It mistakenly reduces the question of an individual’s well-being to nothing more than the maintenance of the integrity of the individual’s body.[ii]

In other words, relying too heavily on the effectiveness of inverted quarantine strategies, such as a portable air filter that allows parents to “opt out” of the consequences of air pollution for their children, may lead those parents to believe that this is enough.

But an air filter all on its own will never be enough. Only sustained collective effort to control air pollution – by regulating emissions, developing cleaner fuels and industrial processes, living more efficiently, and so forth – will ever attack air pollution at its source.

And the same difficulty holds true for inverted quarantine responses in general. They offer partial stop-gap solutions to persistent problems that will only get worse over time. Moreover, even those partial solutions are only available to some people – others won’t be able to opt out, most likely because they can’t afford to.

None of this is to say that someone living in an area with high air pollution absolutely should not buy a Strollair for their child. But it is to say that before they do so, they should spend some time learning about the root causes of pollution and its unequal impacts on people by wealth, race, age, sex, etc. And they should take the time to learn about, and ideally get involved in, true collective responses to air pollution that other people are working on to improve the situation for everyone.

The environmental justice movement in particular would be a good point of engagement. There are many different groups and organizations working toward environmental justice in various ways. I think a good example is the California Environmental Justice Alliance, which has a powerful mission statement that demonstrates the collective will to action which must complement the self-protective reaction inherent in inverted quarantine:

We unite the powerful local organizing of our members in the communities most impacted by environmental hazards – low-income communities and communities of color – to create comprehensive opportunities for change at a statewide level. We build the power of communities across California to create policies that will alleviate poverty and pollution. Together, we are growing the statewide movement for environmental health and social justice.

Many pragmatic arguments could be made to support the conclusion that social movements such as environmental justice are good for individuals in the long run (I could appeal to game theory, for example). Szasz offers many examples in his book, most a variation on the theme that you can’t hide from your problems forever – at some point, the state of the world will devolve to the point where no one will be able to buy their way to safety.

But I’d rather end on a different note, an emotional and moral appeal to collective action to match the explicit emotional appeal of the Blue Skies Strollair.

In short, working together feels good and feels empowering. There is a lot to be said for balancing fear and fatalism – the emotions that make people reach for inverted quarantine – in the face of collective threats with love, community, and hope – the emotions that give people the resolve and strength to overcome their differences and work together on big solutions. Keeping sight of a purpose larger than ourselves can help us keep our self-protective urges in proper proportion.

As I was writing this post, I pulled out my copy of Robert Bullard’s landmark treatise on environmental justice, Dumping in Dixie. I have the third edition, and in the preface Dr. Bullard writes a line that encapsulates precisely what I mean when I refer to a purpose larger than ourselves. I’ll conclude with his words: “I carried out this research under the assumption that all Americans have a basic right to live, work, play, go to school, and worship in a clean and healthy environment”.[iii]

 


[i] Szasz, Andrew. 2007. Shopping Our Way to Safety: How we changed from protecting the environment to protecting ourselves. University of Minnesota Press. p. 2-3.

[ii] Ibid, p. 222.

[iii] Bullard, Robert D. 2000. Dumping in Dixie: Race, Class, and Environmental Quality. Third Edition. Westview Press: Boulder, CO. p. xiii.


Contradictions, consequences and the human toll of food safety culture

I recently published an article (abstract below), with my colleagues Christy Getz and Jennifer Sowerwine, on the problems with a growing trend for governing pathogens in our food supply: food safety culture. I’ll take a moment to explain what that is.

The primary problem in food safety governance is that our food travels a long way from the field to our plates, changing hands (and jurisdictions) multiple times on its journey. This makes it hard for consumers to know precisely what has happened to their food along the way, and in particular whether something bad happened that makes the food dangerous to eat. So, quite reasonably, people want assurance that their food is safe. But how to get this assurance?

In theory, the food industry should have plenty of incentives to provide safe food. Operators who cut corners and sell food that makes people sick should, again in theory, face negative repercussions: bad PR, loss of market share, lawsuits, and even criminal prosecution in extreme cases. However, given the complexity of the food chain, the sheer number of different foods that people eat every day (which are often mixed together), and the lag time between eating contaminated food and actually getting sick from it, in many cases a sick consumer cannot determine who is at fault. Even when they can, the damage has often already been done.

For these reasons, this reactive approach to governing food safety does not really provide sufficient reassurance that our food supply is safe. So in the US, we also have preventive strategies that try to avert food safety problems before they arise. Government agencies have rules and standards for farmers, packers, shippers, wholesalers, retailers, restaurants and all the other people who grow our food and get it to us. Examples of preventive standards include good agricultural practices (GAPs) for farmers and good manufacturing practices (GMPs) for food handlers and processors. But the food distribution system is huge, and the government’s surveillance capacity is limited (constant monitoring is very expensive). Just like with reactive sanctions, preventive rules provide only partial reassurance. Sometimes, that’s not good enough. One simply needs to look at the persistent occurrence of outbreaks of E. coli, Salmonella, norovirus, or other foodborne pathogens to see the limits of this system.

So the “captains” of the food industry — the largest retail and foodservice chains — came up with another method, dubbed “food safety culture” by Walmart’s vice-president for food safety. As we explain in our article (which is unfortunately behind a paywall), this strategy encourages each individual worker throughout the food supply chain to take continuous and personal responsibility for the safety of the food that passes through their hands. Every worker, manager, and owner is supposed to be ever-vigilant and should continuously seek to improve the safety of food. Food safety culture is a way of delegating the oversight work that would otherwise need to be done by government or a private third party onto workers themselves, and on the surface would seem to promise a more efficient and comprehensive approach to ensuring that food is safe.

However, our research showed that the primary motivation behind the spread of this strategy is as much about protecting the reputation of powerful brands and shielding them from liability as it is about the public health mission to prevent foodborne illness. Food safety culture is, in many important ways, about preserving the “illusion of safety” amidst a continual cycle of outbreak-related crises and government- or industry-led reforms. Unfortunately, the question of whose safety? gets obscured in the shuffle. And this can carry a very real human toll in the form of increased anxiety and stress for line workers across the food system who must shoulder the burden of protecting (primarily wealthy) consumers from the spiraling problems of industrially mass-produced food. At the same time, narrow scrutiny of foodborne pathogens — which are one symptom of our industrial food system — distracts from understanding the root causes of “food-system-borne disease”.

So while clearly we as a society should work to minimize foodborne illness, we must also balance our efforts to do so against the many other threats to our health and well-being posed by the ways in which we grow, process, and distribute our food. The only way to do this is to critically and rigorously examine each and every form of governance, no matter how seemingly benign and no matter whether government-led or industry-driven, to suss out all the consequences, intended or not.

Article Abstract

In an intensifying climate of scrutiny over food safety, the food industry is turning to “food safety culture” as a one-size-fits-all solution to protect both consumers and companies. This strategy focuses on changing employee behavior from farm to fork to fit a universal model of bureaucratic control; the goal is system-wide cultural transformation in the name of combatting foodborne illness. Through grounded fieldwork centered on the case of a regional wholesale produce market in California, we examine the consequences of this bureaucratization of food safety power on the everyday routines and lived experiences of people working to grow, pack, and deliver fresh produce. We find that despite rhetoric promising a rational and universal answer to food safety, fear and frustration over pervasive uncertainty and legal threats can produce cynicism, distrust, and fragmentation among agrifood actors. Furthermore, under the cover of its public health mission to prevent foodborne illness, food safety culture exerts a new moral economy that sorts companies and employees into categories of ‘good’ and ‘bad’ according to an abstracted calculation of ‘riskiness’ along a scale from safe to dangerous. We raise the concern that ‘safety’ is usurping other deeply held values and excluding cultural forms and experiential knowledges associated with long-standing food-ways. The long-term danger, we conclude, is that this uniform and myopic response to real risks of foodborne illness will not lead to a holistically healthy or sustainable agrifood system, but rather perpetuate a spiraling cycle of crisis and reform that carries a very real human toll.

(If you would like a copy of this article, please contact me.)


Machines and Workers: The deceptive framing of automation

I was recently quoted in an article in MUNCHIES on the incoming US Secretary of Labor, Andrew Puzder. Puzder has (notoriously) argued that government regulations (to protect the welfare and rights of workers) drive the cost of labor up, which forces employers to automate their businesses by replacing human workers with machines. He has quipped of machines, “They’re always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

In the article, I responded by pointing out how “Puzder is implying that deregulation will slow or halt automation by keeping labor cheap. Notably, the only two options Puzder presents for workers are (1) low-paying, uncertain, and exploitative employment or (2) no employment.”

This damned-if-you-do, damned-if-you-don’t analysis of mechanization, automation, and robotization has plagued workers facing technological developments since the industrial revolution. And it is still a major problem today. Especially with the increasing prevalence of Big Data and advances in artificial intelligence, we’re looking at a future in which not just manual jobs—such as picking heads of lettuce or riveting widgets—but also desk jobs like legal work will be in danger of replacement by computerized machines. President Obama even went so far as to warn of the dangers of automation in his farewell address: “The next wave of economic dislocations won’t come from overseas,” he said. “It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete.” The question is, of course, what can we do about it? Are we doomed to be stuck in Puzder’s de-humanizing catch-22 scenario?

It just so happens that this is a problem I’ve recently been researching. Together with Prof. Alastair Iles and several undergraduate student researchers, I’ve been examining cases of machine labor replacing, or threatening to replace, human labor in agriculture. In particular, we’re looking at the rise of mechanical harvesters for vegetable, fruit, and nut crops during the post-war years. While we expect the research to continue for several more years, a few important points on automation are already clear. I’ll outline them in the rest of this blog post.

1. Machine labor is not the same as human labor

Historically, machines have outperformed humans when it comes to performing the same exact task over and over again. This is an area in which people do not excel. As I alluded to in my comment on Puzder, trying to compete with machines dehumanizes human workers. Americans have a national fable about a heroic individual, John Henry, out-working a steam-powered drilling machine. However, the feat costs John Henry his life, whereas the machine presumably went on to a long “career” driving steel for the railroad magnates.

The problem highlighted in the fable is the industrial standardization of production. Machines only “work” in a standardized, scripted environment; humans, by contrast, are flexible and adaptive workers, capable of performing many complex tasks depending on the situation. One deception of the catch-22 is that people and machines must inevitably compete with one another in a zero-sum game, even though the actual labor is very different.

2. Machines change the nature of production and the nature of what is produced

It’s not just that machines are “naturally” better suited to perform many tasks than are people, but rather that machines actually shape the nature of work, the working environment, and the product itself to better suit their standard requirements.

To take an example from our research, when California lettuce farmers in the 1960s were considering switching to mechanical harvesters, they ran into a problem: their fields and their plants were too variable and heterogeneous for the machines. Whereas human workers could differentiate between healthy, mature heads of lettuce and damaged or immature plants, a machine could not. Human harvesters likewise could handle humps and dips or other quirks of the field and furrows, but a machine needed perfectly level, uniform, and even rows to cut the heads.

In an effort to accommodate the envisioned mechanical harvesters, farmers, breeders, farm advisors, and agronomists set out to craft a new form of farming, what they called “precision agriculture.” The idea would be to create a perfectly level field of perfectly uniform rows populated by perfectly uniform crops that would all mature at precisely the same time. There would be no weeds, no irregularities, nothing to disrupt the machine’s ability to do exactly the same task over and over again. The field was made to suit the machine.

3. Mechanization is not automatic

New technologies are always embedded in an existing matrix of other technologies, behaviors, and organizational structures—what might better be referred to as a technological system. For a technological development to be truly transformative, to propagate and redefine a given production process, the rest of the system has to transform as well.

In the case of lettuce, the mechanical harvester was accompanied by a host of other technological, behavioral, and organizational changes. Breeders had to develop new seeds that farmers could use to shift toward a “plant-to-stand” (one seed, one plant) approach that would eliminate an entire production stage: thinning, the removal of unhealthy plants from the stand, a highly selective and irregular task not conducive to machine labor. At the same time, chemical herbicides were ushered in to eliminate the need for weeding, a form of chemically automating another form of selective labor. Lastly, farmers had to adapt to a once-over harvest. Whereas harvest crews comprising human laborers with hand knives could sweep a field multiple times over the course of several days to harvest lettuce that matured at different rates, a machine can only really harvest a field once, cutting most everything in its path in one pass.

The point is that mechanization is never “automatic”—ironically, it takes a lot of work to accommodate land, resources, businesses, and even consumers to the strict requirements of machine labor. Automation is a process that must be designed, guided, and even coerced into being. Importantly, this means that people determine how mechanization happens, and we have the power to do it differently.

4. A job is not an end in itself

In an era of rapid advances in artificial intelligence, machine learning, robotics, and other technologies that can perform human tasks, the fixation on “jobs” as the only metric of well-being is problematic. A lot of those jobs will no longer require a human to perform them, as McKinsey & Company’s new report on automation argues. The question we should be asking ourselves as a society is, do Americans really need to be doing tedious, repetitive tasks that can be left to a robot?

As long as life, liberty, and the pursuit of happiness—plus housing, food, healthcare, and so forth—are tied to whether a person has a “job” or not, the answer has to be “yes”. And as more and more machines come online that can perform human tasks, more and more human workers will have to compete against the robot, hypothetical or real, that threatens to take their job. This can only drive the value of labor down, meaning that Americans will continually be asked to work more for less. That’s not a sustainable trend, and it will only exacerbate our already high levels of socioeconomic inequality.

The United States is desperately in need of new public policies that can deal with this fundamental trend of working more for less. Basically, we need ways to ensure that the productivity gains generated by new technologies improve the lives of all Americans, not just the small percentage who happen to own a business and can afford the capital investment of the latest robots. Trying to preserve jobs as the sole route for people to improve their lives without changing the underlying pattern I’ve described above is a downward spiral that will harm many and benefit a few.

5. The problem with automation is one of distribution of wealth

Which leads to my penultimate point in this post. The problem is not that new technologies replace menial or repetitive jobs. It’s that they supplant peoples’ livelihoods, a concept which we should think of as a way of life rather than merely an occupation. That technologies can destroy people’s ways of life has been understood for centuries—it’s what spurred the Luddite movement during the early 19th century in England and also underpinned Karl Marx’s critique of capitalism some fifty years later. In many ways, communism was based on the idea that technological development is a good thing if the productivity gains are shared equally among everyone. While the practical implementation of that idea turned out to be oversimplified, the basic point—that new technologies raise questions about the distribution of wealth as much as they do about productivity gains—still applies today.

As farm worker movements in the 1960s and 1970s attested, the debate cannot focus on the simple ratio of input/output efficiency or even on jobs saved versus jobs lost. Those farm workers protested against mechanized harvesting and processing technologies in agriculture because they realized that all of the productivity gains would go to the farm owners, the patent holders, and the manufacturing companies. None would go to the people who had done the sweaty, back-breaking work in the fields that underpinned the entire agricultural industry and put the owners in the position to consider mechanization in the first place.

This is one of the reasons why people are starting to talk about a universal basic income, an idea which Robert Reich (himself a former Secretary of Labor under Bill Clinton) nicely outlined in a short video last fall. Basically, the idea is that if machines can do all the work to meet peoples’ needs, then people will not have jobs, and therefore will not have the money to buy the basic goods those machines produce. The universal basic income circumvents the problem by separating purchasing power from jobs, which has the effect of more equally distributing the productivity gains of automation such that people will actually be able to enjoy their lives.

6. Democratizing technology

I see the universal basic income as a solution to a symptom, the maldistribution of wealth. What we should be thinking about, if we want to address root-causes, is the maldistribution of power.

I had the opportunity to visit Immokalee, Florida a couple of years ago. Immokalee grows a large share of the nation’s fresh tomatoes, a delicate crop that is generally harvested by hand because machines tend to destroy or damage the fruit. Immokalee is also the location of some of the most recent cases of modern-day slavery—workers held in captivity and forced to work the fields. But it’s also home to an empowered workers movement, the Coalition of Immokalee Workers, which has fought for, and won, improvements in pay and working conditions for tomato harvesters.

On the visit, one Coalition spokesperson demonstrated how field workers harvest tomatoes. They walk through the fields filling up large buckets, which weigh around thirty pounds when full. It was a big deal for the Coalition when they bargained a penny-per-pound increase in the piece-rate compensation for pickers. As the spokesperson explained, the Coalition also achieved success in setting a new bucket-filling standard, under which workers would no longer be forced to fill their buckets over the rim, a practice that used to be required, but for which workers were not paid extra (see p. 32 of the 2015 Annual Report for a picture).

As the presentation continued, I was struck by the impressive intensity of the struggle around field workers’ rights, but also by the fact that the bucket itself was never questioned. Ergonomically, the buckets are terrible. They are plain plastic, with no handles or straps to ease lifting them onto the shoulder or help keep the bucket stable and distribute its weight once loaded. Why, I wondered at the time, does all the negotiating around the tomato picking seem to take these back-breaking buckets for granted? Is there no way to design a better tool for collecting the tomatoes and hauling them out of the field?

Of course there is, but that’s not how we tend to think of technological development. Which is why I argue that it’s not just about where the money goes after the fact, but about who has a say in how technologies are developed before the fact. The direction of technological development is open and changeable: technological development can be democratized. Spreading out the power to drive technological development will be the route to designing machines first to improve the conditions of labor and ways of life, and only second to increase productivity.

Our ongoing research into the history of agricultural mechanization is motivated by a desire to understand how and why power over technological development was consolidated in the first place, in order that we might understand how to spread that power out moving forward.


Dissertation Complete

Yesterday I finalized my dissertation, titled Ordering People and Nature through Food Safety Governance. It has been quite a long journey, and despite recent societal turbulence it is deeply satisfying to wrap up this chapter in my career.

Here is the abstract:

We are constantly reminded that eating fresh fruits and vegetables is healthy for us. But in the face of repeated outbreaks of foodborne illness linked to fresh produce, whether these foods are safe for us has become an entirely different, and difficult to answer, question. In the name of food safety, both government and industry leaders are adopting far-reaching policies intended to prevent human pathogens from contaminating crops at the farm level, but these policies meet friction on the ground. Through a case study of the California leafy greens industry, this dissertation examines the web of market, legal, technological, and cultural forces that shape how food safety policy is crafted and put into practice in fields.

Controlling dangerous pathogens and protecting public health are not the only goals served by expanding food safety regulation—food safety also serves to discipline and order people and nature for other purposes. Private firms use the mechanisms of food safety governance to shift blame and liability for foodborne pathogens to other sectors or competitors and to secure a higher market share for themselves. Food safety experts, capitalizing on the lack of available science upon which to base standards, carve out for themselves a monopoly in setting and interpreting food safety standards. And government agents wield their expanded policing powers primarily to make examples of a few bad actors in order to shore up public confidence in the food system and the government’s ability to protect its citizens, but fail to address underlying structural causes.

Zealous fixation with driving risk of microbial contamination toward an always out-of-reach “zero” draws attention away from the systemic risks inherent in the food system status quo and stifles alternative pathways for growing and distributing food, raising thorny complications for diversifying—ecologically, economically, or culturally—our country’s food provisioning system. The narrow scope of existing food safety policy must be broadened and developed holistically with other societal goals if the future of US agriculture is to be sustainable and resilient in the long term.

I will not post the full dissertation here, but am happy to share a PDF copy for anyone who is interested. Just send me an email.


Guest blog post for National Sustainable Agriculture Coalition

I wrote a guest blog post along with several colleagues for the National Sustainable Agriculture Coalition summarizing the findings of our recently published research article, Inconsistent food safety pressures complicate environmental conservation for California produce growers. The paper is freely available to the general public.

We discuss how the complex patchwork of rules, standards, audits, and other requirements to “enhance” food safety in produce agriculture puts inconsistent and problematic pressure on farmers. These intense pressures can make farmers feel that they must adopt environmentally damaging practices to be extra safe. We are particularly concerned that, in the name of food safety, many farmers are trying to prevent wildlife from entering farm fields by setting poison bait, removing habitat, and installing extensive fences.

Recent research, however, shows that these practices do not make food safer, and may even increase the risk that pathogens will contaminate crops in the field. Conservation and safety, in other words, can be practiced together on farms.


The Unintended Ecological and Social Impacts of Food Safety Regulations in California’s Central Coast Region

My paper with Daniel Karp and co-authors just came out today in the journal BioScience. In it, we show the complex linkages that tie people together with nature, often through surprising and indirect routes. Supply chains, disease surveillance, regulations, farmer decisions, and ecosystem services like pest control or soil fertility all play a role in “the cascading consequences of a foodborne disease outbreak,” as we show in our conceptual diagram:

Cascading Consequences

I apologize that the paper itself is behind a paywall. That’s the reality of academic publishing these days, though I keep my fingers crossed that the recent upsurge in open access journals is signaling a paradigm shift. In any case, you can at least read the abstract:

In 2006, a multistate Escherichia coli O157:H7 outbreak linked to spinach grown in California’s Central Coast region caused public concerns, catalyzing far-reaching reforms in vegetable production. Industry and government pressured growers to adopt costly new measures to improve food safety, many of which targeted wildlife as a disease vector. In response, many growers fenced fields, lined field edges with wildlife traps and poison, and removed remaining adjacent habitat. Although the efficacy of these and other practices for mitigating pathogen risk has not been thoroughly evaluated, their widespread adoption has substantial consequences for rural livelihoods, biodiversity, and ecological processes. Today, as federal regulators are poised to set mandatory standards for on-farm food safety throughout the United States, major gaps persist in understanding the relationships between farming systems and food safety. Addressing food-safety knowledge gaps and developing effective farming practices are crucial for co-managing agriculture for food production, conservation, and human health.


Tensions Between Safety and Sustainability in the Field

No Animals, Food Safety Violation II

I wrote a short blog post on friction between food safety and the environment (particularly animals) for my department’s website last week. It’s based on a recent trip I took to conduct fieldwork in the Imperial Valley, California and Yuma, Arizona — two of the most important produce-growing regions in the country. I met with vegetable growers and food safety auditors in both states, and even got to tag along on a night harvest of baby spinach.


This sort of fieldwork is both exhilarating and exhausting. I get the chance to meet people whose occupations most of us hardly know exist, let alone have any sense of what that work entails, and to visit places which likewise don’t make it onto the radar. At the same time, making the most of fieldwork means being “on” all the time, ready to absorb and sift information with all sensory channels open and receiving. At the end of each day, I have to be able to tell a story, to weave the events and observations and impressions into a coherent narrative in my field notes. These notes are critical records for me later — sometimes years later — when I have to synthesize and write up my final report for publication. So I thought it might be interesting to provide a short example of what my field notes look like. Here’s a sample, verbatim with no edits, of what I wrote about my trip to the night harvest.

I spoke with the food safety manager, whom I met about 5:30 in the evening, and two foremen whose crews were harvesting that night. The crews begin about 5 in the evening (when it starts to cool off), and work for 8-10 hours, sometimes a bit less like this night, when the manager thought they would wrap up by about midnight. These crews are maybe 10-13 workers (even fewer if the greens are packed in large bins rather than the relatively small totes, due to the labor of packing). Crews for harvesting whole product, such as head lettuce or romaine hearts, can be much larger, on the order of 20-50 workers (since these have to be harvested by hand).

We met at the staging area, where the foremen (mayordomos) and tractor/harvester drivers park and meet (the crews show up directly at the field, later). The foremen and drivers are skilled workers, usually with many years of experience both on the line and running a crew, and are employed year round by the harvester (who is based out of Salinas, and also does growing). Several of the cars had boxes of spring mix in the front seats—already washed, the kind that goes out to retail or foodservice. I asked the manager about it, and he said that the buyers regularly send the crews a pallet of product, as sort of a thank you to the workers (for reference, a crew might harvest about 30 pallets in one night). I later had a conversation with one of the foremen about spinach, and a gleam came into his eyes as he started recounting all of the delicious meals his wife cooks with the spinach. When we finished the visit at that field and were saying goodbye, he gave me a 2-lb. box of organic spring mix to take home with me.

Before I could go into the field, I had to observe the requirements of the visitor SOP. I washed my hands vigorously with soap and water (supposed to be 20 seconds, or the time it takes to sing Happy Birthday), donned a hairnet and a reflective vest, and removed my watch (to my pocket). The harvest crew was moving away from us, and we walked across the already harvested beds (which no longer mattered since they were finished for the season). Sometimes they do prep beds for a second pass harvest, especially in times of high demand, but it was unclear how that changes the treatment of the harvested beds.

For the baby greens – e.g. spring mix or spinach – the product is mowed up with a large harvesting machine, which draws the cut leaves onto a conveyor belt. The leaves are carried up to the back of the machine, where several workers stand on a special platform. They are arranged as on a factory line, which begins on the trailer being pulled along by a tractor parallel to the harvesting machine. A couple of workers on the trailer prepare empty bins and place them on the start of the conveyor belt. The bins pass along to the 2-3 workers (apparently generally women) who gather the leaves as they flow over the lip of the conveyor belt and pack them loosely in the totes. The packed totes are then conveyed back to the trailer where 3-4 workers cover, stack and secure them.

The harvesting machine has an anti-rodent sound machine affixed to the front, near the blade and the headlights. The machine emits a sort of throbbing high-pitched chirp that is supposed to make any rodents who are nearby run away. The manager says he tested it out at the house of a friend who had a mouse problem, and it seemed effective there. There are also two workers who walk between the beds ahead of the machine, to look for any signs of animal activity or other problem (like a bit of trash) that would require the harvesters to skip a section. That said, according to the manager, it is very rare to see animals in the field during the actual growing season—in his telling, all the people who are around the fields all the time during the growing season keep them off. It is much more common to see animals in the off-season, when the fields are left alone for a while.

As a final note, I will say that the two pounds of spinach were a real boon to me in a time of need. After the harvest, I faced an hour drive back to Yuma, arriving near midnight. There was no food and nothing open, let alone with vegetarian options, so I ate lots and lots of salad.

Salad greens


Common Sense, Science and Government Part III: Manufacturing the sweet tooth

I ended the last post with the idea that making policy and engaging in government is a process of shaping common sense. The reason is that government, unlike direct rule, relies upon the consent of the governed (government must ‘go with the grain’). People generally consent to live under a particular social order when that order seems perfectly natural and normal; consent is most assured at the point when people can’t envision an alternative way of doing things, or shrug their shoulders and lament “that’s just the way it is.” In this way, consent to be governed a certain way by a certain set of people is grounded in the terrain of common sense.[i] Without consent, there can be no government.[ii]

In the best case, the need for consent produces a “government of the people, by the people, for the people”, as President Lincoln famously proclaimed in the Gettysburg Address. In the ideal, democracy bubbles up from the grassroots: the citizenry consents to the rule of law because they define the law, and trust that the state they have chosen to implement that law serves their best interests. In other words, in an ideal case the consent of the governed is an active consent, predicated on the assumption that the law is a fair application of good sense to common public problems.

However, the populace can also passively consent to a government imposed from the top down. Thoughtful public deliberation can be bypassed altogether if an agenda of government can be made to fit smoothly within the existing framework of common sense. As we know from Gramsci, common sense is inherently dynamic. It changes and adapts over time through chance and the aggregated choices of individuals, but it may also change to accommodate new realities of life (e.g. novel technologies or occupations) or through intentional manipulation (e.g. media, education, propaganda). Here’s how David Harvey puts it:

What Gramsci calls ‘common sense’ (defined as ‘the sense held in common’) typically grounds consent. Common sense is constructed out of longstanding practices of cultural socialization often rooted deep in regional or national traditions. It is not the same as the ‘good sense’ that can be constructed out of critical engagement with the issues of the day. Common sense can, therefore, be profoundly misleading, obfuscating or disguising real problems under cultural prejudices. Cultural and traditional values (such as belief in God and country or views on the position of women in society) and fears (of communists, immigrants, strangers, or ‘others’) can be mobilized to mask other realities.[iii]

More importantly, these elements of common sense can be mobilized to mask the redistribution of benefits and burdens, to advantage some at the expense of others. But that’s a lot of abstraction to begin with, so I’ll turn now to some concrete examples to try to paint a clearer picture of the dangers inherent in fiddling with common sense (as counterpoint to the previous post, in which I argued for the dangers inherent in not fiddling).

In the rest of this post and the next, we will look at two parallel historical cases in which people changed their eating habits abruptly for reasons largely beyond their immediate control. For the remainder of this post, we will look at the development of a sweet tooth among the British working classes in the 17th through 19th centuries and the ways in which this dietary shift dovetailed with consent to an industrial capitalist mode of organizing people’s relationship with nature. In the next post we will look at the introduction of white bread to the American middle class at the turn of the 20th century, and the ways in which the store-bought loaf acclimated Americans to the idea that experts know best how to organize relations between people and nature.

Habituating to Sweetness and Consenting to Industrial Capitalism

In his classic treatise Sweetness and Power, Sidney Mintz takes a deep look at the deceptively simple idea of the ‘sweet tooth’. While today we often take it as fact that people like sweet foods, even to the point of self-harm, societal relationships with sweetness vary widely across both geography and time.[iv] At a time when the ubiquity of sugar in our diets is under intense scrutiny, even in the UK[v] (the birthplace of the modern sweet tooth, as we’ll see), the irony that this problem was intentionally engineered is especially striking.

Just a few centuries ago, concentrated sweetness such as sugar was rare and expensive, and most people didn’t have it or even realize that they might want it.[vi] Such was the case in medieval England, where merchants sold sugar, at exorbitant prices, as a prized luxury only the very rich and powerful could afford.[vii] Between the mid-17th and mid-19th centuries, however, sugar experienced a reversal of fortunes, going from the spice of kings to the common man’s fare in a mere two centuries; by 1900, sugar accounted for 20% of dietary calories in England. “What turned an exotic, foreign and costly substance into the daily fare of even the poorest and humblest people?” asks Mintz. What he is trying to understand is a sea change in common sense about food, the ways in which people, seemingly out of the blue, became “firmly habituated” to eating sugar and consuming sweetness.

Unraveling the puzzle takes close attention to the everyday ways in which people decide what to eat and the political, economic, health and environmental repercussions of diet. Toward this end, Mintz breaks his main arguments into three sections, titled Production, Consumption, and Power. From their origins in the Orient, sugar plantations slowly spread to the Arab empire and eventually to the Mediterranean and, by the 17th century, to the New World. The salient point is that sugar plantations pioneered industrial and capitalist forms of organizing production and labor long before the start of the Industrial Revolution and the advent of capitalism (at least, so far as these things are conventionally dated by historians). During the late 1600s and early 1700s, plantations in the West Indies combined the field and the factory in one centralized operation designed to maximize output of a single commodity for export, with the single-minded goal of reaping large, rapid profits for absentee owners and investors back in England and on the continent (these English speculators kept their personal costs down by using slave labor).[viii] The following images, dating from the 17th to 19th centuries, illustrate how “factory and field are wedded in sugar making.”

Sugar boiling house

“Most like a factory was the boiling house,” writes Mintz (p. 47), who in addition to this print, attributed to R. Bridgens c. 19th-century (courtesy of the British Library), included the following descriptive passage from a plantation owner in Barbados, describing the boiling house c. 1700: “In short, ‘tis to live in perpetual Noise and Hurry, and the only way to Render a person Angry, and Tyrannical, too; since the Climate is so hot, and the labor so constant, that the Servants [or slaves] night and day stand in great Boyling Houses, where there are Six or Seven large Coppers or Furnaces kept perpetually Boyling; and from which with heavy Ladles and Scummers they Skim off the excrementitious parts of the Canes, till it comes to its perfection and cleanness, while other as Stoakers, Broil as it were, alive, in managing the Fires; and one part is constantly at the Mill, to supply it with Canes, night and day, during the whole Season of making Sugar, which is about six Months of the year.”

Sugar mill in the Antilles, 1665

A sugar mill belonging to Phillippe de Longvilliers de Poincy, from: Charles de Rochefort. Histoire naturelle et morale des iles Antilles de l’Amérique. A Roterdam: chez Arnould Leers, 1665 [FCO Historical Collection, via King’s College London].

Digging the Cane-holes

“Digging the Cane-holes”, in Ten Views in the Island of Antigua (1823), plate II – BL. By William Clark, via Wikimedia Commons. More images from Ten Views are available at Wikimedia Commons.

Importantly, the plantations initiated an exponential growth in sugar production before demand existed to consume all that sweetness. This posed something of a problem, for the many new goods produced by exploiting people and nature in the colonies of the British Empire and its peers—including also coffee, tea, and cocoa—threatened to swamp the limited commodity markets back in Europe. What was needed was a rapidly expanding consumer demand for all of these new goods, and Mintz points out that this is exactly what happened with the widespread transformation of sugar from “costly treat into a cheap food”. To make a long and very detailed chapter short, the use of sugar as a status symbol among the rich and powerful percolated (with encouragement) down to the lower classes, who sought to emulate their social superiors. At the same time, the uses of sugar in diet diversified, especially through synergies with other plantation commodities (chocolate, tea, and coffee) and through displacing other traditional staples that people no longer produced for themselves (because they now spent their time working in factories instead of farms). For example, people who couldn’t afford butter (and no longer had access to cows to produce their own) could instead eat jams and preserves with their daily bread.

At the same time as the working classes were accepting sugar as food, the powerful—first aristocrats and merchants, and later the rising industrial and trade capitalists—were also adjusting their relationship to sugar. From a simple vehicle for affirming social status through direct consumption, sugar came to be seen and used as a vehicle for accumulating wealth and solidifying the British nation (and empire). In a paradigm shift in contemporary attitudes toward consumption, it was during this period that political economists first recognized that demand didn’t have to remain constant (tied to existing subsistence levels), but rather that it could be elastic.[ix] Not only did this realization mean that capitalists could extract greater effort from laborers, who “worked harder in order to get more”, but it unleashed the hitherto unanticipated growth engine of working-class purchasing power, providing a ready sponge to soak up increasing commodity production and make owners an obscene amount of money.

So all of these things happened at about the same time: sugar production boomed, capitalists made lots of money, basic foods were no longer produced at home, and people developed a taste and preference for sweetness. Rejecting the idea that this convergence was mere coincidence, Mintz contends that these events are related through the intricate dance of power to bestow meaning on vegetable matter, to transform a simple reed into food, and thence into a pillar of empire and the birth of capitalism. Power is a slippery concept in the absence of overt force or coercion; it raises the question of who ultimately holds the reins of common sense, and thus steers the vast course of social organization. Such power is very difficult to observe directly, and does not necessarily fit tidily into bins of ‘cause’ and ‘effect’. So Mintz instead turns to indirect evidence in the form of motive: who profited and who didn’t from the new British sweet tooth?[x]

While “there was no conspiracy at work to wreck the nutrition of the British working classes, to turn them into addicts, or to ruin their teeth,” clearly widespread use of sugar as food benefited the sugar plantation owners, and also those who ran and operated the wheels of empire. It benefited manufacturers by making factory workers and their families dependent upon their jobs and wages to buy the new imported food goods so they could continue living. Mintz, through careful anthropological interpretation, shows that the common people had no more free will in what to consume than they did in how to produce (i.e. by selling their labor power for wages): “the meanings people gave to sugar arose under conditions prescribed or determined not so much by the consumers as by those who made the product available” (p. 167). Though the mostly trivial webs of meaning spun by individuals lead us to believe in free choice in the marketplace, observation shows that our small individual webs of meaning are contained in and subsumed by “other webs of immense scale, surpassing single lives in time and space” (p. 158). Whoever can gain control of the shape and direction of these larger webs—such as common sense—can gain control over the mass of the people in a way that is not readily recognizable.

 


[i] I take this basic point from David Harvey’s chapter on “The Construction of Consent” in A Brief History of Neoliberalism, p. 39-41.

[ii] Out of concern for space, I have grossly abbreviated the relationships among common sense, consent of the governed, and government. I wanted to note here that Foucault’s last writings, e.g. History of Sexuality, Vol. II & III, deal extensively with the idea of ethics, or “techniques of the self”. In a way, an ethic describes the rules that people use to regulate, or govern, their own personal behavior. If we want to talk about a government that rules with the grain, then it has to be a government that engages with these personal ethics—consent of the governed, then, can also be construed as the alignment of individual ethics with government, of techniques of the self with techniques of discipline (the relationship of ruler to subject). Given that personal ethics are informed as much by normalized (i.e. taken for granted) habits and patterns of behavior as by rational thought and decisive action, common sense can also be taken to describe the terrain of ethics to which the populace subscribes.

[iii] David Harvey, A Brief History of Neoliberalism (Oxford University Press 2005), p. 39. This paragraph is from his chapter on “The Construction of Consent”, to explain why people have accepted the ‘neoliberal turn’ in global governance, which basically holds to the philosophy that the social good is maximized by funneling all human relations through the mechanism of a market transaction, even though many of the policies pursued under this program have demonstrably negative effects on the well-being of hundreds of millions of people while simultaneously lining the pockets of a far smaller set.

[iv] “There is probably no people on earth that lacks the lexical means to describe that category of tastes we call ‘sweet’… But to say that everyone everywhere likes sweet things says nothing about where such taste fits into the spectrum of taste possibilities, how important sweetness is, where it occurs in taste-preference hierarchy, or how it is thought of in relation to other tastes.” Mintz, Sidney. 1985. Sweetness and Power: The Place of Sugar in Modern History. New York, N.Y.: Penguin Books, pp. 17-18.

[v] The BBC, for example, ran a lengthy series of stories on sugar throughout 2014: in March there was “WHO: Daily sugar intake ‘should be halved’”, in June there was “How much sugar do we eat?”, and in September there was “Sugar intake must be slashed further, say scientists”. And just this week (Jan. 5, 2015), the BBC ran “Cut back amount of sugar children consume, parents told”. Today, the entire ethics (see endnote ii) and government of sugar consumption are changing together, and more consciously than perhaps at any previous point in history.

[vi] Coincidentally, after I had already composed most of this post, I saw the BBC documentary Addicted to Pleasure, which first aired in 2012. Hosted by actor Brian Cox, who himself suffers from diabetes and must carefully manage his personal sugar intake, the documentary covers much of the story told by Mintz, albeit minus most of the scholarly critique of colonial exploitation and oppression, and of the exploitation of working-class people.

[vii] In the 16th century, the English royalty were noted for “their too great use of sugar”, which was used to demonstrate wealth and status at court—Queen Elizabeth I’s teeth, for example, turned black from eating so many sweets. Mintz, p. 134.

[viii] Mintz, p. 55 and 61. The classic definition of capitalism requires well-developed markets, monetary currency, profit-seeking owners of capital, the alienation of consumers from intimate involvement in production, and ‘free’ labor (i.e. not slaves or land-bound peasants, but rather workers paid in wages). The sugar plantations of the mercantile colonial period fit some but not all of these criteria.

[ix] Mintz is careful to demonstrate how political economists changed their thinking on the role of consumers in national economies. Whereas mercantilists had assumed national demand for any given good to be more or less fixed, a new wave of capitalist thinking held that demand could increase by enrolling the people of a nation more completely in market relations—people should no longer subsist on the goods they produce for themselves, but should get everything they consume through the market (note that this thinking forms a direct connection between personal ethics and social organization, or government). In return, they should sell their labor on the market as well. See pp. 162-165.

[x] “To omit the concept of power,” Mintz writes, “is to treat as indifferent the social, economic, and political forces that benefited from the steady spread of demand for sugar… The history of sugar suggests strongly that the availability, and also the circumstances of availability, of sucrose… were determined by forces outside the reach of the English masses themselves.” p. 166.
