
The Blue Skies Strollair and the Challenge to Collective Action

A good friend of mine, Jason Munster, has a PhD in Environmental Engineering. In his doctoral program, he researched the atmospheric chemistry related to measuring air pollutants, which is critical to understanding climate change. Or, in his own words, “I build instruments to measure processes related to climate change.”

Recently, Munster applied what he learned about how water droplets in clouds absorb pollutants to design a personal, transportable air filter that removes several hazardous criteria air pollutants – NO2, SO2, and particulate matter (PM).

He founded a company, Blue Skies, to manufacture, market, and sell this air filter primarily to parents of young children, in the form of a device that attaches to strollers and car seats to create a pocket of cleaner air around the child. Infants and toddlers are particularly vulnerable to the toxic effects of these pollutants, which have been linked to asthma among other illnesses, and Blue Skies markets itself as a champion of children’s health:

Our mission is to reduce asthma and deaths from air pollution worldwide. In developed countries, we plan to carry out the first-ever trials of the benefits of reducing ambient pollution exposure in children. In developing countries, we aim to save lives.

The Blue Skies filter, which he has named the Strollair, is now the subject of a fundraising campaign on Indiegogo, where backers can preorder the device for half of its eventual retail price of $300.

I’ve written this blog post both to acknowledge Jason’s efforts in developing this protective device, which I have every confidence works as advertised, and also, as is my wont, to use the Strollair as a conversation starter. This personal air filter presents a tidy encapsulation of a momentous conundrum facing our society – from the local to the global scale – right now:

Do we fight for long-term, collective, and systematic economic and political change that will benefit all people eventually?

Or, do we take short-term action (if we can afford it) to individually protect ourselves and our families from the worst symptoms of a rapidly degrading global biosphere that industrial commodification has pushed past its limits?

The Strollair is an almost textbook example of an inverted quarantine, a concept from sociologist Andrew Szasz that I discussed in a post from several years ago. To recap, what Szasz is talking about is a widespread social phenomenon in which people, especially those who think they are affluent or self-reliant enough to handle every problem on their own, pursue “an individualized response to a collective threat”, which he notes is “the opposite of [a] social movement”:

There is awareness of hazard, a feeling of vulnerability, of being at risk. That feeling, however, does not lead to political action aimed at reducing the amounts or the variety of toxics present in the environment. It leads, instead, to individualized acts of self-protection, to just trying to keep those contaminants out of one’s body.[i]

I think now, in this political moment more than ever, it’s vitally important for Americans to be aware of and critically reflect on this tendency to stop at saving one’s self, because this reaction is increasingly pervasive. We see it everywhere from education to infrastructure to food, and that’s a potential problem. The good news is that we also see renewed calls for long-term solutions that depend on collective action and intense cooperation, such as the growing movement for a single-payer healthcare system or global attempts at cooperative agreements to rein in greenhouse gas emissions and slow climate change.

This post is not a criticism of people who want to take action to protect themselves or their families. That’s a perfectly rational response, Szasz notes, especially “if one feels that there is nothing to be done, that conditions will not change, cannot be changed”, or at least cannot be changed fast enough to make a difference. But we must acknowledge that this is a form of fatalism, which can sap the strength from our creative aspirations and enervate the political will to take control of our situation and build the world we want to live in.

There is also the problem of the cognitive blinders that practicing inverted quarantine can reinforce.

Inverted quarantine is implicitly based on denial of complexity and interdependence. It mistakenly reduces the question of an individual’s well-being to nothing more than the maintenance of the integrity of the individual’s body.[ii]

In other words, relying too heavily on the effectiveness of inverted quarantine strategies – such as a portable air filter that allows parents to “opt out” of the consequences of air pollution for their children – may lead those parents to believe that this is enough.

But an air filter all on its own will never be enough. Only sustained collective effort to control air pollution – by regulating emissions, developing cleaner fuels and industrial processes, living more efficiently, and so forth – will ever attack air pollution at its source.

And the same difficulty holds true for inverted quarantine responses in general. They offer partial stop-gap solutions to persistent problems that will only get worse over time. Moreover, even those partial solutions are only available to some people – others won’t be able to opt out, most likely because they can’t afford to.

None of this is to say that someone living in an area with high air pollution absolutely should not buy a Strollair for their child. But it is to say that before they do so, they should spend some time learning about the root causes of pollution and its unequal impacts on people by wealth, race, age, sex, etc. And they should take the time to learn about, and ideally get involved in, true collective responses to air pollution that other people are working on to improve the situation for everyone.

The environmental justice movement in particular would be a good point of engagement. There are many different groups and organizations working toward environmental justice in various ways. I think a good example is the California Environmental Justice Alliance, which has a powerful mission statement that demonstrates the collective will to action which must complement the self-protective reaction inherent in inverted quarantine:

We unite the powerful local organizing of our members in the communities most impacted by environmental hazards – low-income communities and communities of color – to create comprehensive opportunities for change at a statewide level. We build the power of communities across California to create policies that will alleviate poverty and pollution. Together, we are growing the statewide movement for environmental health and social justice.

Many pragmatic arguments could be made to support the conclusion that social movements such as environmental justice are good for individuals in the long run (I could appeal to game theory, for example). Szasz offers many examples in his book, most a variation on the theme that you can’t hide from your problems forever – at some point, the state of the world will degrade so far that no one will be able to buy their way to safety.

But I’d rather end on a different note, an emotional and moral appeal to collective action to match the explicit emotional appeal of the Blue Skies Strollair.

Working together feels good and empowering. There is a lot to be said for balancing fear and fatalism – the emotions that make people reach for inverted quarantine – in the face of collective threats with love, community, and hope – the emotions that give people the resolve and strength to overcome their differences and work together on big solutions. In short, keeping sight of a purpose larger than ourselves can help us keep our self-protective urges in proper proportion.

As I was writing this post, I pulled out my copy of Robert Bullard’s landmark treatise on environmental justice, Dumping in Dixie. I have the third edition, and in the preface Dr. Bullard writes a line that encapsulates precisely what I mean when I refer to a purpose larger than ourselves. I’ll conclude with his words: “I carried out this research under the assumption that all Americans have a basic right to live, work, play, go to school, and worship in a clean and healthy environment”.[iii]

 


[i] Szasz, Andrew. 2007. Shopping Our Way to Safety: How We Changed from Protecting the Environment to Protecting Ourselves. University of Minnesota Press. pp. 2-3.

[ii] Ibid., p. 222.

[iii] Bullard, Robert D. 2000. Dumping in Dixie: Race, Class, and Environmental Quality. Third Edition. Westview Press: Boulder, CO. p. xiii.


Filed under Current Events, Musings

Contradictions, consequences and the human toll of food safety culture

I recently published an article (abstract below), with my colleagues Christy Getz and Jennifer Sowerwine, on the problems with a growing trend for governing pathogens in our food supply: food safety culture. I’ll take a moment to explain what that is.

The primary problem in food safety governance is that our food travels a long way from the field to our plates, changing hands (and jurisdictions) multiple times on its journey. This makes it hard for consumers to know precisely what has happened to their food along the way, and in particular whether something bad happened that makes the food dangerous to eat. So, quite reasonably, people want assurance that their food is safe. But how to get this assurance?

In theory, the food industry should have plenty of incentives to provide safe food. Operators who cut corners and sell food that makes people sick should, again in theory, face negative repercussions: bad PR, loss of market share, lawsuits, and even criminal prosecution in extreme cases. However, given the complexity of the food chain, the sheer number of different foods that people eat every day (which are often mixed together), and the lag time between eating contaminated food and actually getting sick from it, it turns out that in many cases a sick consumer cannot determine who is at fault. Even when they can, the damage has often already been done.

For these reasons, this reactive approach to governing food safety does not really provide sufficient reassurance that our food supply is safe. So in the US, we also have preventive strategies that try to avert food safety problems before they arise. Government agencies have rules and standards for farmers, packers, shippers, wholesalers, retailers, restaurants and all the other people who grow our food and get it to us. Examples of preventive standards include good agricultural practices (GAPs) for farmers and good manufacturing practices (GMPs) for food handlers and processors. But the food distribution system is huge, and the government’s surveillance capacity is limited (constant monitoring is very expensive). Just like with reactive sanctions, preventive rules provide only partial reassurance. Sometimes, that’s not good enough. One simply needs to look at the persistent occurrence of outbreaks of E. coli, Salmonella, norovirus, or other foodborne pathogens to see the limits of this system.

So the “captains” of the food industry — the largest retail and foodservice chains — came up with another method, one that Walmart’s vice president for food safety dubbed “food safety culture”. As we explain in our article (which is unfortunately behind a paywall), this strategy encourages each individual worker throughout the food supply chain to take continuous and personal responsibility for the safety of the food that passes through their hands. Every worker, manager, and owner is supposed to be ever-vigilant and should continuously seek to improve the safety of food. Food safety culture is a way of delegating the oversight work that would otherwise need to be done by government or a private third party onto workers themselves, and on the surface it would seem to promise a more efficient and comprehensive approach to ensuring that food is safe.

However, our research showed that the primary motivation behind the spread of this strategy is as much about protecting the reputation of powerful brands and shielding them from liability as it is about the public health mission to prevent foodborne illness. Food safety culture is, in many important ways, about preserving the “illusion of safety” amidst a continual cycle of outbreak-related crises and government- or industry-led reforms. Unfortunately, the question of whose safety? gets obscured in the shuffle.  And this can carry a very real human toll in the form of increased anxiety and stress for line workers across the food system who must shoulder the burden of protecting (primarily wealthy) consumers from the spiraling problems of industrially mass-produced food. At the same time, narrow scrutiny on foodborne pathogens — which are one symptom of our industrial food system — distracts from understanding root causes of “food-system-borne disease”.

So while clearly we as a society should work to minimize foodborne illness, we must also balance our efforts to do so against the many other threats to our health and well-being posed by the ways in which we grow, process, and distribute our food. The only way to do this is to critically and rigorously examine each and every form of governance, no matter how seemingly benign and no matter whether government-led or industry-driven, to suss out all the consequences, intended or not.

Article Abstract

In an intensifying climate of scrutiny over food safety, the food industry is turning to “food safety culture” as a one-size-fits-all solution to protect both consumers and companies. This strategy focuses on changing employee behavior from farm to fork to fit a universal model of bureaucratic control; the goal is system-wide cultural transformation in the name of combatting foodborne illness. Through grounded fieldwork centered on the case of a regional wholesale produce market in California, we examine the consequences of this bureaucratization of food safety power on the everyday routines and lived experiences of people working to grow, pack, and deliver fresh produce. We find that despite rhetoric promising a rational and universal answer to food safety, fear and frustration over pervasive uncertainty and legal threats can produce cynicism, distrust, and fragmentation among agrifood actors. Furthermore, under the cover of its public health mission to prevent foodborne illness, food safety culture exerts a new moral economy that sorts companies and employees into categories of ‘good’ and ‘bad’ according to an abstracted calculation of ‘riskiness’ along a scale from safe to dangerous. We raise the concern that ‘safety’ is usurping other deeply held values and excluding cultural forms and experiential knowledges associated with long-standing food-ways. The long-term danger, we conclude, is that this uniform and myopic response to real risks of foodborne illness will not lead to a holistically healthy or sustainable agrifood system, but rather perpetuate a spiraling cycle of crisis and reform that carries a very real human toll.

(If you would like a copy of this article, please contact me.)


Filed under Research

Machines and Workers: The deceptive framing of automation

I was recently quoted in an article in MUNCHIES on the incoming US Secretary of Labor, Andrew Puzder. Puzder has (notoriously) argued that government regulations (to protect the welfare and rights of workers) drive the cost of labor up, which forces employers to automate their businesses by replacing human workers with machines. He has quipped of machines, “They’re always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

In the article, I responded by pointing out how “Puzder is implying that deregulation will slow or halt automation by keeping labor cheap. Notably, the only two options Puzder presents for workers are (1) low-paying, uncertain, and exploitative employment or (2) no employment.”

This damned-if-you-do, damned-if-you-don’t analysis of mechanization, automation, and robotization has plagued workers facing technological developments since the Industrial Revolution. And it is still a major problem today. Especially with the increasing prevalence of Big Data and advances in artificial intelligence, we’re looking at a future in which not just manual jobs—such as picking heads of lettuce or riveting widgets—but also desk jobs like legal work will be in danger of replacement by computerized machines. President Obama even went so far as to warn of the dangers of automation in his farewell address: “The next wave of economic dislocations won’t come from overseas,” he said. “It will come from the relentless pace of automation that makes a lot of good, middle-class jobs obsolete.” The question is, of course, what can we do about it? Are we doomed to be stuck in Puzder’s dehumanizing catch-22 scenario?

It just so happens that this is a problem I’ve recently been researching. Together with Prof. Alastair Iles and several undergraduate student researchers, I’ve been examining cases of machine labor replacing, or threatening to replace, human labor in agriculture. In particular, we’re looking at the rise of mechanical harvesters for vegetable, fruit and nut crops during the postwar years. While we anticipate that the research will continue for several more years, a few important points about automation are already clear. I’ll outline them in the rest of this blog post.

1. Machine labor is not the same as human labor

Historically, machines have outperformed humans when it comes to performing the same exact task over and over again. This is an area in which people do not excel. As I alluded to in my comment on Puzder, trying to compete with machines dehumanizes human workers. Americans have a national fable about a heroic individual, John Henry, out-working a steam-powered drilling machine. However, the feat costs John Henry his life, whereas the machine presumably went on to a long “career” driving steel for the railroad magnates.

The problem highlighted in the fable is the industrial standardization of production. Machines only “work” in a standardized, scripted environment; humans, by contrast, are flexible and adaptive workers, capable of performing many complex tasks depending on the situation. One deception of the catch-22 is that people and machines must inevitably compete with one another in a zero-sum game, even though the actual labor is very different.

2. Machines change the nature of production and the nature of what is produced

It’s not just that machines are “naturally” better suited to perform many tasks than are people, but rather that machines actually shape the nature of work, the working environment, and the product itself to better suit their standard requirements.

To take an example from our research, when California lettuce farmers in the 1960s were considering switching to mechanical harvesters, they ran into a problem: their fields and their plants were too variable and heterogeneous for the machines. Whereas human workers could differentiate between healthy, mature heads of lettuce and damaged or immature plants, a machine could not. Human harvesters likewise could handle humps and dips or other quirks of the field and furrows, but a machine needed perfectly level, uniform, and even rows to cut the heads.

In an effort to accommodate the envisioned mechanical harvesters, farmers, breeders, farm advisors, and agronomists set out to craft a new form of farming, what they called “precision agriculture.” The idea would be to create a perfectly level field of perfectly uniform rows populated by perfectly uniform crops that would all mature at precisely the same time. There would be no weeds, no irregularities, nothing to disrupt the machine’s ability to do exactly the same task over and over again. The field was made to suit the machine.

3. Mechanization is not automatic

New technologies are always embedded in an existing matrix of other technologies, behaviors, and organizational structures—what might better be referred to as a technological system. For a technological development to be truly transformative, to propagate and redefine a given production process, the rest of the system has to transform as well.

In the case of lettuce, the mechanical harvester was accompanied by a host of other technological, behavioral, and organizational changes. Breeders had to develop new seeds that farmers could use to shift toward a “plant-to-stand”—one seed, one plant—approach that would remove the entire production stage of thinning, that is, removing the unhealthy plants from the stand, a highly selective and irregular task not conducive to machine labor. At the same time, chemical herbicides were ushered in to eliminate the need for weeding, a way of chemically automating another form of selective labor. Lastly, farmers had to adapt to a once-over harvest. Whereas harvest crews comprising human laborers with hand knives could sweep a field multiple times over the course of several days to harvest lettuce that matured at different rates, a machine can only really harvest a field once, cutting most everything in its path in one pass.

The point is that mechanization is never “automatic”—ironically, it takes a lot of work to accommodate land, resources, businesses, and even consumers to the strict requirements of machine labor. Automation is a process that must be designed, guided, and even coerced into being. Importantly, this means that people determine how mechanization happens, and we have the power to do it differently.

4. A job is not an end in itself

In an era of rapid advances in artificial intelligence, machine learning, robotics, and other technologies that can perform human tasks, the fixation on “jobs” as the only metric of well-being is problematic. A lot of those jobs will no longer require a human to perform them, as McKinsey & Company’s new report on automation argues. The question we should be asking ourselves as a society is, do Americans really need to be doing tedious, repetitive tasks that can be left to a robot?

As long as life, liberty, and the pursuit of happiness—plus housing, food, healthcare, and so forth—is tied to whether a person has a “job” or not, the answer has to be “yes”. And as more and more machines come online that can perform human tasks, more and more human workers will have to compete against the robot, hypothetical or real, that threatens to take their job. This can only drive the value of labor down, meaning that Americans will continually be asked to work more for less. That’s not a sustainable trend, and will only exacerbate our already high levels of socioeconomic inequality.

The United States is desperately in need of new public policies that can deal with this fundamental trend of working more for less. Basically, we need ways to ensure that the productivity gains generated by new technologies improve the lives of all Americans, not just the small percentage who happen to own a business and can afford the capital investment of the latest robots. Trying to preserve jobs as the sole route for people to improve their lives without changing the underlying pattern I’ve described above is a downward spiral that will harm many and benefit a few.

5. The problem with automation is one of distribution of wealth

Which leads to my penultimate point in this post. The problem is not that new technologies replace menial or repetitive jobs. It’s that they supplant people’s livelihoods, a concept which we should think of as a way of life rather than merely an occupation. That technologies can destroy people’s ways of life has been understood for centuries—it’s what spurred the Luddite movement during the early 19th century in England and also underpinned Karl Marx’s critique of capitalism some fifty years later. In many ways, communism was based on the idea that technological development is a good thing if the productivity gains are shared equally among everyone. While the practical implementation of that idea turned out to be oversimplified, the basic point—that new technologies raise questions about the distribution of wealth as much as they do about productivity gains—still applies today.

As farm worker movements in the 1960s and 1970s attested, the debate cannot focus on the simple ratio of input/output efficiency or even on jobs saved versus jobs lost. Those farm workers protested against mechanized harvesting and processing technologies in agriculture because they realized that all of the productivity gains would go to the farm owners, the patent holders, and the manufacturing companies. None would go to the people who had done the sweaty, back-breaking work in the fields that underpinned the entire agricultural industry and put the owners in the position to consider mechanization in the first place.

This is one of the reasons why people are starting to talk about a universal basic income, an idea which Robert Reich (himself a former Secretary of Labor under Bill Clinton) nicely outlined in a short video last fall. Basically, the idea is that if machines can do all the work to meet people’s needs, then people will not have jobs, and therefore will not have the money to buy the basic goods those machines produce. The universal basic income circumvents the problem by separating purchasing power from jobs, which has the effect of more equally distributing the productivity gains of automation such that people will actually be able to enjoy their lives.

6. Democratizing technology

I see the universal basic income as a solution to a symptom, the maldistribution of wealth. What we should be thinking about, if we want to address root-causes, is the maldistribution of power.

I had the opportunity to visit Immokalee, Florida a couple of years ago. Immokalee grows a large share of the nation’s fresh tomatoes, a delicate crop that is generally harvested by hand because machines tend to destroy or damage the fruit. Immokalee is also the location of some of the most recent cases of modern-day slavery—workers held in captivity and forced to work the fields. But it’s also home to an empowered workers movement, the Coalition of Immokalee Workers, which has fought for, and won, improvements in pay and working conditions for tomato harvesters.

On the visit, one Coalition spokesperson demonstrated how field workers harvest tomatoes. They walk through the fields filling up large buckets, which weigh around thirty pounds when full. It was a big deal for the Coalition when they bargained a penny-per-pound increase in the piece-rate compensation for pickers. As the spokesperson explained, the Coalition also achieved success in setting a new bucket-filling standard, under which workers would no longer be forced to fill their buckets over the rim, a practice that used to be required, but for which workers were not paid extra (see p. 32 of the 2015 Annual Report for a picture).

As the presentation continued, I was struck by the impressive intensity of the struggle around field workers’ rights, but also by the fact that the bucket itself was never questioned. Ergonomically, the buckets are terrible. They are plain plastic, with no handles or straps to ease lifting them onto the shoulder or help keep the bucket stable and distribute its weight once loaded. Why, I wondered at the time, does all the negotiating around the tomato picking seem to take these back-breaking buckets for granted? Is there no way to design a better tool for collecting the tomatoes and hauling them out of the field?

Of course there is, but that’s not how we tend to think of technological development. Which is why I argue that it’s not just about where the money goes after the fact, but about who has a say in how technologies are developed before the fact. The direction of technological development is open and changeable: technological development can be democratized. Spreading out the power to drive technological development will be the route to designing machines first to improve the conditions of labor and ways of life, and only second to increase productivity.

Our ongoing research into the history of agricultural mechanization is motivated by a desire to understand how and why power over technological development was consolidated in the first place, in order that we might understand how to spread that power out moving forward.


Filed under Current Events, Research

Dissertation Complete

Yesterday I finalized my dissertation, titled Ordering People and Nature through Food Safety Governance. It has been quite a long journey, and despite recent societal turbulence it is deeply satisfying to wrap up this chapter in my career.

Here is the abstract:

We are constantly reminded that eating fresh fruits and vegetables is healthy for us. But in the face of repeated outbreaks of foodborne illness linked to fresh produce, whether these foods are safe for us has become an entirely different, and difficult to answer, question. In the name of food safety, both government and industry leaders are adopting far-reaching policies intended to prevent human pathogens from contaminating crops at the farm level, but these policies meet friction on the ground. Through a case study of the California leafy greens industry, this dissertation examines the web of market, legal, technological, and cultural forces that shape how food safety policy is crafted and put into practice in fields.

Controlling dangerous pathogens and protecting public health are not the only goals served by expanding food safety regulation—food safety also serves to discipline and order people and nature for other purposes. Private firms use the mechanisms of food safety governance to shift blame and liability for foodborne pathogens to other sectors or competitors and to secure a higher market share for themselves. Food safety experts, capitalizing on the lack of available science upon which to base standards, carve out for themselves a monopoly in setting and interpreting food safety standards. And government agents wield their expanded policing powers primarily to make examples of a few bad actors in order to shore up public confidence in the food system and the government’s ability to protect its citizens, but fail to address underlying structural causes.

Zealous fixation on driving the risk of microbial contamination toward an always out-of-reach “zero” draws attention away from the systemic risks inherent in the food system status quo and stifles alternative pathways for growing and distributing food, raising thorny complications for diversifying—ecologically, economically, or culturally—our country’s food provisioning system. The narrow scope of existing food safety policy must be broadened and developed holistically with other societal goals if the future of US agriculture is to be sustainable and resilient in the long term.

I will not post the full dissertation here, but am happy to share a PDF copy for anyone who is interested. Just send me an email.


Filed under Research

Guest blog post for National Sustainable Agriculture Coalition

I wrote a guest blog post along with several colleagues for the National Sustainable Agriculture Coalition summarizing the findings of our recently published research article, Inconsistent food safety pressures complicate environmental conservation for California produce growers. The paper is freely available to the general public.

We discuss how the complex patchwork of rules, standards, audits, and other requirements to “enhance” food safety in produce agriculture puts inconsistent and problematic pressure on farmers. These intense pressures can make farmers feel that they must adopt environmentally damaging practices to be extra safe. We are particularly concerned that, in the name of food safety, many farmers are trying to prevent wildlife from entering farm fields by setting poison bait, removing habitat, and installing extensive fences.

Recent research, however, shows that these practices do not make food safer, and may even increase the risk that pathogens will contaminate crops in the field. Conservation and safety, in other words, can be practiced together on farms.


Filed under Announcements, Research

The Unintended Ecological and Social Impacts of Food Safety Regulations in California’s Central Coast Region

My paper with Daniel Karp and co-authors just came out today in the journal BioScience. In it, we show the complex linkages that tie people together with nature, often through surprising and indirect routes. Supply chains, disease surveillance, regulations, farmer decisions, and ecosystem services like pest control or soil fertility all play a role in the “the cascading consequences of a foodborne disease outbreak” as we show in our conceptual diagram:

[Figure: Cascading Consequences]

I apologize that the paper itself is behind a paywall. That’s the reality of academic publishing these days, though I keep my fingers crossed that the recent upsurge in open access journals is signaling a paradigm shift. In any case, you can at least read the abstract:

In 2006, a multistate Escherichia coli O157:H7 outbreak linked to spinach grown in California’s Central Coast region caused public concerns, catalyzing far-reaching reforms in vegetable production. Industry and government pressured growers to adopt costly new measures to improve food safety, many of which targeted wildlife as a disease vector. In response, many growers fenced fields, lined field edges with wildlife traps and poison, and removed remaining adjacent habitat. Although the efficacy of these and other practices for mitigating pathogen risk have not been thoroughly evaluated, their widespread adoption has substantial consequences for rural livelihoods, biodiversity, and ecological processes. Today, as federal regulators are poised to set mandatory standards for on-farm food safety throughout the United States, major gaps persist in understanding the relationships between farming systems and food safety. Addressing food-safety knowledge gaps and developing effective farming practices are crucial for co-managing agriculture for food production, conservation, and human health.


Filed under Uncategorized

Tensions Between Safety and Sustainability in the Field

[Image: No Animals, Food Safety Violation II]

I wrote a short blog post on friction between food safety and the environment (particularly animals) for my department’s website last week. It’s based on a recent trip I took to conduct fieldwork in the Imperial Valley, California, and Yuma, Arizona — two of the most important produce-growing regions in the country. I met with vegetable growers and food safety auditors in both states, and even got to tag along on a night harvest of baby spinach.


This sort of fieldwork is both exhilarating and exhausting. I get the chance to meet people whose occupations most of us hardly know exist, let alone have any sense of what that work entails, and to visit places which likewise don’t make it onto the radar. At the same time, making the most of fieldwork means being “on” all the time, ready to absorb and sift information with all sensory channels open and receiving. At the end of each day, I have to be able to tell a story, to weave the events and observations and impressions into a coherent narrative in my field notes. These notes are critical records for me later — sometimes years later — when I have to synthesize and write up my final report for publication. So I thought it might be interesting to provide a short example of what my field notes look like. Here’s a sample, verbatim with no edits, of what I wrote about my trip to the night harvest.

I spoke with the food safety manager, whom I met about 5:30 in the evening, and two foremen whose crews were harvesting that night. The crews begin about 5 in the evening (when it starts to cool off), and work for 8-10 hours, sometimes a bit less like this night, when the manager thought they would wrap up by about midnight. These crews are maybe 10-13 workers (even fewer if the greens are packed in large bins rather than the relatively small totes, due to the labor of packing). Crews for harvesting whole product, such as head lettuce or romaine hearts, can be much larger, on the order of 20-50 workers (since these have to be harvested by hand).

We met at the staging area, where the foremen (mayordomos) and tractor/harvester drivers park and meet (the crews show up directly at the field, later). The foremen and drivers are skilled workers, usually with many years of experience both on the line and running a crew, and are employed year round by the harvester (who is based out of Salinas, and also does growing). Several of the cars had boxes of spring mix in the front seats—already washed, the kind that goes out to retail or foodservice. I asked the manager about it, and he said that the buyers regularly send the crews a palette of product, as sort of a thank you to the workers (for reference, a crew might harvest about 30 palettes in one night). I later had a conversation with one of the foremen about spinach, and a gleam came into his eyes as he started recounting all of the delicious meals his wife cooks with the spinach. When we finished the visit at that field and were saying goodbye, he gave me a 2-lb. box of organic spring mix to take home with me.

Before I could go into the field, I had to observe the requirements of the visitor SOP. I washed my hands vigorously with soap and water (supposed to be 20 seconds, or the time it takes to sing Happy Birthday), donned a hairnet and a reflective vest, and removed my watch (to my pocket). The harvest crew was moving away from us, and we walked across the already harvested beds (which no longer mattered since they were finished for the season). Sometimes they do prep beds for a second pass harvest, especially in times of high demand, but it was unclear how that changes the treatment of the harvested beds.

For the baby greens – e.g. spring mix or spinach – the product is mowed up with a large harvesting machine, which draws the cut leaves onto a conveyor belt. The leaves are carried up to the back of the machine, where several workers stand on a special platform. They are arranged as on a factory line, which begins on the trailer being pulled along by a tractor parallel to the harvesting machine. A couple of workers on the trailer prepare empty bins and place them on the start of the conveyor belt. The bins pass along to the 2-3 workers (apparently generally women) who gather the leaves as they flow over the lip of the conveyor belt and pack them loosely in the totes. The packed totes are then conveyed back to the trailer where 3-4 workers cover, stack and secure them.

The harvesting machine has an anti-rodent sound machine affixed to the front, near the blade and the headlights. The machine emits a sort of throbbing high-pitched chirp that is supposed to make any rodents who are nearby run away. The manager says he tested it out in a friend’s house who had a mouse problem, and it seemed effective there. There are also two workers who walk between the beds ahead of the machine, to look for any signs of animal activity or other problem (like a bit of trash) that would require the harvesters to skip a section. That said, according to the manager, it is very rare to see animals in the field during the actual growing season—in his telling, all the people who are around the fields all the time during the growing season keep them off. It is much more common to see animals in the off-season, when the fields are left alone for a while.

As a final note, I will say that the two pounds of spinach were a real boon to me in a time of need. After the harvest, I faced an hour drive back to Yuma, arriving near midnight. There was no food and nothing open, let alone with vegetarian options, so I ate lots and lots of salad.

[Image: Salad greens]


Filed under Research

Common Sense, Science and Government Part III: Manufacturing the sweet tooth

I ended the last post with the idea that making policy and engaging in government is a process of shaping common sense. The reason is that government, unlike direct rule, relies upon the consent of the governed (government must ‘go with the grain’). People generally consent to live under a particular social order when that order seems perfectly natural and normal; consent is most assured at the point when people can’t envision an alternative way of doing things, or shrug their shoulders and lament “that’s just the way it is.” In this way, consent to be governed a certain way by a certain set of people is grounded in the terrain of common sense.[i] Without consent, there can be no government.[ii]

In the best case, the need for consent produces a “government of the people, by the people, for the people”, as President Lincoln famously proclaimed in the Gettysburg Address. In the ideal, democracy bubbles up from the grassroots: the citizenry consents to the rule of law because they define the law, and trust that the state they have chosen to implement that law serves their best interests. In other words, in an ideal case the consent of the governed is an active consent, predicated on the assumption that the law is a fair application of good sense to common public problems.

However, the populace can also passively consent to a government imposed from the top down. Thoughtful public deliberation can be bypassed altogether if an agenda of government can be made to fit smoothly within the existing framework of common sense. As we know from Gramsci, common sense is inherently dynamic. It changes and adapts over time due to chance and the aggregated choices of individuals, but common sense may also change to accommodate new realities of life (e.g. novel technologies or occupations) or through intentional manipulation (e.g. through media, education, propaganda, etc.). Here’s how David Harvey puts it:

What Gramsci calls ‘common sense’ (defined as ‘the sense held in common’) typically grounds consent. Common sense is constructed out of longstanding practices of cultural socialization often rooted deep in regional or national traditions. It is not the same as the ‘good sense’ that can be constructed out of critical engagement with the issues of the day. Common sense can, therefore, be profoundly misleading, obfuscating or disguising real problems under cultural prejudices. Cultural and traditional values (such as belief in God and country or views on the position of women in society) and fears (of communists, immigrants, strangers, or ‘others’) can be mobilized to mask other realities.[iii]

More importantly, these elements of common sense can be mobilized to mask the redistribution of benefits and burdens, to advantage some at the expense of others. But that’s a lot of abstraction to begin with, so I’ll turn now to some concrete examples to try to paint a clearer picture of the dangers inherent in fiddling with common sense (as counterpoint to the previous post, in which I argued for the dangers inherent in not fiddling).

In the rest of this post and the next, we will look at two parallel, historical cases in which people changed their eating habits abruptly for reasons largely beyond their immediate control. For the remainder of this post, we will look at the development of a sweet tooth among the British working classes in the 17th through 19th centuries and the ways in which this dietary shift dovetailed with consent to an industrial capitalist mode of organizing people’s relationship with nature. In the next post we will look at the introduction of white bread to the American middle class at the turn of the 20th century, and the ways in which the store-bought loaf acclimated Americans to the idea that experts know best how to organize relations between people and nature.

Habituating to Sweetness and Consenting to Industrial Capitalism

In his classic treatise Sweetness and Power, Sidney Mintz takes a deep look at the deceptively simple idea of the ‘sweet tooth’. While today we often take as fact that people like sweet foods, even to the point of self-harm, societal relationships with sweetness vary widely across both geography and time.[iv] At a time when the ubiquity of sugar in our diets is under intense scrutiny, even in the UK[v] (the birthplace of the modern sweet-tooth, as we’ll see), the irony that this problem was intentionally engineered is especially striking.

Just a few centuries ago, concentrated sweetness such as sugar was rare and expensive, and most people didn’t have it or even realize that they might want it.[vi] Such was the case in Medieval England, where merchants sold sugar, at exorbitant prices, as a prized luxury only the very rich and powerful could afford.[vii]  Between the mid-17th and mid-19th centuries, however, sugar experienced a reversal of fortunes. Going from the spice of kings to the common man’s fare in a mere two centuries, sugar accounted for 20% of dietary calories in England by 1900. “What turned an exotic, foreign and costly substance into the daily fare of even the poorest and humblest people?” asks Mintz. What he is trying to understand is a sea change in common sense about food, the ways in which people, seemingly out of the blue, become “firmly habituated” to eating sugar and consuming sweetness.

Unraveling the puzzle takes close attention to the everyday ways in which people decide what to eat and the political, economic, health and environmental repercussions of diet. Toward this end, Mintz breaks his main arguments into three sections, titled Production, Consumption, and Power. From their origins in the orient, sugar plantations slowly spread to the Arab empire and eventually to the Mediterranean and, by the 17th century, to the New World. The salient point is that sugar plantations pioneered industrial and capitalist forms of organizing production and labor long before the start of the Industrial Revolution and the advent of capitalism (at least, so far as these things are conventionally dated by historians). During the late 1600s and early 1700s, plantations in the West Indies combined the field and the factory in one centralized operation designed to maximize output of a single commodity for export, with the single-minded goal of reaping large, rapid profits for absentee owners and investors back in England and the continent (these English speculators kept their personal costs down by using slave labor). [viii] The following images, dating from the 17th to 19th centuries, illustrate how “factory and field are wedded in sugar making.”

[Image: Sugar boiling house]

“Most like a factory was the boiling house,” writes Mintz (p. 47), who in addition to this print, attributed to R. Bridgens c. 19th-century (courtesy of the British Library), included the following descriptive passage from a plantation owner in Barbados, describing the boiling house c. 1700: “In short, ‘tis to live in perpetual Noise and Hurry, and the only way to Render a person Angry, and Tyrannical, too; since the Climate is so hot, and the labor so constant, that the Servants [or slaves] night and day stand in great Boyling Houses, where there are Six or Seven large Coppers or Furnaces kept perpetually Boyling; and from which with heavy Ladles and Scummers they Skim off the excrementitious parts of the Canes, till it comes to its perfection and cleanness, while other as Stoakers, Broil as it were, alive, in managing the Fires; and one part is constantly at the Mill, to supply it with Canes, night and day, during the whole Season of making Sugar, which is about six Months of the year.”

[Image: Sugar mill in the Antilles, 1665]

A sugar mill belonging to Phillippe de Longvilliers de Poincy, from: Charles de Rochefort. Histoire naturelle et morale des iles Antilles de l’Amérique. A Roterdam: chez Arnould Leers, 1665 [FCO Historical Collection, via King’s College London].

[Image: Digging the Cane-holes]

“Digging the Cane-holes”, in Ten Views in the Island of Antigua (1823), plate II – BL. Curated by William Clark, via Wikimedia Commons. More images from Ten Views are available at Wikimedia Commons.

Importantly, the plantations initiated an exponential growth in sugar production before demand existed to consume all that sweetness. This posed something of a problem, for the many new goods produced by exploiting people and nature in the colonies of the British Empire and its peers—including also coffee, tea, and cocoa—threatened to swamp the limited commodity markets back in Europe. What was needed was a rapidly expanding consumer demand for all of these new goods, and Mintz points out that this is exactly what happened with the widespread transformation of sugar from “costly treat into a cheap food”. To make a long and very detailed chapter short, the use of sugar as a status symbol among the rich and powerful percolated (with encouragement) down to the lower classes, who sought to emulate their social superiors. At the same time, the uses of sugar in diet diversified, especially through synergies with other plantation commodities (chocolate, tea, and coffee) and through displacing other traditional staples that people no longer produced for themselves (because they now spent their time working in factories instead of farms). For example, people who couldn’t afford butter (and no longer had access to cows to produce their own) could instead eat jams and preserves with their daily bread.

At the same time as the working classes were accepting sugar as food, the powerful—first aristocrats and merchants, and later the rising industrial and trade capitalists—were also adjusting their relationship to sugar. From a simple vehicle for affirming social status through direct consumption, sugar came to be seen and used as a vehicle for accumulating wealth and solidifying the British nation (and empire). In a paradigm shift in contemporary attitudes toward consumption, it was during this time period that political economists first recognized that demand didn’t have to remain constant (tied to existing subsistence levels), but rather could be elastic.[ix] Not only did this realization mean that capitalists could extract greater effort from laborers, who “worked harder in order to get more”, but it unleashed the hitherto unanticipated growth engine of working-class purchasing power, providing a ready sponge to soak up increasing commodity production and make owners an obscene amount of money.

So all of these things happened at about the same time: sugar production boomed, capitalists made lots of money, basic foods were no longer produced at home, and people developed a taste and preference for sweetness. Rejecting the idea that such a coincidence happened by mere chance, Mintz contends that these events are related through the intricate dance of power to bestow meaning on vegetable matter, to transform a simple reed into food, and thence into a pillar of empire and the birth of capitalism. Power, a slippery concept in the absence of overt force or coercion, comes down here to the question of who ultimately holds the reins of common sense, and thus steers the vast course of social organization. Such power is very difficult to observe directly, and does not necessarily fit tidily into bins of ‘cause’ and ‘effect’. So Mintz instead turns to indirect evidence in the form of motive: who profited and who didn’t from the new British sweet tooth?[x]

While “there was no conspiracy at work to wreck the nutrition of the British working classes, to turn them into addicts, or to ruin their teeth,” clearly widespread use of sugar as food benefited the sugar plantation owners, and also those who ran and operated the wheels of empire. It benefited manufacturers by making factory workers and their families dependent upon their jobs and wages to buy the new imported food goods so they could continue living. Mintz, through careful anthropological interpretation, shows that the common people had no more free will in what to consume than they did in how to produce (i.e. by selling their labor power for wages): “the meanings people gave to sugar arose under conditions prescribed or determined not so much by the consumers as by those who made the product available” (p. 167). Though the mostly trivial webs of meaning spun by individuals lead us to believe in free choice in the marketplace, observation shows that our small individual webs of meaning are contained in and subsumed by “other webs of immense scale, surpassing single lives in time and space” (p. 158). Whoever can gain control of the shape and direction of these larger webs—such as common sense—can gain control over the mass of the people in a way that is not readily recognizable.

 


[i] I take this basic point from David Harvey’s chapter on “The Construction of Consent” in A Brief History of Neoliberalism, p. 39-41.

[ii] Out of concern for space, I grossly abbreviated the continuity of relationship between common sense, consent of the governed, and government. I wanted to note here that Foucault’s last writings, e.g. History of Sexuality, Vol. II & III, deal extensively with the idea of ethics, or “techniques of the self”. In a way, an ethic is used to describe rules that people have to regulate, or govern, our own personal behavior. If we want to talk about a government that rules with the grain, then it has to be a government that engages with these personal ethics—consent of the governed, then, can also be construed as the alignment of individual ethics with government, of techniques of the self with techniques of discipline (the relationship of ruler to subject). Given that personal ethics are informed as much by normalized (i.e. taken for granted) habits and patterns of behavior as by rational thought and decisive action, common sense can also be taken to describe the terrain of ethics to which the populace subscribes.

[iii] David Harvey, A Brief History of Neoliberalism (Oxford University Press 2005), p. 39. This paragraph is from his chapter on “The Construction of Consent”, to explain why people have accepted the ‘neoliberal turn’ in global governance, which basically holds to the philosophy that the social good is maximized by funneling all human relations through the mechanism of a market transaction, even though many of the policies pursued under this program have demonstrably negative effects on the well-being of hundreds of millions of people while simultaneously lining the pockets of a far smaller set.

[iv] “There is probably no people on earth that lacks the lexical means to describe that category of tastes we call ‘sweet’… But to say that everyone everywhere likes sweet things says nothing about where such taste fits into the spectrum of taste possibilities, how important sweetness is, where it occurs in taste-preference hierarchy, or how it is thought of in relation to other tastes.” Mintz, Sidney. 1985. Sweetness and Power: The Place of Sugar in Modern History. New York, N.Y.: Penguin Books. pp. 17-18.

[v] The BBC, for example, ran a lengthy series of stories throughout 2014 on sugar. In March there was “WHO: Daily sugar intake ‘should be halved’”, in June there was “How much sugar do we eat?”, and in September there was “Sugar intake must be slashed further, say scientists”. And just this week (Jan. 5, 2015), BBC ran “Cut back amount of sugar children consume, parents told”. Today, the entire ethics (see endnote ii) and government of sugar consumption are changing together, and more consciously than perhaps at any previous point in history.

[vi] Coincidentally, after I had already composed most of this post, I saw the BBC documentary Addicted to Pleasure, which first aired in 2012. Hosted by actor Brian Cox, who himself suffers from diabetes and must carefully manage his personal sugar intake, the documentary covers much of the story told by Mintz, albeit minus most of the scholarly critique of colonial oppression and the exploitation of working-class people.

[vii] In the 16th century, the English royalty were noted for “their too great use of sugar”, which was used to demonstrate wealth and status at court—Queen Elizabeth I’s teeth, for example, turned black from eating so many sweets. Mintz, p. 134.

[viii] Mintz, p. 55 and 61. The classic definition of capitalism requires well-developed markets, monetary currency, profit-seeking owners of capital, the alienation of consumers from intimate involvement in production, and ‘free’ labor (i.e. not slaves or land-bound peasants, but rather workers paid in wages). The sugar plantations of the mercantile colonial period fit some but not all of these criteria.

[ix] Mintz is careful to demonstrate how political economists changed their thinking on the role of consumers in national economies. Whereas mercantilists had assumed national demand for any given good to be more or less fixed, a new wave of capitalist thinking held that demand could increase by enrolling the people of a nation more completely in market relations—people should no longer subsist on the goods they produce for themselves, but should get everything they consume through the market (note that this thinking forms a direct connection between personal ethics and social organization, or government). In return, they should sell their labor on the market as well. See pp. 162-165.

[x] “To omit the concept of power,” Mintz writes, “is to treat as indifferent the social, economic, and political forces that benefited from the steady spread of demand for sugar… The history of sugar suggests strongly that the availability, and also the circumstances of availability, of sucrose… were determined by forces outside the reach of the English masses themselves.” p. 166.


Common Sense, Science and Government Part II: A Case of Quinoa

The first case I’ll discuss focuses on quinoa, a grain-like staple more closely related to beets and spinach than to the true grasses such as wheat. Once a rather obscure food in the US, quinoa experienced a rapid spike in popularity beginning in 2007, when consumers in the global north fed into a new narrative extolling its virtues for health and social justice alike. It began with nutrition-minded journalists hailing quinoa as a “new health food darling”. High in protein, in fact featuring all nine essential amino acids, quinoa could also serve as a gluten-free substitute for wheat or barley. The added mystique of a “rediscovered” (read: Columbused) ancient staple, a “lost” Inca treasure, also dovetailed nicely with the popular paleo-diet trend, which urged a dietary return to an idealized simpler time when people were more closely attuned to nature. For all of these reasons, quinoa received great publicity as a sacred, super crop.

An interrelated second narrative presented quinoa as a way for consumers to also support fair trade and sustainable development. Buying quinoa meant supporting farmers in developing countries such as Peru and Bolivia while allowing them to maintain a traditional way of life. The pitch for quinoa on Eden Organic’s website, for example, reads, “The most ancient American staple grain. Sustainably grown at over 12,000 feet in the Andes helping preserve native culture.” In 2012, when I first looked into the quinoa case, I came across a fair trade certified brand, La Yapa (now defunct), which summed up a stronger iteration of this marketing narrative:

“In the past few years, the income of quinoa farmers has doubled with the increase in volume and prices… The farmer’s quality of life also has increased steadily… By choosing this Fair Trade Certified™ product, you are directly supporting a better life for farming families through fair prices, direct trade, community development, and environmental stewardship.”

The global food security community also picked up on the quinoa fanfare, culminating in the 2011 decision by the United Nations General Assembly to declare 2013 “The International Year of Quinoa”. The press release for the occasion cites the potential contributions of quinoa to then-Secretary-General Ban Ki-moon’s Zero Hunger Challenge, “not only because of its nutritional value but also because most quinoa is currently produced by smallholder farmers… ‘The crop holds the promise of improved incomes – a key plank of the Zero Hunger Challenge,’ Ban said.” A special report was released with the goal of “improving knowledge and dissemination of this ancient crop, which has a significant strategic value for the food and nutritional security of humanity.”

The other piece of this global food security narrative touted the environmental advantages of traditional subsistence crops like quinoa (e.g. amaranth, teff, fonio, etc.), especially their resilience in the face of global climate change. A recent article from National Geographic captures the essence of this line of thinking:

“[Sustainable agriculture advocates are] increasingly turning to grains that have been the basis of subsistence farmers’ diets in Africa, South Asia, and Central and South America since the time of earliest agriculture. Because such grains adapted to grow on marginal land without irrigation, pesticides, or fertilizers, they are often more resilient than modern commodity crops are.” (emphasis added, also see note at [ii]).

Taken altogether, quinoa has been presented in and to the global north as a win-win-win superfood—good for the health of wealthy consumers, the wealth of poor farmers, and the ecological stability of global agriculture[i]. The overall message to the savvy shopper in New York or Berkeley or Chicago, then, was that quinoa was good to buy.

But complications with that rosy narrative arose just as rapidly as quinoa’s acclaim spread. Demand rose so quickly that the price of quinoa tripled from 2007 to 2010 (Fig. 1).

Fig. 1: Prices of Quinoa at the Farm Gate, 1993-2012 (constant Int. $/tonne). Source: http://faostat3.fao.org/.

Ironically, the food which was celebrated as a “cultural anchor and a staple in the diet of millions of people throughout the Andes for thousands of years” seemed to have been priced out of those very Andeans’ budgets by the “agricultural gold rush.” Over the same time period, production volume grew at an accelerating pace and the area cultivated for quinoa expanded substantially, especially in Bolivia (Figs. 2 and 3).

Fig. 2: Tonnes of Quinoa Produced Annually, 1994-2013. Source: http://faostat3.fao.org/.

Fig. 3: Hectares of Quinoa Harvested Annually, 1994-2013. Source: http://faostat3.fao.org/.
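(For readers who want to poke at the underlying numbers themselves, here is a minimal, hypothetical sketch in Python of how one might check the price and production trends summarized in Figs. 1-3 against a FAOSTAT export. The file name and column names used here, "FAOSTAT_quinoa.csv", "Item", "Element", "Year", and "Value", are assumptions for illustration and may not match the exact layout of the FAOSTAT download cited above.)

import pandas as pd

# Hypothetical sketch: assumes a CSV exported from FAOSTAT with columns
# "Item", "Element", "Year", and "Value" (these names are assumptions).
df = pd.read_csv("FAOSTAT_quinoa.csv")
quinoa = df[df["Item"] == "Quinoa"]

# Producer price at the farm gate (constant Int. $/tonne), averaged across
# reporting countries per year, as in Fig. 1
prices = quinoa[quinoa["Element"] == "Producer Price"].groupby("Year")["Value"].mean()
print("Price ratio, 2010 vs. 2007:", round(prices[2010] / prices[2007], 2))

# Production (tonnes) and area harvested (hectares), as in Figs. 2 and 3
for element in ["Production", "Area harvested"]:
    series = quinoa[quinoa["Element"] == element].groupby("Year")["Value"].sum()
    print(element, "grew from", series.iloc[0], "in", series.index.min(),
          "to", series.iloc[-1], "in", series.index.max())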

These numbers seemed to tell a much more complicated story than win-win-win: Was high consumer demand in the US and the EU actually taking a staple food away from South American smallholders? Were record-level prices encouraging farmers to plant quinoa on ecologically marginal lands, courting disaster in the form of an Andean equivalent of the Dust Bowl? With soaring prospects for fat profit margins and a global development community hungry for a silver-bullet crop, were Andean smallholder farmers in danger of losing control over quinoa and being pushed out of their own market?[ii]

All of these questions, however, boiled down to one media snippet for global north publics: to eat or not to eat? A rash of headlines in early 2013 posed provocative challenges to the quinoa fad. “Is eating quinoa evil?,” quipped The Week, while The Guardian challenged, “Can vegans stomach the unpalatable truth about quinoa?”. Tom Philpott, writing for Mother Jones, tried to restore some sanity with his more nuanced article, “Quinoa: Good, Evil, or Just Really Complicated?”, but the overarching point of reference for the American and European publics had been set. Whether a question of health, the viability of smallholder farming, or environmental sustainability, it had to be framed, in Hamlet-like fashion, as to buy or not to buy?

Lost amid all the hand-wringing were the voices cautioning that the public and the media had fixated on the wrong question. Tanya Kerssen, an analyst at the non-profit organization Food First, writing for Common Dreams, pointed out that to consume or not to consume was a false choice:

“In short, the debate has largely been reduced to the invisible hand of the marketplace, in which the only options for shaping our global food system are driven by (affluent) consumers either buying more or buying less… [T]he so-called quinoa quandary demonstrates the limits of consumption-driven politics. Because whichever way you press the lever (buy more/buy less) there are bound to be negative consequences, particularly for poor farmers in the Global South. To address the problem we have to analyze the system itself, and the very structures that constrain consumer and producer choices…

Consumption-driven strategies, while part of the toolbox for effecting change, are not the only tools. Only by facing the reality that we can’t consume our way to a more just and sustainable world—and examining the full range of political options and strategies—can we start coming up with real solutions.” (emphasis added).

So there we have one example to help illustrate why good policy cannot rely solely upon common sense for guidance. As Gramsci warned, common sense “takes countless different forms” and, “even in the brain of one individual, is fragmentary, incoherent” (as quoted in my previous post). To rely upon common sense alone is to follow a fickle and partial guide. The assumptions and tacit beliefs underlying common sense will not always hold up under scrutiny, meaning that developing good policy requires continual critical reflection, public debate, and learning.

Quinoa has risen to prominence because it can link key points of contention in global agricultural policy—often voiced in highly abstract statistics on population demographics, epidemiological findings, economic indicators, and environmental qualities—to the daily concern with what to eat that makes intuitive sense to powerful publics in the global north. While certain policy programs (e.g. leveraging the ‘traditional ecological knowledge’ of smallholder farmers or folding peasants into global commodity food markets) may have gained political traction by adapting their arguments to the contours of common sense, such compromise comes at a cost. In this case, the experiences and perceptions of first-world consumers were naively accepted as the “terrain” of common sense upon which questions of global poverty, health, and climate change can and should be publicly debated. However, this common sense represents only a narrow slice of daily life around the globe.

Missing from the common sense of affluent consumers are, for example, the experiences and perspectives of the Andean farmers who grow quinoa and the poor whose health and development so many are concerned with. And this is not to mention the underrepresentation of nonhuman organisms and ecosystems, especially those not explicitly contributing to food commodities (e.g. the ecosystems on marginal lands into which quinoa farming has begun to spread). That translates to a large number of options and strategies that will never even be considered and a large number of unintended consequences that will never be recognized because they are outside the realm of what is commonly familiar to the consumer classes. As Kerssen writes, it would be a rational ideal to “examine the full range”, but if we want to take that process seriously, then we also need to examine the full range of common sense.

As I argued in the previous post, good policy, including environmental and natural resource policy, cannot ignore common sense, but must work with the grain of existing preconceptions and ways of living in the world. What this case highlights is that we also cannot rely solely on common sense to guide us to good policy. As is shown with the quinoa case, common sense—such as the idea that the only lever we have with which to move the world comes in the form of our fork or our wallet—often misses important pieces of the story and can lead us far afield or into a seemingly intractable impasse or an impossible (or false) choice. Critical reflection on the strengths and shortcomings of basic common sense is needed.

From this insight, we can infer that good policy emerges from critical consideration of common sense. Good policy must be built on that existing foundation, but it must also do productive work on people, directing them toward better habits and better ways of living in the world: in short, toward a closer approximation of good sense. Next time, we’ll consider upon what basis, if not common sense, good sense can be gauged.


[i] For an example of the kind of utopian visions that experts began attaching to quinoa’s potential future, a 2014 article by Lisa Hamilton in Harper’s Magazine quotes a prominent Dutch agronomist, saying, “If you ask for one crop that can save the world and address climate change, nutrition, all these things—the answer is quinoa. There’s no doubt about it.”

[ii] This poses a very thorny political economic question, and one that doesn’t lend itself easily to a simple yes or no response. The Harper’s article (ibid) tackles the complexity in greater depth, but the short version is that with great potential comes great prosperity, and then a great struggle over who has the right to enjoy that prosperity. In past epochs, newly “discovered” crops could be expropriated and spread around the world; examples include potatoes, tomatoes, and maize, all of which are native to the Americas. These plants didn’t just naturally evolve as desirable food crops, however. Rather, the ancestors of the Aztec, Incan and other indigenous peoples spent millennia’s worth of work breeding them from wild plants. Yet they never saw a penny for sharing those crops with the rest of the world. Instead, that privilege was assumed by European colonists and their descendants while Native American peoples were violently repressed (and killed). The Andean peoples of Bolivia and Ecuador are savvy to the long history of indigenous groups losing control over their germplasm heritages and have thus imposed strict restrictions on any sharing of quinoa seeds and genetic information. Thus quinoa finds itself at the heart of a struggle between food sovereignty and food security—an impasse seems to have been reached with “the poor of the Andes pitted against the poor of the world” (ibid). There are doubtless sensible and just ways to negotiate out of this impasse, which I won’t try to guess at here, but again the point I would like to make is that complex problems require complex (and often messy) responses. Pretending that a simple solution can be found by applying basic common sense (i.e. the needs of the world’s many outweigh the needs of the Andean few, so world development organizations should just go ahead and take quinoa from Bolivians and Peruvians) is not a route to sound policy or good governance.


Common Sense, Science and Government, Part I

In the next set of posts, I draw on a lecture I gave to an undergraduate class on natural resource policy a few years ago to examine the relationship between common sense, science, and government. Revisiting this set of basic relationships will set a conceptual foundation for future posts on more specialized topics such as social construction and co-production.

Some decisions must be made and actions taken at a societal level, and such collective deciding and acting is part of what I mean when I use the word government (to distinguish from today’s popular usage of the word as a fixed institution). One thesis that I explore in this blog is that all government hinges on defining and manipulating relationships between people and nature [i]. This is a big claim, and in many cases might be difficult to demonstrate. For that reason, I begin with natural resources.

It seems to me that many people can easily imagine what good natural resource management might mean — clean and safe water, smog-free air, sustainable fisheries and forests, preventing soils from eroding away, preserving wild species from extinction, and so forth — which narrows the gap between common sense and good sense (more on that later) and makes for a good starting place.

As is often my wont, these posts will turn to food and agriculture for concrete case material to help illustrate the general points I would like to make. It might seem unusual to speak of food as a natural resource, but producing food involves the joining and utilization of many other natural resources – water, energy, land and soil, minerals for fertilization, ecosystem services like pollination, sunlight, and of course lots of hard work. Food may be the most complex and vital natural resource we have, which makes it a rich source of information for thinking about common sense, science, and government.

Common Sense and Government

The political theorist Antonio Gramsci, an Italian political activist who wrote his most famous works from prison after being arrested by the nascent fascist regime in the years leading up to WWII, turned to the concept of common sense to help explain how fascism could take root in a society. He defined it as:

“…the conception of the world which is uncritically absorbed by the various social and cultural environments in which the moral individuality of the average man is developed. Common sense is not a single unique conception, identical in time and space. It is the “folklore” of philosophy, and, like folklore, it takes countless different forms. Its most fundamental characteristic is that it is a conception which, even in the brain of one individual, is fragmentary, incoherent and inconsequential, in conformity with the social and cultural position of those masses whose philosophy it is.”[ii].

Common sense incorporates all of those beliefs and assumptions that people do not actively question, yet upon which we all rely to guide most of our actions throughout each day. While we might aspire to always make what Gramsci terms ‘an intellectual choice’, to act rationally (first weighing costs and benefits) or ethically (following a set code of conduct), following what we might term good sense [iii], Gramsci points out that much of the time people instead draw upon prepackaged thoughts and beliefs. We act out of habit as much as we do out of thoughtfulness.

While in general common sense often approximates good sense, the two are only loosely coupled. Critical theorist Stuart Hall—drawing on the source material in Gramsci’s Prison Notebooks—explains the relationship more fully:

“Why, then, is common sense so important? Because it is the terrain of conceptions and categories on which the practical consciousness of the masses of the people is actually formed. It is the already formed and “taken for granted” terrain, on which more coherent ideologies and philosophies must contend for mastery; the ground which new conceptions of the world must take into account, contest and transform, if they are to shape the conceptions of the world of the masses and in that way become historically effective. ‘Every philosophical current leaves behind a sediment of ‘common sense’; this is the document of its historical effectiveness. Common sense is not rigid and immobile but is continually transforming itself, enriching itself with scientific ideas and with philosophical opinions which have entered ordinary life. Common sense creates the folklore of the future, that is as a relatively rigid phase of popular knowledge at a given place and time’ (PN, p. 362)” (emphasis added). [iv].

Today, our society often looks to inductive science for an external reference of good sense against which to weigh our common sense. Science, we think, ought to provide objective evidence for how we should act individually and as a society. But science must work with the pre-existing terrain of common sense which is messy, slow-to-change, nebulous and carries with it the baggage of other external referents for good sense—such as religious doctrines, moral reasoning, and logical deduction—that have come before. And science itself emerges from people who themselves live within the encompassing medium of common sense.[v]

And yet we must rely upon common sense, in general, since as a practical matter it just takes too much time and energy to rationally and ethically analyze every potential action (and analysis is never perfect in any case). Thus geographer David Harvey asserts, “We cannot understand anything other than ‘common sense’ conceptions of the world to regulate the conduct of daily life” [vi]. The word regulate here begins to imply a more-than-superficial connection between the ways in which individuals act in their private lives and the ways in which societies act collectively through government. Many people are familiar with the idea that government imposes restrictions upon the private lives of individuals. However, it is a two-way street: the form that government takes is shaped by the ways in which people lead their lives.

Modern government trends toward governing “with the grain”—its philosophy is to act less like a drill sergeant and more like the conductor of an orchestra, serving as a point of reference to guide everyone in playing the right part at the right time at the right tempo such that a harmonious whole emerges. Thus to govern today, to develop and put into action sensible policies, requires an intimate understanding of common sense, for the former can only be effective if it accommodates the latter. Every policy, every attempt at what we might call good sense, must be ‘refracted’ through the common sense ways in which people lead their day-to-day lives, like light filtering through a prism. Likewise for the study of government (or in my case, environmental governance), for as sociologist Mitchell Dean puts it:

To analyse government is to analyse those practices that try to shape, sculpt, mobilize and work through the choices, desires, aspirations, needs, wants and lifestyles of individuals and groups. This is a perspective, then, that seeks to connect questions of government, politics and administration to the space of bodies, lives, selves and persons [vii].

To the extent that to govern well also entails critical examination of the common sense of governing, which might be seen as an attempt to form a good sense of good government of common sense (too meta?), the ways in which we conceptualize government and its relation to both common sense and good sense (such as that offered by science) cannot be separated from the practice of government.

This is not just an academic point, but a practical lesson in government, as demonstrated in this discussion with a man who has a lot of personal experience wrestling with the relationship between common sense and sensible policy:

Chris Hughes (interviewer): Can you tell us a little bit about how you’ve gone about intellectually preparing for your second term as president?

Barack Obama: I’m not sure it’s an intellectual exercise as much as it is reminding myself of why I ran for president and tapping into what I consider to be the innate common sense of the American people. The truth is that most of the big issues that are going to make a difference in the life of this country for the next thirty or forty years are complicated and require tough decisions, but are not rocket science…

So the question is not, Do we have policies that might work? It is, Can we mobilize the political will to act? And so, I’ve been spending a lot of time just thinking about how do I communicate more effectively with the American people? How do I try to bridge some of the divides that are longstanding in our culture? How do I project a sense of confidence in our future at a time when people are feeling anxious? They are more questions of values and emotions and tapping into people’s spirit.

What the President acknowledges in this passage is the importance of knowing, intimately, the ordinary routines, values, and beliefs that real Americans use to get through each day—their common sense—and linking that grassroots sort of sense with the policy sort of sense that is concerned with the grand abstractions with which government concerns itself, such as the nation, the economy, the environment and ‘the general Welfare’ (to quote the preamble to the US Constitution). Thus his admission later in the interview that his administration should focus on “spending a lot more time… in a conversation with the American people as opposed to just playing an insider game here in Washington.”

Of course, as Gramsci wrote and Hall emphasized, common sense is both “fragmentary” and “continually transforming”—it is by nature mercurial and inchoate, often at odds with itself and internally inconsistent. Policy, by contrast, is designed to impose coherence and stability upon the dynamic and changeable currents of common sense. So while sensible policy must respond to those currents, as I will discuss in the next post, it cannot rely entirely upon common sense to provide the signposts toward good sense.

Why do we eat what we eat?

To take a concrete example, consider recent public policies relating to food, such as ballot initiatives to ban large sugary soft drinks in some cities, laws to force labeling of GMO ingredients, or requirements for schools to offer more fruits and vegetables in cafeterias. These policies can only be effective if they can successfully build upon the existing foundation of common sense ways of eating—the collective habits that all of us together practice in our daily acts of munching, dining, snacking, lunching, and breaking fast.

But what would it take to understand the common sense of eating? We are, each of us every day, actively engaged in producing and reproducing common sense for diet. Consider why you eat what you eat. On the surface, it seems a simple matter to list out the reasons behind eating certain foods and not eating others. We might start listing off criteria: cost, taste, aesthetic appeal, freshness, convenience, accessibility, nutritional value, presence or absence of certain ingredients (e.g. vitamins or allergens), whether it is certified organic, fair trade, or local.  Clearly there are many characteristics we might look for in our food, but how do we know that the foods we are choosing among are any of these things?

Let’s think about that question for a minute. First we have our senses—we can taste, touch, smell, listen and look. These sensory perceptions give us direct information that helps us pick out our food. If an apple has mushy brown spots all over it, the tilapia smells extremely fishy, or the watermelon sloshes too much when shaken, then they’re probably bad.

In addition to our senses, we have many indirect means for learning about the food. In the moment, for example, we can read the product labeling. Labeling contains the abstracted information that travels along with the food and tells us about it. From as simple a bit of information as the price per pound and the weight of the food to as complex a bit of information as the percent of recommended daily value of sodium or the USDA organic label, the information accompanying the food itself strongly influences how we know if it is good to eat. It is hard to overstate the importance of labeling today. Think of how often you look at the ingredients list, check the seals of certification for organic or kosher, review the allergen information, or consider the calories per serving before deciding to buy a given food item.

But what we can learn about food in the moment is only part of what informs our understanding of what is good to eat. We have past experience and familiarity to guide us as well. If I have eaten kumquats, Oreo cookies, fried okra, or raw cheese in the past and enjoyed them without any immediate problems arising, I’m more likely to try them again in the future. The more we eat a food, the more familiar we become with eating it. After a while, we don’t have to think about the individual food choices much at all: we can rely on past experience to hold true in the future. Since we are social beings, it’s not just our own experiences we draw upon. Chances are we eat a lot of the same things that the people close to us eat because we trust the judgments of those around us—parents, role models, friends, and so on.

This is where advertising and marketing enter the picture. These tactics strongly influence our sense of familiarity with certain foods, usually through the intermediary symbolism of the brand. Whether we acknowledge it or not, many of us have brand loyalties of one form or another that have nothing to do with our senses, labeling, what we have eaten frequently in the past, or what the people close to us eat. That’s the power of marketing.

Journalism can also affect our sense of familiarity with foods. Food sections in newspapers, blogs, TV programs, and so forth may all introduce us to new foods and bolster our confidence in foods we already know. News can also speak to our intellectual understanding of food. In recent memory, reports about the health effects of salt, high fructose corn syrup, or trans fatty acids have all had a tremendous impact on how people make eating choices, both individually and as a matter of public policy.

Which brings us to the role of science in defining which foods are good to eat and which are not. Increasingly, people take into consideration what the experts say when making eating choices. Nutritionists, dieticians, food scientists, doctors and their professional organizations and expert committees frequently enter the public limelight with a new finding, recommendation, or warning about food. These expert opinions, which speak for science, carry great weight in shaping our everyday understanding of what foods are good to eat (or not).

As you can see from this lengthy discussion, the sources of information that feed into any given eating decision are manifold. However, does each of us actually consider each of these factors and sources of information every single time we make a choice about what to eat? Of course not. The majority of the time, we choose by habit, “an acquired behavior pattern regularly followed until it has become almost involuntary” (or non-conscious). However, habit is not just an individual trait, but a collective trait. Habit can also mean “customary practice”, or just “custom”, as in the habit of shaking hands when meeting another person or saying “Hello” when answering the phone. Habits or customs are built on common sense, or collective bundlings of wisdom, values, and assumptions that people use to make everyday decisions about all sorts of things, like what to eat.

Wrapping Up

We’ve now covered some basic points on the relationship between common sense and good government. Before I continue the discussion in the next couple of posts, which explore the relationship through examples from food and agriculture, I’d like to raise a question as food for thought: why govern?

Why do we elect people like the president, like our senators, representatives, governors, mayors, aldermen? Why do we employ tens of thousands of civil servants, bureaucrats, and other government workers? What is their purpose? What is the purpose of government?

Keep this in mind as we go through concrete cases in the next few posts. I’ll come back to this question at the end of this series.


[i] This thesis might be alternatively stated: government relies on establishing a dominant environmental frame that defines problems between people and nature, identifies acceptable solutions for dealing with those problems, and imagines the sort of futures which those solutions are supposed to attain.

[ii] Gramsci, Antonio. Selections from the Prison Notebooks of Antonio Gramsci. Edited by Quintin Hoare. New York: International Publishers, 1972. p. 419.

[iii] The editor to the Prison Notebooks notes that “[Gramsci] uses the phrase ‘good sense’ to mean the practical, but not necessarily rational or scientific, attitude that in English is usually called common sense” (p. 322).

[iv] Hall, Stuart. “Gramsci’s Relevance for the Study of Race and Ethnicity.” Journal of Communication Inquiry 10, no. 2 (June 1, 1986): 5–27. He cites Gramsci, Prison Notebooks, p. 362.

[v] Gramsci differentiated between organic philosophy, which belonged to all people, and what he called “the philosophy of the philosophers”, which he used to refer to the theories produced by elite thinkers to be imposed upon the unthinking masses. That sort of ‘philosophy’, although it might overlap with science (think of eugenics), did not equate to good sense. As the editor to Prison Notebooks explains, “The critique of ‘common sense’ and that of ‘the philosophy of the philosophers’ are therefore complementary aspects of a single ideological struggle” (p. 322). One of the refreshing aspects of Gramsci’s perspective is that he rejects both an anti-intellectual herd mentality and the rule of experts, preferring instead to promote the idea that all people are, or can be, intellectuals in their own way. Hall writes that, “[Gramsci] insists that everyone is a philosopher or an intellectual in so far as he/she thinks, since all thought, action and language is reflexive, contains a conscious line of moral conduct and thus sustains a particular conception of the world (though not everyone has the specialized function of ‘the intellectual’)” (ibid). Good sense is a publicly accessible good, which implies that the purpose of good government is neither to impose some pre-formed theory of what’s best for everyone [authoritarianism in the extreme] nor to stand back and let things take their course [laissez faire], but rather to help organize individual citizens’ own capacity for making and following good sense.

[vi] Harvey, David. Spaces of Global Capitalism. London; New York, NY: Verso, 2006. p. 84.

[vii] Dean, Mitchell. Governmentality : Power and Rule in Modern Society. London; Thousand Oaks, Calif.: SAGE, 2009. p. 20.
