Category Archives: Basic Concepts

Common Sense, Science and Government Part III: Manufacturing the sweet tooth

I ended the last post with the idea that making policy and engaging in government is a process of shaping common sense. The reason is that government, unlike direct rule, relies upon the consent of the governed (government must ‘go with the grain’). People generally consent to live under a particular social order when that order seems perfectly natural and normal; consent is most assured at the point when people can’t envision an alternative way of doing things, or shrug their shoulders and lament “that’s just the way it is.” In this way, consent to be governed a certain way by a certain set of people is grounded in the terrain of common sense.[i] Without consent, there can be no government.[ii]

In the best case, the need for consent produces a “government of the people, by the people, for the people”, as President Lincoln famously proclaimed in the Gettysburg Address. In the ideal, democracy bubbles up from the grassroots: the citizenry consents to the rule of law because they define the law, and trust that the state they have chosen to implement that law serves their best interests. In other words, in an ideal case the consent of the governed is an active consent, predicated on the assumption that the law is a fair application of good sense to common public problems.

However, the populace can also passively consent to a government imposed from the top down. Thoughtful public deliberation can be bypassed altogether if an agenda of government can be made to fit smoothly within the existing framework of common sense. As we know from Gramsci, common sense is inherently dynamic. It changes and adapts over time due to chance and the aggregated choices of individuals, but it may also change to accommodate new realities of life (e.g. novel technologies or occupations) or through intentional manipulation (e.g. via media, education, or propaganda). Here’s how David Harvey puts it:

What Gramsci calls ‘common sense’ (defined as ‘the sense held in common’) typically grounds consent. Common sense is constructed out of longstanding practices of cultural socialization often rooted deep in regional or national traditions. It is not the same as the ‘good sense’ that can be constructed out of critical engagement with the issues of the day. Common sense can, therefore, be profoundly misleading, obfuscating or disguising real problems under cultural prejudices. Cultural and traditional values (such as belief in God and country or views on the position of women in society) and fears (of communists, immigrants, strangers, or ‘others’) can be mobilized to mask other realities.[iii]

More importantly, these elements of common sense can be mobilized to mask the redistribution of benefits and burdens, to advantage some at the expense of others. But that’s a lot of abstraction to begin with, so I’ll turn now to some concrete examples to try to paint a clearer picture of the dangers inherent in fiddling with common sense (as counterpoint to the previous post, in which I argued for the dangers inherent in not fiddling).

In the rest of this post and the next, we will look at two parallel historical cases in which people changed their eating habits abruptly for reasons largely beyond their immediate control. For the remainder of this post, we will look at the development of a sweet tooth among the British working classes in the 17th through 19th centuries and the ways in which this dietary shift dovetailed with consent to an industrial capitalist mode of organizing people’s relationship with nature. In the next post we will look at the introduction of white bread to the American middle class at the turn of the 20th century, and the ways in which the store-bought loaf acclimated Americans to the idea that experts know best how to organize relations between people and nature.

Habituating to Sweetness and Consenting to Industrial Capitalism

In his classic treatise Sweetness and Power, Sidney Mintz takes a deep look at the deceptively simple idea of the ‘sweet tooth’. While today we often take it as fact that people like sweet foods, even to the point of self-harm, societal relationships with sweetness vary widely across both geography and time.[iv] At a time when the ubiquity of sugar in our diets is under intense scrutiny, even in the UK[v] (the birthplace of the modern sweet tooth, as we’ll see), the irony that this problem was intentionally engineered is especially striking.

Just a few centuries ago, concentrated sweetness such as sugar was rare and expensive, and most people didn’t have it or even realize that they might want it.[vi] Such was the case in Medieval England, where merchants sold sugar, at exorbitant prices, as a prized luxury only the very rich and powerful could afford.[vii] Between the mid-17th and mid-19th centuries, however, sugar experienced a reversal of fortunes: from spice of kings to common man’s fare, so that by 1900 sugar accounted for 20% of dietary calories in England. “What turned an exotic, foreign and costly substance into the daily fare of even the poorest and humblest people?” asks Mintz. What he is trying to understand is a sea change in common sense about food: the ways in which people, seemingly out of the blue, became “firmly habituated” to eating sugar and consuming sweetness.

Unraveling the puzzle takes close attention to the everyday ways in which people decide what to eat, and to the political, economic, health, and environmental repercussions of diet. Toward this end, Mintz breaks his main arguments into three sections, titled Production, Consumption, and Power. On the production side, sugar plantations slowly spread from their origins in the Orient to the Arab empire, then to the Mediterranean and, by the 17th century, to the New World. The salient point is that sugar plantations pioneered industrial and capitalist forms of organizing production and labor long before the start of the Industrial Revolution and the advent of capitalism (at least, so far as these things are conventionally dated by historians). During the late 1600s and early 1700s, plantations in the West Indies combined the field and the factory in one centralized operation designed to maximize output of a single commodity for export, with the single-minded goal of reaping large, rapid profits for absentee owners and investors back in England and on the continent (these speculators kept their personal costs down by using slave labor).[viii] The following images, dating from the 17th to 19th centuries, illustrate how “factory and field are wedded in sugar making.”

Sugar boiling house

“Most like a factory was the boiling house,” writes Mintz (p. 47). Alongside this print, attributed to R. Bridgens, c. 19th century (courtesy of the British Library), he included the following descriptive passage from a plantation owner in Barbados, describing the boiling house c. 1700: “In short, ‘tis to live in perpetual Noise and Hurry, and the only way to Render a person Angry, and Tyrannical, too; since the Climate is so hot, and the labor so constant, that the Servants [or slaves] night and day stand in great Boyling Houses, where there are Six or Seven large Coppers or Furnaces kept perpetually Boyling; and from which with heavy Ladles and Scummers they Skim off the excrementitious parts of the Canes, till it comes to its perfection and cleanness, while other as Stoakers, Broil as it were, alive, in managing the Fires; and one part is constantly at the Mill, to supply it with Canes, night and day, during the whole Season of making Sugar, which is about six Months of the year.”

Sugar mill in the Antilles, 1665

A sugar mill belonging to Phillippe de Longvilliers de Poincy, from: Charles de Rochefort. Histoire naturelle et morale des iles Antilles de l’Amérique. A Roterdam: chez Arnould Leers, 1665 [FCO Historical Collection, via King’s College London].

Digging the Cane-holes - Ten Views in the Island of Antigua (1823), plate II - BL

“Digging the Cane-holes”, in Ten Views in the Island of Antigua (1823), plate II – BL. By William Clark, via Wikimedia Commons. More images from Ten Views are available at Wikimedia Commons.

Importantly, the plantations initiated exponential growth in sugar production before the demand existed to consume all that sweetness. This posed something of a problem, for the many new goods produced by exploiting people and nature in the colonies of the British Empire and its peers—also including coffee, tea, and cocoa—threatened to swamp the limited commodity markets back in Europe. What was needed was rapidly expanding consumer demand for all of these new goods, and Mintz points out that this is exactly what happened with the widespread transformation of sugar from “costly treat into a cheap food”. To make a long and very detailed chapter short, the use of sugar as a status symbol among the rich and powerful percolated (with encouragement) down to the lower classes, who sought to emulate their social superiors. At the same time, the uses of sugar in the diet diversified, especially through synergies with other plantation commodities (chocolate, tea, and coffee) and through displacing traditional staples that people no longer produced for themselves (because they now spent their time working in factories instead of on farms). For example, people who couldn’t afford butter (and no longer had access to cows to produce their own) could instead eat jams and preserves with their daily bread.

At the same time as the working classes were accepting sugar as food, the powerful—first aristocrats and merchants, and later the rising industrial and trade capitalists—were also adjusting their relationship to sugar. From a simple vehicle for affirming social status through direct consumption, sugar came to be seen and used as a vehicle for accumulating wealth and solidifying the British nation (and empire). In a paradigm shift in contemporary attitudes toward consumption, it was during this period that political economists first recognized that demand didn’t have to remain constant (tied to existing subsistence levels), but rather that it could be elastic.[ix] Not only did this realization mean that capitalists could extract greater effort from laborers, who “worked harder in order to get more”, but it unleashed the hitherto unanticipated growth engine of working-class purchasing power, providing a ready sponge to soak up increasing commodity production and make owners an obscene amount of money.
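(For readers who want the modern gloss, the economists’ notion of ‘elastic’ demand can be written with the standard textbook definition of income elasticity—my formalism, not Mintz’s. Mercantilist thinking in effect assumed this elasticity was roughly zero, with demand fixed at subsistence; the new capitalist thinking treated it as positive, so that rising wages meant rising consumption of goods like sugar.)

```latex
% Income elasticity of demand (illustrative gloss, not from Mintz):
% the percentage change in quantity demanded Q per percentage change
% in income Y. Mercantilism assumed, in effect, \varepsilon_Y \approx 0;
% the new thinking treated \varepsilon_Y > 0.
\[
  \varepsilon_Y = \frac{\Delta Q / Q}{\Delta Y / Y}
\]
```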

So all of these things happened at about the same time: sugar production boomed, capitalists made lots of money, basic foods were no longer produced at home, and people developed a taste and preference for sweetness. Rejecting the idea that such a coincidence happened by mere chance, Mintz contends that these events are related through the intricate dance of power: the power to bestow meaning on vegetable matter, to transform a simple reed into food, and thence into a pillar of empire and of nascent capitalism. A slippery concept in the absence of overt force or coercion, power here concerns the question of who ultimately holds the reins of common sense, and thus steers the vast course of social organization. Such power is very difficult to observe directly, and does not necessarily fit tidily into bins of ‘cause’ and ‘effect’. So Mintz instead turns to indirect evidence in the form of motive: who profited, and who didn’t, from the new British sweet tooth?[x]

While “there was no conspiracy at work to wreck the nutrition of the British working classes, to turn them into addicts, or to ruin their teeth,” the widespread use of sugar as food clearly benefited the sugar plantation owners, and also those who ran and operated the wheels of empire. It benefited manufacturers by making factory workers and their families dependent upon their jobs and wages to buy the new imported food goods they needed to continue living. Mintz, through careful anthropological interpretation, shows that the common people had no more free will in what to consume than they did in how to produce (i.e. by selling their labor power for wages): “the meanings people gave to sugar arose under conditions prescribed or determined not so much by the consumers as by those who made the product available” (p. 167). Though the mostly trivial webs of meaning spun by individuals lead us to believe in free choice in the marketplace, observation shows that our small individual webs of meaning are contained in and subsumed by “other webs of immense scale, surpassing single lives in time and space” (p. 158). Whoever can gain control of the shape and direction of these larger webs—such as common sense—gains control over the mass of the people in a way that is not readily recognizable.



[i] I take this basic point from David Harvey’s chapter on “The Construction of Consent” in A Brief History of Neoliberalism, p. 39-41.

[ii] Out of concern for space, I grossly abbreviated the continuity of relationship between common sense, consent of the governed, and government. I want to note here that Foucault’s last writings, e.g. History of Sexuality, Vol. II & III, deal extensively with the idea of ethics, or “techniques of the self”. In a way, an ethic describes the rules that people use to regulate, or govern, their own personal behavior. If we want to talk about a government that rules with the grain, then it has to be a government that engages with these personal ethics—consent of the governed, then, can also be construed as the alignment of individual ethics with government, of techniques of the self with techniques of discipline (the relationship of ruler to subject). Given that personal ethics are informed as much by normalized (i.e. taken-for-granted) habits and patterns of behavior as by rational thought and decisive action, common sense can also be taken to describe the terrain of ethics to which the populace subscribes.

[iii] David Harvey, A Brief History of Neoliberalism (Oxford University Press 2005), p. 39. This paragraph is from his chapter on “The Construction of Consent”, to explain why people have accepted the ‘neoliberal turn’ in global governance, which basically holds to the philosophy that the social good is maximized by funneling all human relations through the mechanism of a market transaction, even though many of the policies pursued under this program have demonstrably negative effects on the well-being of hundreds of millions of people while simultaneously lining the pockets of a far smaller set.

[iv] “There is probably no people on earth that lacks the lexical means to describe that category of tastes we call ‘sweet’… But to say that everyone everywhere likes sweet things says nothing about where such taste fits into the spectrum of taste possibilities, how important sweetness is, where it occurs in taste-preference hierarchy, or how it is thought of in relation to other tastes.” Mintz, Sidney. Sweetness and Power: The Place of Sugar in Modern History. New York: Penguin Books, 1985. pp. 17-18.

[v] The BBC, for example, ran a lengthy series of stories on sugar throughout 2014. In March there was “WHO: Daily sugar intake ‘should be halved’”, in June there was “How much sugar do we eat?”, and in September there was “Sugar intake must be slashed further, say scientists”. And just this week (Jan. 5, 2015), the BBC ran “Cut back amount of sugar children consume, parents told”. Today, the entire ethics (see endnote ii) and government of sugar consumption are changing together, and more consciously than perhaps at any previous point in history.

[vi] Coincidentally, after I had already composed most of this post, I saw the BBC documentary Addicted to Pleasure, which first aired in 2012. Hosted by actor Brian Cox, who himself suffers from diabetes and must carefully manage his personal sugar intake, the documentary covers much of the story told by Mintz, albeit minus most of the scholarly critique of colonial exploitation and the oppression of working-class people.

[vii] In the 16th century, the English royalty were noted for “their too great use of sugar”, which was used to demonstrate wealth and status at court—Queen Elizabeth I’s teeth, for example, turned black from eating so many sweets. Mintz, p. 134.

[viii] Mintz, p. 55 and 61. The classic definition of capitalism requires well-developed markets, monetary currency, profit-seeking owners of capital, the alienation of consumers from intimate involvement in production, and ‘free’ labor (i.e. not slaves or land-bound peasants, but rather workers paid in wages). The sugar plantations of the mercantile colonial period fit some but not all of these criteria.

[ix] Mintz is careful to demonstrate how political economists changed their thinking on the role of consumers in national economies. Whereas mercantilists had assumed national demand for any given good to be more or less fixed, a new wave of capitalist thinking held that demand could increase by enrolling the people of a nation more completely in market relations—people should no longer subsist on the goods they produce for themselves, but should get everything they consume through the market (note that this thinking forms a direct connection between personal ethics and social organization, or government). In return, they should sell their labor on the market as well. See pp. 162-165.

[x] “To omit the concept of power,” Mintz writes, “is to treat as indifferent the social, economic, and political forces that benefited from the steady spread of demand for sugar… The history of sugar suggests strongly that the availability, and also the circumstances of availability, of sucrose… were determined by forces outside the reach of the English masses themselves.” p. 166.


Common Sense, Science and Government Part II: A Case of Quinoa

The first case I’ll discuss focuses on quinoa, a grain-like staple more closely related to beets and spinach than to true grasses such as wheat. Once a rather obscure food in the US, quinoa experienced a rapid popularity spike beginning in 2007, when consumers in the global north fed into a new narrative extolling its virtues for health and social justice alike. It began with nutrition-minded journalists hailing quinoa as a “new health food darling”. High in protein, and in fact featuring all nine essential amino acids, quinoa could also serve as a gluten-free substitute for wheat or barley. The added mystique of a “rediscovered” (read: Columbused) ancient staple, a “lost” Inca treasure, also dovetailed nicely with the popular paleo-diet trend, which urged a dietary return to an idealized, simpler time when people were more closely attuned to nature. For all of these reasons, quinoa received great publicity as a sacred, super crop.

An interrelated second narrative presented quinoa as a way for consumers to also support fair trade and sustainable development. Buying quinoa meant supporting farmers in developing countries such as Peru and Bolivia while allowing them to maintain a traditional way of life. The pitch for quinoa on Eden Organic’s website, for example, reads, “The most ancient American staple grain. Sustainably grown at over 12,000 feet in the Andes helping preserve native culture.” In 2012, when I first looked into the quinoa case, I came across a fair trade certified brand, La Yapa (now defunct), which summed up a stronger iteration of this marketing narrative:

“In the past few years, the income of quinoa farmers has doubled with the increase in volume and prices… The farmer’s quality of life also has increased steadily… By choosing this Fair Trade Certified™ product, you are directly supporting a better life for farming families through fair prices, direct trade, community development, and environmental stewardship.”

The global food security community also picked up on the quinoa fanfare, culminating in the 2011 decision by the United Nations General Assembly to declare 2013 “The International Year of Quinoa”. The press release for the occasion cites the potential contributions of quinoa to then-Secretary-General Ban Ki-moon’s Zero Hunger Challenge, “not only because of its nutritional value but also because most quinoa is currently produced by smallholder farmers… ‘The crop holds the promise of improved incomes – a key plank of the Zero Hunger Challenge,’ Ban said.” A special report was released with the goal of “improving knowledge and dissemination of this ancient crop, which has a significant strategic value for the food and nutritional security of humanity.”

The other piece of this global food security narrative touted the environmental advantages of traditional subsistence crops like quinoa (e.g. amaranth, teff, fonio, etc.), especially their resilience in the face of global climate change. A recent article from National Geographic captures the essence of this line of thinking:

“[Sustainable agriculture advocates are] increasingly turning to grains that have been the basis of subsistence farmers’ diets in Africa, South Asia, and Central and South America since the time of earliest agriculture. Because such grains adapted to grow on marginal land without irrigation, pesticides, or fertilizers, they are often more resilient than modern commodity crops are.” (emphasis added, also see note at [ii]).

Taken altogether, quinoa has been presented in and to the global north as a win-win-win superfood—good for the health of wealthy consumers, the wealth of poor farmers, and the ecological stability of global agriculture.[i] The overall message to the savvy shopper in New York or Berkeley or Chicago, then, was that quinoa was good to buy.

But complications with that rosy narrative arose just as rapidly as quinoa’s acclaim spread. Demand rose so quickly that the price of quinoa tripled from 2007 to 2010 (Fig. 1).


Fig. 1: Prices of Quinoa at the Farm Gate, 1993-2012 (constant Int. $/tonne). Source: http://faostat3.fao.org/.


Ironically, the food which was celebrated as a “cultural anchor and a staple in the diet of millions of people throughout the Andes for thousands of years” seemed to have been priced out of the budgets of those very people by the “agricultural gold rush.” Over the same time period, production volume accelerated its growth and the area cultivated for quinoa expanded substantially, especially in Bolivia (Figs. 2 and 3).


Fig. 2: Tonnes of Quinoa Produced Annually, 1994-2013. Source: http://faostat3.fao.org/.


Fig. 3: Hectares of Quinoa Harvested Annually, 1994-2013. Source: http://faostat3.fao.org/.
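As an aside for readers who want to check these trends themselves, here is a minimal sketch of how the price run-up behind Fig. 1 might be recomputed from a FAOSTAT data export. The file name and the exact column names are my assumptions, based on a typical FAOSTAT CSV download, not something specified by the figures above:

```python
# Sketch only: assumes a FAOSTAT producer-price CSV export with the usual
# "Area", "Item", "Year", and "Value" columns (Value in constant Int. $/tonne).
import pandas as pd

prices = pd.read_csv("faostat_quinoa_producer_prices.csv")  # hypothetical file name
bolivia = prices[(prices["Item"] == "Quinoa") & (prices["Area"].str.contains("Bolivia"))]
by_year = bolivia.set_index("Year")["Value"].sort_index()

print(by_year.loc[2007:2010])  # the run-up described in the text
print(f"2010 price is {by_year[2010] / by_year[2007]:.1f}x the 2007 price")
```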

These numbers seemed to paint a much more complex story than win-win-win: Was high consumer demand in the US and the EU actually taking a staple food away from South American smallholders? Were record-level prices encouraging farmers to plant quinoa on ecologically marginal lands, courting disaster in the form of an Andean equivalent of the Dust Bowl? With soaring prospects for fat profit margins and a global development community hungry for a silver-bullet crop, were Andean smallholder farmers in danger of losing control over quinoa and being pushed out of their own market?[ii]

All of these questions, however, boiled down to one media snippet for global north publics: to eat or not to eat? A rash of headlines in early 2013 posed provocative challenges to the quinoa fad. “Is eating quinoa evil?” quipped The Week, while The Guardian challenged, “Can vegans stomach the unpalatable truth about quinoa?”. Tom Philpott, writing for Mother Jones, tried to restore some sanity with his more nuanced article, “Quinoa: Good, Evil, or Just Really Complicated?”, but the overarching point of reference for the American and European publics had been set. Whether a question of health, the viability of smallholder farming, or environmental sustainability, it had to be framed, in Hamlet-like fashion, as to buy or not to buy?

Lost amid all the hand-wringing were the voices cautioning that the public and the media had fixated on the wrong question. Tanya Kerssen, an analyst at the non-profit organization Food First, writing for Common Dreams, pointed out that to consume or not to consume was a false choice:

“In short, the debate has largely been reduced to the invisible hand of the marketplace, in which the only options for shaping our global food system are driven by (affluent) consumers either buying more or buying less… [T]he so-called quinoa quandary demonstrates the limits of consumption-driven politics. Because whichever way you press the lever (buy more/buy less) there are bound to be negative consequences, particularly for poor farmers in the Global South. To address the problem we have to analyze the system itself, and the very structures that constrain consumer and producer choices…

Consumption-driven strategies, while part of the toolbox for effecting change, are not the only tools. Only by facing the reality that we can’t consume our way to a more just and sustainable world—and examining the full range of political options and strategies—can we start coming up with real solutions.” (emphasis added).

So there we have one example to help illustrate why good policy cannot rely solely upon common sense for guidance. As Gramsci warned, common sense “takes countless different forms” and “even in the brain of one individual, is fragmentary, incoherent” (as quoted in my previous post). To rely upon common sense alone is to follow a fickle and partial guide. The assumptions and tacit beliefs underlying common sense will not always hold up under scrutiny, meaning that developing good policy requires continual critical reflection, public debate, and learning.

Quinoa has risen to prominence because it links key points of contention in global agricultural policy—often voiced in highly abstract statistics on population demographics, epidemiological findings, economic indicators, and environmental qualities—to the daily concern with what to eat, a concern that makes intuitive sense to powerful publics in the global north. While certain policy programs (e.g. leveraging the ‘traditional ecological knowledge’ of smallholder farmers or folding peasants into global commodity food markets) may have gained political traction by adapting their arguments to the contours of common sense, such compromise comes at a cost. In this case, the experiences and perceptions of first-world consumers were naively accepted as the “terrain” of common sense upon which public debates about global poverty, health, and climate change can and should be conducted. However, this common sense represents only a narrow slice of daily life around the globe.

Missing from the common sense of affluent consumers are, for example, the experiences and perspectives of the Andean farmers who grow quinoa and the poor whose health and development so many are concerned with. And this is not to mention the underrepresentation of nonhuman organisms and ecosystems, especially those not explicitly contributing to food commodities (e.g. the ecosystems on marginal lands into which quinoa farming has begun to spread). That translates to a large number of options and strategies that will never even be considered and a large number of unintended consequences that will never be recognized because they are outside the realm of what is commonly familiar to the consumer classes. As Kerssen writes, it would be a rational ideal to “examine the full range”, but if we want to take that process seriously, then we also need to examine the full range of common sense.

As I argued in the previous post, good policy, including environmental and natural resource policy, cannot ignore common sense, but must work with the grain of existing preconceptions and ways of living in the world. What this case highlights is that we also cannot rely solely on common sense to guide us to good policy. As is shown with the quinoa case, common sense—such as the idea that the only lever we have with which to move the world comes in the form of our fork or our wallet—often misses important pieces of the story and can lead us far afield or into a seemingly intractable impasse or an impossible (or false) choice. Critical reflection on the strengths and shortcomings of basic common sense is needed.

From this insight, we can infer that good policy emerges from critical consideration of common sense. Good policy must be built on that existing foundation, but it must also do productive work on people, directing them toward better habits and better ways of living in the world—in short, toward a closer approximation of good sense. Next time, we’ll consider upon what basis, if not common sense, good sense can be gauged.


[i] For an example of the kind of utopian visions that experts began attaching to quinoa’s potential future, a 2014 article by Lisa Hamilton in Harper’s Magazine quotes a prominent Dutch agronomist: “If you ask for one crop that can save the world and address climate change, nutrition, all these things—the answer is quinoa. There’s no doubt about it.”

[ii] This poses a very thorny political-economic question, and one that doesn’t lend itself easily to a simple yes or no response. The Harper’s article (ibid) tackles the complexity in greater depth, but the short version is that with great potential comes great prosperity, and then a great struggle over who has the right to enjoy that prosperity. In past epochs, newly “discovered” crops could be expropriated and spread around the world; examples include potatoes, tomatoes, and maize, all of which are native to the Americas. These plants didn’t just naturally evolve as desirable food crops, however. Rather, the ancestors of the Aztec, Inca, and other indigenous peoples spent millennia’s worth of work breeding them from wild plants. Yet they never saw a penny for sharing those crops with the rest of the world. Instead, that privilege was assumed by European colonists and their descendants, while Native American peoples were violently repressed (and killed). The Andean peoples of Bolivia and Peru are savvy to the long history of indigenous groups losing control over their germplasm heritages and have thus imposed strict sanctions on any sharing of quinoa seeds and genetic information. Thus quinoa finds itself at the heart of a struggle between food sovereignty and food security—an impasse seems to have been reached, with “the poor of the Andes pitted against the poor of the world” (ibid). There are doubtless sensible and just ways to negotiate out of this impasse, which I won’t try to guess at here, but again the point I would like to make is that complex problems require complex (and often messy) responses. Pretending that a simple solution can be found by applying basic common sense (i.e. the needs of the world’s many outweigh the needs of the Andean few, so world development organizations should just go ahead and take quinoa from Bolivians and Peruvians) is not a route to sound policy or good governance.


Common Sense, Science and Government, Part I

In the next set of posts, I draw on a lecture I gave to an undergraduate class on natural resource policy a few years ago to examine the relationship between common sense, science, and government. Revisiting this set of basic relationships will set a conceptual foundation for future posts on more specialized topics such as social construction and co-production.

Some decisions must be made and actions taken at a societal level, and such collective deciding and acting is part of what I mean when I use the word government (to distinguish it from today’s popular usage of the word as a fixed institution). One thesis that I explore in this blog is that all government hinges on defining and manipulating relationships between people and nature.[i] This is a big claim, and in many cases it might be difficult to demonstrate. For that reason, I begin with natural resources.

It seems to me that many people can easily imagine what good natural resource management might mean — clean and safe water, smog-free air, sustainable fisheries and forests, preventing soils from eroding away, preserving wild species from extinction, and so forth — which narrows the gap between common sense and good sense (more on that later) and makes for a good starting place.

As is often my wont, these posts will turn to food and agriculture for concrete case material to help illustrate the general points I would like to make. It might seem unusual to speak of food as a natural resource, but producing food involves the joining and utilization of many other natural resources – water, energy, land and soil, minerals for fertilization, ecosystem services like pollination, sunlight, and of course lots of hard work. Food may be the most complex and vital natural resource we have, which makes it a rich source of information for thinking about common sense, science, and government.

Common Sense and Government

The political theorist Antonio Gramsci, an Italian activist in the years leading up to WWII who wrote his most famous works from prison after being arrested by the nascent fascist regime, turned to the concept of common sense to help explain how fascism could take root in a society. He defined it as:

“…the conception of the world which is uncritically absorbed by the various social and cultural environments in which the moral individuality of the average man is developed. Common sense is not a single unique conception, identical in time and space. It is the “folklore” of philosophy, and, like folklore, it takes countless different forms. Its most fundamental characteristic is that it is a conception which, even in the brain of one individual, is fragmentary, incoherent and inconsequential, in conformity with the social and cultural position of those masses whose philosophy it is.”[ii]

Common sense incorporates all of those beliefs and assumptions that people do not actively question, yet upon which we all rely to guide most of our actions throughout each day. While we might aspire to always make what Gramsci terms ‘an intellectual choice’, to act rationally (first weighing costs and benefits) or ethically (following a set code of conduct)—in short, to follow what we might term good sense[iii]—Gramsci points out that much of the time people instead draw upon prepackaged thoughts and beliefs. We act out of habit as much as we do out of thoughtfulness.

While common sense often approximates good sense, the two are only loosely coupled. Critical theorist Stuart Hall—drawing on the source material in Gramsci’s Prison Notebooks—explains the relationship more fully:

“Why, then, is common sense so important? Because it is the terrain of conceptions and categories on which the practical consciousness of the masses of the people is actually formed. It is the already formed and “taken for granted” terrain, on which more coherent ideologies and philosophies must contend for mastery; the ground which new conceptions of the world must take into account, contest and transform, if they are to shape the conceptions of the world of the masses and in that way become historically effective. ‘Every philosophical current leaves behind a sediment of ‘common sense’; this is the document of its historical effectiveness. Common sense is not rigid and immobile but is continually transforming itself, enriching itself with scientific ideas and with philosophical opinions which have entered ordinary life. Common sense creates the folklore of the future, that is as a relatively rigid phase of popular knowledge at a given place and time’ (PN, p. 362)” (emphasis added).[iv]

Today, our society often looks to inductive science for an external reference of good sense against which to weigh our common sense. Science, we think, ought to provide objective evidence for how we should act individually and as a society. But science must work with the pre-existing terrain of common sense, which is messy, slow to change, and nebulous, and which carries the baggage of other external referents for good sense—such as religious doctrines, moral reasoning, and logical deduction—that have come before. And science itself emerges from people who themselves live within the encompassing medium of common sense.[v]

And yet we must rely upon common sense, in general, since as a practical matter it just takes too much time and energy to rationally and ethically analyze every potential action (and analysis is never perfect in any case). Thus geographer David Harvey asserts, “We cannot understand anything other than ‘common sense’ conceptions of the world to regulate the conduct of daily life”.[vi] The word regulate here begins to imply a more-than-superficial connection between the ways in which individuals act in their private lives and the ways in which societies act collectively through government. Many people are familiar with the idea that government imposes restrictions upon the private lives of individuals. However, it is a two-way street: the form that government takes is shaped by the ways in which people lead their lives.

Modern government trends toward governing “with the grain”—its philosophy is to act less like a drill sergeant and more like the conductor of an orchestra, serving as a point of reference to guide everyone in playing the right part at the right time at the right tempo such that a harmonious whole emerges. Thus to govern today, to develop and put into action sensible policies, requires an intimate understanding of common sense, for the former can only be effective if it accommodates the latter. Every policy, every attempt at what we might call good sense, must be ‘refracted’ through the common sense ways in which people lead their day-to-day lives, like light filtering through a prism. Likewise for the study of government (or in my case, environmental governance), for as sociologist Mitchell Dean puts it:

To analyse government is to analyse those practices that try to shape, sculpt, mobilize and work through the choices, desires, aspirations, needs, wants and lifestyles of individuals and groups. This is a perspective, then, that seeks to connect questions of government, politics and administration to the space of bodies, lives, selves and persons.[vii]

To govern well also entails critical examination of the common sense of governing—an attempt, we might say, to form a good sense of the good government of common sense (too meta?). To that extent, the ways in which we conceptualize government and its relation to both common sense and good sense (such as that offered by science) cannot be separated from the practice of government.

This is not just an academic point, but a practical lesson in government, as demonstrated in this discussion with a man who has a lot of personal experience wrestling with the relationship between common sense and sensible policy:

Chris Hughes (interviewer): Can you tell us a little bit about how you’ve gone about intellectually preparing for your second term as president?

Barack Obama: I’m not sure it’s an intellectual exercise as much as it is reminding myself of why I ran for president and tapping into what I consider to be the innate common sense of the American people. The truth is that most of the big issues that are going to make a difference in the life of this country for the next thirty or forty years are complicated and require tough decisions, but are not rocket science…

So the question is not, Do we have policies that might work? It is, Can we mobilize the political will to act? And so, I’ve been spending a lot of time just thinking about how do I communicate more effectively with the American people? How do I try to bridge some of the divides that are longstanding in our culture? How do I project a sense of confidence in our future at a time when people are feeling anxious? They are more questions of values and emotions and tapping into people’s spirit.

What the President acknowledges in this passage is the importance of knowing, intimately, the ordinary routines, values, and beliefs that real Americans use to get through each day—their common sense—and of linking that grassroots sort of sense with the policy sort of sense concerned with the grand abstractions of government, such as the nation, the economy, the environment, and ‘the general Welfare’ (to quote the preamble to the US Constitution). Hence his later remark that his administration should focus on “spending a lot more time… in a conversation with the American people as opposed to just playing an insider game here in Washington.”

Of course, as Gramsci wrote and Hall emphasized, common sense is both “fragmented” and “continually transforming”—it is by nature mercurial and inchoate, often at odds with itself and internally inconsistent. Policy, by contrast, is designed to impose coherence and stability upon the dynamic and changeable currents of common sense. So while sensible policy must respond to those currents, as I will discuss in the next post, it cannot rely entirely upon common sense to provide the signposts toward good sense.

Why do we eat what we eat?

To take a concrete example, consider recent public policies relating to food, such as ballot initiatives to ban large sugary soft drinks in some cities, laws to force labeling of GMO ingredients, or requirements for schools to offer more fruits and vegetables in cafeterias. These policies can only be effective if they can successfully build upon the existing foundation of common sense ways of eating—the collective habits that all of us together practice in our daily acts of munching, dining, snacking, lunching, and breaking fast.

But what would it take to understand the common sense of eating? We are, each of us every day, actively engaged in producing and reproducing the common sense of diet. Consider why you eat what you eat. On the surface, it seems a simple matter to list out the reasons behind eating certain foods and not others. We might start listing off criteria: cost, taste, aesthetic appeal, freshness, convenience, accessibility, nutritional value, presence or absence of certain ingredients (e.g. vitamins or allergens), whether it is certified organic, fair trade, or local. Clearly there are many characteristics we might look for in our food, but how do we know that the foods we are choosing among actually have any of these qualities?

Let’s think about that question for a minute. First we have our senses—we can taste, touch, smell, listen and look. These sensory perceptions give us direct information that helps us pick out our food. If an apple has mushy brown spots all over it, the tilapia smells extremely fishy, or the watermelon sloshes too much when shaken, then they’re probably bad.

In addition to our senses, we have many indirect means for learning about food. In the moment, for example, we can read the product labeling. Labeling contains the abstracted information that travels along with the food and tells us about it. From as simple a bit of information as the price per pound and the weight of the food to as complex a bit of information as the percent of recommended daily value of sodium or the USDA organic seal, the information accompanying the food itself strongly influences how we know if it is good to eat. It is hard to overstate the importance of labeling today. Think of how often you look at the ingredients list, check the seals of certification for organic or kosher, review the allergen information, or consider the calories per serving before deciding to buy a given food item.

But what we can learn about food in the moment is only part of what informs our understanding of what is good to eat. We have past experience and familiarity to guide us as well. If I have eaten kumquats, Oreo cookies, fried okra, or raw cheese in the past and enjoyed them without any immediate problems arising, I’m more likely to try them again in the future. The more we eat a food, the more familiar we become with eating it. After a while, we don’t have to think about the individual food choices much at all: we can rely on past experience to hold true in the future. Since we are social beings, it’s not just our own experiences we draw upon. Chances are we eat a lot of the same things that the people close to us eat because we trust the judgments of those around us—parents, role models, friends, and so on.

This is where advertising and marketing enter the picture. These tactics strongly influence our sense of familiarity with certain foods, usually through the intermediary symbolism of the brand. Whether we acknowledge it or not, many of us have brand loyalties of one form or another that have nothing to do with our senses, labeling, what we have eaten frequently in the past, or what the people close to us eat. That’s the power of marketing.

Journalism can also affect our sense of familiarity with foods. Food sections in newspapers, blogs, TV programs, and so forth may all introduce us to new foods and bolster our confidence in foods we already know. News can also speak to our intellectual understanding of food. In recent memory, reports about the health effects of salt, high fructose corn syrup, or trans fatty acids have all been tremendously impactful on how people make eating choices, both individually and as a matter of public policy.

Which brings us to the role of science in defining which foods are good to eat and which are not. Increasingly, people take into consideration what the experts say when making eating choices. Nutritionists, dieticians, food scientists, doctors and their professional organizations and expert committees frequently enter the public limelight with a new finding, recommendation, or warning about food. These expert opinions, which speak for science, carry great weight in shaping our everyday understanding of what foods are good to eat (or not).

As you can see from this lengthy discussion, the sources of information that feed into any given eating decision are manifold. But does each of us actually weigh every one of these factors and sources of information every single time we choose what to eat? Of course not. The majority of the time, we choose by habit, “an acquired behavior pattern regularly followed until it has become almost involuntary” (or non-conscious). Habit, however, is not just an individual trait but a collective one. Habit can also mean “customary practice”, or simply “custom”, as in the habit of shaking hands when meeting another person or saying “Hello” when answering the phone. Habits or customs are built on common sense—collective bundlings of wisdom, values, and assumptions that people use to make everyday decisions about all sorts of things, like what to eat.

Wrapping Up

We’ve now covered some basic points on the relationship between common sense and good government. Before I continue the discussion in the next couple of posts, which explore the relationship through examples from food and agriculture, I’d like to raise a question as food for thought: why govern?

Why do we elect people like the president, like our senators, representatives, governors, mayors, aldermen? Why do we employ tens of thousands of civil servants, bureaucrats, and other government workers? What is their purpose? What is the purpose of government?

Keep this in mind as we go through concrete cases in the next few posts. I’ll come back to this question at the end of this series.


[i] This thesis might be alternatively stated: government relies on establishing a dominant environmental frame that defines problems between people and nature, identifies acceptable solutions for dealing with those problems, and imagines the sort of futures which those solutions are supposed to attain.

[ii] Gramsci, Antonio. Selections from the Prison Notebooks of Antonio Gramsci. Edited by Quintin Hoare. New York: International Publishers, 1972. p. 419.

[iii] The editor to the Prison Notebooks notes that “[Gramsci] uses the phrase ‘good sense’ to mean the practical, but not necessarily rational or scientific, attitude that in English is usually called common sense” (p. 322).

[iv] Hall, Stuart. “Gramsci’s Relevance for the Study of Race and Ethnicity.” Journal of Communication Inquiry 10, no. 2 (June 1, 1986): 5–27. He cites Gramsci, Prison Notebooks, p. 362.

[v] Gramsci differentiated between organic philosophy, which belonged to all people, and what he called “the philosophy of the philosophers”, which he used to refer to the theories produced by elite thinkers to be imposed upon the unthinking masses. That sort of ‘philosophy’, although it might overlap with science (think of eugenics), did not equate to good sense. As the editor to Prison Notebooks explains, “The critique of ‘common sense’ and that of ‘the philosophy of the philosophers’ are therefore complementary aspects of a single ideological struggle” (p. 322). One of the refreshing aspects of Gramsci’s perspective is that he rejects both an anti-intellectual herd mentality and the rule of experts, preferring instead to promote the idea that all people are, or can be, intellectuals in their own way. Hall writes that, “[Gramsci] insists that everyone is a philosopher or an intellectual in so far as he/she thinks, since all thought, action and language is reflexive, contains a conscious line of moral conduct and thus sustains a particular conception of the world (though not everyone has the specialized function of ‘the intellectual’)” (ibid). Good sense is a publicly accessible good, which implies that the purpose of good government is neither to impose some pre-formed theory of what’s best for everyone [authoritarianism in the extreme] nor to stand back and let things take their course [laissez faire], but rather to help organize individual citizens’ own capacity for making and following good sense.

[vi] Harvey, David. Spaces of Global Capitalism. London; New York, NY: Verso, 2006. p. 84.

[vii] Dean, Mitchell. Governmentality: Power and Rule in Modern Society. London; Thousand Oaks, Calif.: SAGE, 2009. p. 20.


Metabolic Rift

The concept of metabolic rift is a powerful frame through which to understand the history of relations between people and nature, especially to highlight a contrast between the modern industrial era and a more organic past. I will primarily use the example of agriculture—the growing of food, feed, fiber and fuel to use for human purposes—to explain this frame.

Unbroken Cycles

Metabolic rift draws a metaphor from biology and the metabolism of living organisms. Metabolism refers to the chemical and physical processes by which a living being breaks down some substances to produce energy and uses that energy to produce other substances. The complex flows of energy and materials across ecosystems can similarly be thought of as metabolism on a larger scale. If human societies are included in these systems of exchange, the concept of metabolism as applied to something like agriculture begins to make sense.

We should keep in mind that in order to think about there being a rift, or break, in this metabolism, we must first assume that there used to be an unbroken cycle of material and energy flows. Frequently, this assumption is justified by reference to ancient or “traditional” societies. For example, in their article Breaking the Sod: Humankind, History, and Soil,[i] scholars J.R. McNeill and Verena Winiwarter write of an organic nutrient cycle maintained by people living thousands of years ago:

Neolithic farmers, in southwest Asia and elsewhere, depleted soils of their nutrients by cultivating fields repeatedly, but they simultaneously enriched their soils once they learned to keep cattle, sheep, and goats, pasture them on nonarable land, and collect them (or merely their dung) upon croplands… When a population lived amid the fields that sustained them, the net transfer of nutrients into or out of the fields remained minor, as after shorter or longer stays in human alimentary canals and tissues, nutrients returned to the soils whence they had come.

Another example is given by environmental historian Richard White in The Organic Machine,[ii] in which he writes about the complex social and technological system by which peoples indigenous to what is now known as the Columbia River basin in the Pacific Northwest made their livelihoods from salmon:

For thousands of years Indian people had recognized and understood the blessings of a world in which small fish left the river, harvested the greater solar energy available in the ocean, and returned as very big fish. These fish always returned at the same time to the same place, and in their return they followed paths which took them to the spots where human labor secured their capture.

The annual harvest of the returning salmon, rich in nutrients and calories gathered from the open ocean, supported one of the densest populations of people in North America before Europeans colonized the continent, and had done so for a very long time.

What is common to both of these stories is a narrative of cyclical flows—what leaves in one form returns in another, and that which is used in one stage is renewed in another. Along the way, people and other organisms can draw off some of the energy to make a living without interrupting the overall capacity of the cycle to bring the same benefits back the next round. Basically, this is the definition of sustainability.[iii]
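Before turning to the historical breaks, it may help to make the arithmetic of a cycle concrete. The following toy model is purely my own illustration—the parameters are invented and drawn from none of the sources above—but it captures the basic logic: when everything harvested eventually returns to the field, the nutrient stock holds steady indefinitely; when some fraction is exported and never comes back, the stock runs down.

```python
# Toy model of an agricultural nutrient cycle (invented parameters, for
# illustration only). Each season a crop draws nutrients from the soil
# "stock"; a fraction `return_rate` of the harvest comes back as manure
# or night soil. A closed cycle (return_rate = 1.0) is sustainable; an
# open one (return_rate < 1.0) is a metabolic rift that exhausts the soil.

def simulate(seasons: int, stock: float, uptake: float, return_rate: float) -> float:
    """Return the soil nutrient stock after the given number of seasons."""
    for _ in range(seasons):
        harvest = min(uptake, stock)    # the crop cannot take what is not there
        stock -= harvest                # nutrients leave the field in the crop
        stock += harvest * return_rate  # some fraction returns as waste
    return stock

closed = simulate(seasons=100, stock=100.0, uptake=10.0, return_rate=1.0)
rift = simulate(seasons=100, stock=100.0, uptake=10.0, return_rate=0.8)
print(f"closed cycle after 100 seasons: {closed:.1f}")  # 100.0 -- stock unchanged
print(f"broken cycle after 100 seasons: {rift:.1f}")    # ~0.0 -- 'worn out' soil
```

Nothing in the model says where the missing fraction goes; historically, as the next section describes, it went to the cities, where it reappeared as a waste problem.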

Breaking Cycles

However, there are many historical examples where cycles which had lasted for hundreds if not thousands of years were broken by people eager to enhance productivity (and profit) even further. In the case of the Columbia River, the wealth of salmon did not last long once white Americans began to oust indigenous peoples and collect the fish for themselves. In fact, Americans so thoroughly disrupted the existing cycle that “[I]n the face of such regularity and bounty, the Americans began breeding the fish in factories and setting out to sea to catch them.”[iv] Later on, of course, the annual bounty of the returning salmon faded from the economic landscape of the Columbia, replaced by the promise of raw power in the form of hydroelectricity and irrigation.

Farming systems also experienced rifts in the metabolic cycles that had sustained agricultural productivity for thousands of years. The worry that soils were becoming “worn out” or “exhausted” was a major concern for Europeans in the 18th and early 19th centuries.[v] Observers at the time cited the physical separation of farms and cities—the sites of production and consumption, respectively—as a primary cause of metabolic rift. American farmers breaking sod in the North American plains to grow grains for shipment to eastern cities such as New York, or European metropolises like London, were doing little more than “robbing the earth of its capital stock”, as George Waring wrote in an agricultural census report published in the 1850s.[vi] On the other side of the Atlantic, the German agronomist Justus von Liebig, often referred to as the father of agricultural chemistry, argued fervently in the mid-19th century that selling food to distant cities, which never returned the material (i.e. as manure or “night soil”), inevitably degraded the soil.[vii] The separation of plant crops and animal livestock into distinct production systems further broke up the cycle—manure, like human sewage, was no longer being returned to renew the soil. Ironically, the buildup of human and animal waste created a new, separate problem: what to do with all the hazardous material![viii]

The Law of Return

People have long recognized the wastefulness of breaking metabolic cycles, and have often not hesitated to condemn social and economic systems that incentivize this sort of rift. One of my favorite denouncements of the emerging industrial mode of agriculture comes from the English agronomist Sir Albert Howard, writing in 1947 (note the parallels to the ways in which sustainability, and unsustainability, are discussed today, see endnote iii):

The using up of fertility is a transfer of past capital and of future possibilities to enrich a dishonest present: it is banditry pure and simple. Moreover, it is a particularly mean form of banditry because it involves the robbing of future generations which are not here to defend themselves.

Howard had a different vision for how to practice agriculture, which embraced unbroken energy and nutrient cycles as the key for land to sustain its productive benefits for people. He conducted a multi-decade study of composting practices in India that laid the groundwork for his Law of Return, which he describes eloquently in this passage from a 1947 publication:[ix]

The subsoil is called upon for some of its water and minerals, the leaf has to decay and fall, the twig is snapped by the wind, the very stem of the tree must break, lie, and gradually be eaten away by minute vegetable or animal agents; these in turn die, their bodies are acted on by quite invisible fungi and bacteria; these also die, they are added to all the other wastes, and the earthworm or ant begins to carry this accumulated reserve of all earthly decay away. This accumulated reserve—humus—is the very beginning of vegetable life and therefore of animal life and of our own being.

Any break in this intricate cyclical process would carry dire consequences for soil fertility, and by extension the health of plants, animals, and people. Howard believed that human health was linked to the condition of the soil: preserving the Law of Return and maintaining healthy soils would eliminate the source of most diseases. “Soil fertility,” he wrote, “is the basis of the public health system of the future.” For this reason his Indore process for composting is minutely concerned with recycling wastes back onto farm fields, and preserving organic material and live organisms in the final product. Howard recognized that all agriculture must be an intervention into natural processes, but he drove home that the farmer operated within limits set by the cycle of life: “The first duty of the agriculturist must always be to understand that he is a part of Nature and cannot escape from the environment.” The proper method of agriculture, in his view, involves the careful attention to and maintenance of autonomous metabolic cycles. These processes could be adapted somewhat to benefit people, but people also had to adapt to the processes.

Bandaging the Rift

However, repairing or reconnecting the broken cycles has historically not been the solution of choice for metabolic rift. Howard wrote at a time when the concept of people adapting to the metabolic rhythms of nature did not receive much public support. At the close of WWII, America was about to lead the world into a wave of agricultural development that embraced not the law of return, but the law of economies of scale. Rather than treating farms as embedded within living, dynamic systems that cycled energy and nutrients to the mutual sustainment of all, per Howard’s vision, farms would be factories,[x] a stopover on a one-way passage from mines and wells to waste dumps. Crucially, in order to transform farms into factories for food, fiber and fuel, more concentrated inputs of energy and nutrients were needed than the organic metabolic cycles could provide. By organic, I refer to those materials and energies that are wrapped up in living ecosystems, as opposed to materials and energies lying dormant in underground reserves of fossil water, fuel, and nutrients.[xi]

In his classic history, The Development of American Agriculture, the agricultural economist Willard Cochrane wrote that this transformation required a host of external inputs into agriculture. He illustrates the scale of industrial inputs needed to replace the organic ones in this passage:

The petroleum industry, the tractor and farm machinery industry, the fertilizer industry, the pesticide industry, and the livestock feed industry had to develop the production plants and distributive organization – the infrastructure – to permit and facilitate the capital transformation on farms.

The consequences of this industrial relation between people and nature can be deferred so long as people are able to fill the rift with stuff mined from the earth. However, many worry about the looming limits to these resources: it is more and more common to hear the phrases peak oil, peak phosphorus, and peak water. Meanwhile, just as city planners discovered that the buildup of human sewage in cities created a crisis to parallel soil exhaustion in the countryside, contemporary environmental scientists are discovering parallel crises to dependence on limited supplies of fossil resources. The release of vast quantities of greenhouse gases through burning fossil fuels and the unchecked runoff of nitrogen and phosphorus compounds into marine ecosystems, for example, threaten to exceed a “safe operating space for humanity.”[xii] Thus, framing relations between people and nature through the lens of metabolic rift alerts us to the possibility that certain long-standing problems, while temporarily mitigated, may arise again with magnified consequences.

Cautions

Metabolic rift is an attractive frame in part because it effectively combines environmental, economic and moral values. Breaking soil nutrient, water, or energy cycles by importing mineral substitutes—fossil fuels, aquifers, mined phosphates, and so on—degrades ecosystems and inhibits the autonomous natural processes that provide manifold benefits to people (such as the salmon returning year after year, fattened from their time in the sea, to the same stretches of river to be harvested). People reap the surplus benefits from those ecosystem services, which form a foundation for all of our livelihoods. Metabolic rift thus also represents a break in the social and economic cycles that maintain and renew the means by which people produce goods and services. Lastly, the concept of metabolic rift is deeply infused with moral judgments about the right and the wrong way to go about making a livelihood. When 19th century observers spoke of “robbing the earth of its capital stock” or Howard called out “banditry” in the 1940s, they were pointing to the immorality of disrupting what were otherwise functional, elegant, and beneficial cycles. In other words, metabolic rift offers a powerful argument, drawing on both technical and moral reasoning, about what causes problems between people and nature and how to fix them.

It can be tempting to look back across history and read cases of metabolic rift as the parable of the goose that laid the golden egg.[xiii] In the (misguided) hope of speeding up agricultural metabolism and unleashing an even greater bounty, people broke the beneficial cycles through a reckless binge on fossil energy and mineral nutrients. Armed with hindsight and a contemporary awareness of global environmental crises, we might conclude that in the process modern industrial society has killed the golden goose.

Drawing this conclusion from the metabolic rift frame oversimplifies history, however, lending it a greater continuity and uniformity than close and careful examination can support. Nature, like people, is always changing, as is our relation to it. While the concept of metabolic rift powerfully reveals a number of interrelated problems and consequences at the nexus of ecosystems, economic production, and moral sensibility, it also tends to divide the world into binaries: traditional and modern, organic and industrial, closed and broken cycles, and so on. I think it’s important not to be too quick in condemning or too hasty in dismissing certain practices. The distinctions between organic and industrial, the natural and the mechanical, as Richard White makes clear in his book, are always blurred on close inspection. So are the lines between the rational or sensible and the mad or insane. I’ll close with his final observation on the collapse of the salmon cycle:

Each step of the process that led to this result was logical. It was only the result that was mad. Like many kinds of madness, this one looked quite sane from the inside. One thing followed quite understandably from another until both a kind of environmental insanity and a bitter social conflict were achieved.

 


[i] McNeill, J. R., & Winiwarter, V. (2004). Breaking the Sod: Humankind, History, and Soil. Science, 304(5677), 1627-1629.

[ii] White, R. (1995). The Organic Machine: The Remaking of the Columbia River. New York: Hill and Wang. p. 47.

[iii] This is a bit disingenuous, since sustainability actually doesn’t have a unique, universal definition—its meaning is constantly argued over and debated around the world. However, compare the sorts of metabolic cycles described here with the definition of sustainable development proposed by the 1987 Brundtland Commission of the United Nations in Our Common Future (one of the foundational texts for sustainability thinking): “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.”

[iv] White, p. 47.

[v] See for example, (1) Foster, J. B., & Magdoff, F. (2000). Liebig, Marx, and the Depletion of Soil Fertility: Relevance for Today’s Agriculture. In F. Magdoff, J. B. Foster & F. H. Buttel (Eds.), Hungry for Profit: The Agribusiness Threat to Farmers, Food, and the Environment. Monthly Review Press; or (2) Foster, J. B. (1999). Marx’s Theory of Metabolic Rift: Classical Foundations for Environmental Sociology. American Journal of Sociology, 105(2), 366-405.

[vi] Quoted in Foster, J. B. 1999. “Robbing the Earth of its Capital Stock”: An Introduction to George Waring’s Agricultural Features of the Census of the United States for 1850. Organization & Environment, Vol. 12 No. 3, 293-297.

[vii] “This enormous drain of these matters from the land to towns, has been going on for centuries, and is still going on year after year, without any part of the mineral elements thus removed from the land ever being restored to it… It is perfectly absurd to suppose that the loss of these matters… should have had no influence upon the amount of its produce.” Letters on Modern Agriculture. 1859.

[viii] Foster, J. B., & Magdoff, F. (2000) at note v.

[ix] Howard, A. 1947. The Soil and Health. The Devin-Adair Company: New York. Howard’s collected works are available publicly online at http://journeytoforever.org/farm_library.html.

[x] For more on this trend, see Deborah Fitzgerald’s Every Farm a Factory. 2003, Yale University Press.

[xi] Historian E. A. Wrigley, in Continuity, Chance and Change: The character of the industrial revolution in England, describes the transition from an organic to an industrial economy as the shift from wood and muscle to coal as the main sources of energy which people could utilize in making a livelihood. I merely expand on this observation by adding that people have also introduced other fossil fuels (which store ancient solar energy in the molecular bonds of hydrocarbons for millions of years underground), fossil water (which stores thousands of years of rainfall in underground aquifers), and fossil nutrients (underground deposits of plant nutrients such as phosphorus, nitrogen and potassium which have to be mined to be made available to living ecosystems).

[xii] “Since the Industrial Revolution, a new era has arisen, the Anthropocene, in which human actions have become the main driver of global environmental change. This could see human activities push the Earth system outside the stable environmental state of the Holocene, with consequences that are detrimental or even catastrophic for large parts of the world.” Rockström, J., Steffen, W., Noone, K., et al. 2009. A safe operating space for humanity. Nature. 461(7263): 472-475.

[xiii] Interestingly enough, I found that the Wikipedia article on this fable also references another fable, The Farmer, which is quoted as reading, “A farmer, bent on doubling the profits from his land, proceeded to set his soil a two-harvest demand. Too intent thus on profit, harm himself he must needs: Instead of corn, he now reaps corn-cockle and weeds.” Thus I am not the first to make the connection between metabolic rift and killing the golden goose!


Integrative Terminology and the “Artinatural”

As Patrick has suggested in previous posts, environmental and technological developments can be determined, to some extent, by the frames we use to understand the world. In this post I hope to address this issue by offering a new kind of frame that refuses to separate the world into simplistic, dualistic categories….

When I was a kid, sometimes my friends would accuse me of performing non-sequiturs or saying random things. To me, my leaps weren’t so random: I was always one to see the strange connections between seemingly separate subject matters. Bird watching and robots, for example… or candy and climate change. Anyway, I use the phrase “integrative terminology” to describe terms that try to make these once invisible connections more visible. The word “cyborg”, for example, is used to describe things that are part organic and part machine – it describes a literal amalgamation of once separate spheres. Professor Donna Haraway now uses it to describe other things that transcend these kinds of boundaries (and not just characters like Robocop or Iron Man).[i]

Another term used to talk about overlapping spheres is “hybrid”. The word now commonly refers to cars that use both electric and combustion motors in unison, but Professor Bruno Latour also uses it to talk about the interesting mixture of the categories of “nature” and “culture”.[ii]

I myself have adopted the word “artinatural” to describe things that are both artificial and natural at the same time.

Now let us turn to some more concrete examples so that we don’t wander off too far into the territory of the conceptual….

My dog Sonny is a half Boxer, half Golden Retriever mix – a very special and rare mix as far as I know. He is a special kind of animal beyond simply being a rare breed of dog, however. He is special partly because he exists due to the strange and dynamic human breeding techniques that led to those two specific breeds in the first place. Dog breeding is a very artificial thing – it involves lots of intentional planning and human social organization to achieve. There are even institutions involved in the process like the American Kennel Club. But would it make sense simply to call my dog “artificial”? He certainly is not just a human product – wild wolves are, indeed, his close relatives. No, I would argue that he is both artificial and natural, in other words, he is “artinatural”.


Once we understand Sonny as artinatural, we can also see that many things we once thought of as simply artificial, or simply natural, are in fact artinatural. A wooden desk, for example, has natural and artificial elements. A tree planted in the park likewise has the naturalness of its genetic lineage and the artificiality of its intentional placement in that park (assuming it was landscaped in). The same can be said for a plastic chair, whose petroleum material was pulled from the earth, processed and reformed into a chair shape. And a similar thing can even be said for a human thought itself, given that we human beings also come from, and are still part of, nature (i.e. the universe in its entirety, one of the many definitions of the word “nature”).

This is where integrative terminology and thinking can lead to some larger questions and answers. Is the artificial simply part of the natural? Or, is the “natural” an artificial concept that is then applied to the world by human beings? The answer to both questions may be “yes”, and beyond that, in these queries many fascinating insights and mysteries can be found.

Now let’s turn to how an integrative idea like artinatural could contribute to some key ecological and environmental concerns…. Firstly, to see the interconnectedness of spheres means that one can no longer imagine an entirely contained “artificial” place or object. For example, we have learned the hard way, through recent events like the BP/Gulf oil spill and the Fukushima nuclear disaster, that although things like crude oil and radioactive materials may temporarily be contained in human-made structures, those structures still exist in the natural world, and they are by no means permanently or completely sealed from it. These toxic releases clearly crossed the theoretical boundary between “artificial” and “natural”, showing that they had always been artinatural; because of this, other similar projects should be understood as risky and dangerous practices that can only be temporarily safe and contained. Even if the oil spill had not taken place, the oil itself would have been distributed and used, leading to increases in greenhouse gas emissions and other pollutants….

Global climate destabilization itself is an example of the artinatural: it is not just human societies emitting billions of tons of chemicals into the atmosphere, but also the interconnected processes that create the warming, the storms, and the rising sea levels. To not see these artinatural connections, and to not respond to them, would be disastrous.

On the other hand, recognizing the artinatural can also help us move toward positive, transformative goals. Because we no longer imagine the city as merely artificial, we can start to imagine more urban farms, edible gardens, rooftop gardens, green/ecological corridors, and decentralized energy production throughout and within our cities and neighborhoods. We already have cars and trucks and high-tech outdoor equipment in our wildernesses, but now we can also imagine wilderness within the city – more plants and animals integrated into the once strictly artificial places.

Last but not least, the ideas of wilderness and environment can be seen as artinatural themselves – understanding them in their historical and linguistic contexts can help to separate the negative aspects of these ideas from the positive. The idea of wilderness, as authors like William Cronon have pointed out, was applied in such a way as to imagine land without people – that somehow the human element would “contaminate” a once pristine nature.[iii] This kind of thinking is simplistic and wrong. Not all human interactions are destructive ones. The term environment, moreover, can make one imagine that the “environment” out there is but a thing, a pool of resources beyond oneself, when it is, in fact, something integrally connected to every human being through food, water, air and climate (among many other things). There is an artinatural way to have people and nature coexist peacefully and positively, and it involves, in part, understanding that some of the simplistic dualisms still widely in use are incorrect.

 


[i] Haraway, Donna Jeanne. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century”. Simians, Cyborgs and Women: The Reinvention of Nature. Routledge, 1991.

[ii] Latour, Bruno. We Have Never Been Modern (tr. by Catherine Porter), Harvard University Press, Cambridge Mass., USA, 1993.

[iii] Cronon, William, ed. Uncommon Ground: Rethinking the Human Place in Nature. New York: W. W. Norton & Co., 1995. pp. 69-90.


Undetermining Determinism

This week I discuss the notion of determinism, or “the doctrine that all events, including human choices and decisions, have sufficient causes.”[i] The post will cover how determinism works as a method of explanation, why this method of explanation is problematic, and finally several specific flavors of determinism that impact people and nature.

What does determinism mean for people and nature?

Discussing determinism means thinking about causes and effects. In particular, not just how we know causes and their effects, but also what causes and effects exist and whether or not we can know them at all. These considerations correspond to questions 5c-5e in my guide:

  1. What counts as a cause and what counts as an effect?
  2. What can be known about the problem? In particular, are there limits to knowledge? Are there limits to control? What is within human power and what is beyond?
  3. What counts as a “fact” or evidence? Who knows things?

We ask these questions when analyzing frames because, as STS scholar Sheila Jasanoff has written, “What we know about the world is intimately linked to our sense of what we can do about it.”[ii] Deterministic accounts sharply limit what may count as a cause, attributing causal power to structural factors—the physical environment, technology, genetic makeup, or social structures like the State or the Economy—rather than individual actions. In so doing, such accounts make history appear inevitable, as if what happened could not have been otherwise. This sentiment of inevitability deprives individual people of causal power,[iii] suppressing any hopes or ambitions we might have to change the course of history. Moreover, determinist accounts tend to focus narrowly on just one structural factor at the expense of others. This leads to one-sided explanations such as attributing the European conquest of the Americas solely to the “accident” of guns, germs and steel.[iv] In other words, determinism not only limits how we understand history and attribute praise and blame for past events, but also what powers we recognize in ourselves in the present and the sorts of possibilities we can imagine for the future.

Environmental Determinism

In this form of determinism, accidents of the environment—such as the distribution of resources like water[v]—determine history (see note iv). One historical example is the claim that climate determines race. In the late nineteenth century, it was commonly believed among white colonists in tropical places that the more “stimulating” northerly climate of Europe (and North America) had led whites to be naturally superior thinkers and moral leaders, while the “degenerating” tropical climates had naturally stunted the moral and intellectual growth of the people living there. This climatic determinism served to rationalize a deep-seated racism among white colonists and lend a pseudo-scientific pretext to the obvious brutality and oppression of colonialism. It also caused anxiety among whites that they might suffer the same “fate.” Of major concern to colonists in the Philippines, writes Warwick Anderson, was the question, “Would the white race degenerate and die off in a climate unnatural to it?” Ironically, the general sentiment was, as one prominent U.S. Army doctor put it at the time, that “the Anglo-Saxon branch of the Teutonic stock is severely handicapped by nature in the struggle to colonize the tropics.” [vi]

Technological Determinism

A frame based on technological determinism grants too much causal agency to technologies and discounts the agency of people and nature.[vii] This makes it difficult to imagine ways for people or nature to change the course of history, to re-route it from its current path.

Environmental historian Edmund Russell identifies two deterministic approaches to explaining the role of technology in relations between people and nature.[viii] The first he calls the deus ex machina approach, summarized as “science invents, technology applies, and man conforms;” the second he calls the “necessity is the mother of invention” approach.[ix] Both lead to accounts of people and nature that suffer from technological determinism.

Deus ex machina

We can see a deus ex machina approach in a commonly accepted narrative of climate change. In this telling, once people discovered that fossil fuels such as coal and oil are a plentiful source of energy, it was inevitable that technologies would develop to direct that energy toward all sorts of uses. From there, societies would inevitably adapt to incorporate these very effective fossil fuel technologies as fully as possible into their economic, political, and productive fabrics. Societal dependence on the gasoline-powered automobile, on coal and natural gas power plants, and on petrochemical inputs to industrial and agricultural processes was an unavoidable outcome of the fact that large reserves of oil, coal, and natural gas exist underground. From here, it is a short leap to conclude that climate change, though caused by humans burning fossil fuels, was nonetheless unavoidable (i.e. not our fault). This technological determinist account rationalizes the second framing of climate change that I identified in my post on Framing Environmental Problems. If using fossil fuels to the fullest was inevitable, then stopping climate change by scaling back how much fossil fuel people burn (the goal identified by the first frame) is not a viable solution. The only route left would be to go with the flow and hope that this technological rollercoaster will produce some solutions for adapting to the dangers of climate change.

Necessity is the mother of invention

The regression to deterministic accounts of people and nature is more subtle in the “necessity is the mother of invention” approach. “It enters,” writes Russell, “when we assume that technical choices are inevitable—technical criteria govern technical decisions, each step in design follows logically from the one before, and designers arrive at optimal solutions.”[x] This trap tends to open through an asymmetrical focus on technologies that have succeeded rather than those that have failed. By looking only at the successes, the development of technology along a certain path seems ever more self-evident, in other words, inevitable.[xi]

One such account could be told of synthetic fertilizers. The story might go that modern agriculture depends upon artificial sources of nitrogen (N) in order to counteract declining soil fertility from long-term farming. N is generally the limiting nutrient in agriculture (along with P and K, to a lesser extent), and over time repeated harvests pull that N out of the soil and ship it off to towns and cities for people to consume in the form of food. As the soil loses N, it must be replaced. Naturally occurring N-rich fertilizers (e.g. Peruvian guano or Chilean sodium nitrate) are in limited supply and must be shipped long distances. Therefore, it was only logical and rational that scientists should develop a means for fixing N from the atmosphere (which is about 78% N2) into a form readily usable by plants (NH3).[xii] Its widespread adoption is further evidence that synthetic fertilizer from industrially fixed N was an optimal technological innovation.

However, such an account would ignore several important factors in the development of synthetic N fertilizers. For example, it would ignore the history of how and why scientists developed a process for fixing atmospheric nitrogen. In 1909, the German chemist Fritz Haber successfully demonstrated a process for the synthesis of liquid ammonia (NH3) from the reaction of atmospheric nitrogen (N2) with hydrogen gas (H2). He teamed up with Carl Bosch, in the employ of the German chemical company BASF, to bring the process “to an industrial scale with a view to its economic application”.[xiii] By 1911 they had built an operational pilot plant, and by 1913, on the eve of WWI, they brought a full-scale manufacturing plant online. Ironically, Haber and Bosch were working not toward the goal of better fertilizer, but rather toward providing a ready supply of military-grade nitrate (NO3) for the manufacture of explosives. Germany’s primary source of NO3 was Chilean sodium nitrate, which had to be shipped at great expense across the Atlantic. This supply line was also highly vulnerable to disruption (for example by Great Britain’s famed Royal Navy), and the Haber-Bosch process allowed Germany to use its ample coal reserves and ordinary air to provide a near-infinite local supply of ammonia. Only somewhat coincidentally did the applications for agriculture become clear during the inter-war period. And it wasn’t until after WWII that the process began to be widely used to fix nitrogen for fertilizers: the US bomb-making industry realized that its now useless N-fixing capacity could provide a profitable solution to the shambles in which decades of war and depression had left world agricultural production and trade.[xiv]
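For reference, the overall stoichiometry of the synthesis Haber demonstrated is worth seeing on the page (the reaction and the operating conditions below are standard textbook figures, not details drawn from the sources cited here):

\[
\mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}
\]

The reaction is exothermic but far too slow at ordinary conditions, which is why the industrial process runs over an iron catalyst at temperatures of roughly 400-500°C and pressures on the order of 150-300 atmospheres. The resulting ammonia could then be oxidized to nitrate for explosives or, later, spread as fertilizer.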

Undetermining Dichotomies

The point of considering technological determinism is to recall that technologies shape and are shaped by social, environmental and personal factors. Technologies are not inevitable, “they might have been otherwise.”[xv] And that goes for any type of determinism: history might have developed otherwise, and it still may.

Determinism tends to reinforce a number of problematic dichotomies (black-and-white views of the world). I have already discussed briefly the presumed sharp divide between nature and culture, and we have here seen reference to structure and agency. In the future we will also cover complications arising from separating science and society, and the state and society. If we hope to overcome these sharp divisions, we must avoid narrowly deterministic accounts of the world that divide people and nature unquestioningly into powerful causes and powerless effects.

In the next couple of weeks we will cover some of the possible responses to determinism, including ways to get beyond these problematic dichotomies. We’ll start next week with a discussion of integrative terms from our first guest blogger, my friend and colleague Ted Grudin.



[i] Dictionary.com had the most concise definition I could find.

[ii] Jasanoff, Sheila. 2006. States of Knowledge: The Co-Production of Science and Social Order. Routledge. p. 14.

[iii] Determinism and free will do not play well together. Especially in the case of strong types of Newtonian determinism, “the existence of the strings of physical necessity, linked to far-past states of the world and determining our current every move, is what alarms us.” Hoefer, Carl. 2010. Causal Determinism. Stanford Encyclopedia of Philosophy. Online at: http://plato.stanford.edu/entries/determinism-causal/. This is also a good starting place for delving more deeply into the philosophy of determinism.

[iv] Referring to Jared Diamond’s (in)famous book, Guns, Germs and Steel: The Fates of Human Societies. Diamond has (rightly) received much criticism for the book’s apologist stance on European conquest. But he has also received criticism for his environmental determinist take on history, “Environment molds history.” I also like this entry from Barbara King on NPR’s 13.7 blog, “Why does Jared Diamond make anthropologists so mad?”

[v] e.g. Solomon, Steven. 2010. Water: The epic struggle for wealth, power and civilization. New York: Harper.

[vi] Anderson, Warwick. Colonial Pathologies. 2006. Duke University Press. Ch. 1.

[vii] For more detailed discussion on technological determinism, I suggest the edited volume, Does Technology Drive History? The dilemma of technological determinism. Ed. M. Smith and L. Marx. 1994. MIT Press.

[viii] Russell, Edmund. 2011. Evolutionary History: Uniting History and Biology to Understand Life on Earth. Cambridge University Press. pp. 139-142.

[ix] Ibid, p. 139.

[x] Ibid, p. 140.

[xi] “Preference for successful innovations seems to lead scholars to assume that the success of an artifact is an explanation of its subsequent development. Historians of technology often seem content to rely on the manifest success of the artifact as evidence that there is no further explanatory work to be done.” Trevor Pinch and Wiebe Bijker in The Social Construction of Technological Systems, Ed. Bijker, Hughes and Pinch. 1999 (1987). MIT Press. p. 22.

[xii] This article from Nature provides a good overview of the nitrogen cycle for reference.

[xiii] Carl Bosch, quoted in Paull, J. 2009. A century of synthetic fertilizer: 1909-2009. Journal of Bio-Dynamics Tasmania 94: 16-21.

[xiv] Smil, Vaclav. 2001. Enriching the Earth: Fritz Haber, Carl Bosch, and the Transformation of World Food Production. MIT Press.

[xv] Bijker and Law. 1997 (1994). Shaping Technology/Building Society. p. 3.


Excavating Environmental Frames

As promised, we are now going to talk about how to use environmental framing. This is going to be a two-part post. Part I will introduce the method in detail. Part II will walk through an example from current events step-by-step.

The Basic Approach

Recall from last post that people use frames to provide a mental model of how (part of) the world works and how it should work. Many assumptions, values, motivations, experiences, and much knowledge are wrapped up in these models. Revisiting all of those commitments takes time and energy, and people generally do not actively think about them very much. Hence frames rest on a foundation of assumptions that people rarely revisit. Analyzing frames reveals taken-for-granted assumptions so that we can better understand the role they play in public policy and discourse.

I like to approach frames as mental models that people use to make sense of and communicate problems. Thinking about frames in relation to environmental problems helps me keep my work grounded and connects abstract, academic theory to practical questions confronting society right now: What should be done about climate change? Is urban sprawl harming America? Do people eat too much meat? Should I buy more local food? Is fracking too lightly regulated? Has the public sector invested enough in solar and wind energy?

Seen through the lens of frames, each of these big questions rests on particular ways in which people define, address, and resolve problems. If we boil it down, for any given public problem we want to know three simple things:

  1. What is the problem?
  2. What options are available to deal with the problem?
  3. How will people know the problem has been solved?

Different actors will answer the questions differently. “Actor” is a general term in social science lingo for any person or organized group of persons (such as a government agency, a corporation, or a non-profit organization). Ideally, we want to survey the ways in which a variety of actors frame a given issue so that we get a sense of the range of possibilities.[i] Since the nuances matter in conducting such a survey, we should expand the three basic questions. To that end, I have been working on a guide to analyzing frames.

A Step-by-Step Guide

Here is a list of basic questions I am developing as a guide[ii] for systematically analyzing how actors frame environmental issues. The goal is to answer each question from the point of view of different actors engaged in the issue of concern. (After the list, I sketch one way the answers might be recorded for comparison across actors.)

Understanding the Actor in Context
  • To what audience is the actor speaking?
  • For whom does the actor speak?
  • Against whom is the actor arguing? It can help to compare arguments against one another to determine where the most relevant points of agreement and disagreement lie.
  • What resources does the actor have at their disposal? Resources might be monetary, social status, education, information, legal authority, coercive force, popular will, etc.
  • Does the actor occupy a position of authority? What is it, and with respect to whom?
Understanding the Problem
  • What problem does the actor identify? Try to summarize concisely (imagine a Twitter post) in your own words.
  • What caused the problem, and who is responsible?
  • Is it a collective problem or a problem for individuals?
  • At what scale does the problem exist? Is it local, regional, national, global?
  • Over what timeframe does the problem exist? Is it a short-term or long-term problem?
  • Is it political (i.e. we need to distribute power and resources differently)?
  • Is the problem isolated or interrelated with other problems?
Understanding the Goals
  • What are the stakes? For example, money, power, efficiency, justice, the public good, biodiversity, health, security, knowledge, etc.
  • Who stands to lose out? Who stands to benefit? And how?
  • Why does the actor care? i.e. do they have a financial stake? Is it their job? Are their friends, family, or community involved? Do they seek social status or prestige?
Understanding the Resolution
  • What tools are available to the actor or their audience to deal with the problem?
  • What options do those tools present for resolving the problem?
  • Who may participate in seeking resolution? Who is left out or excluded?
  • How does the actor or their audience know if they’re doing a good job or a bad job in resolving the problem? What are the criteria, or indicators, of success or failure?
Understanding the Big Picture
  • Where does the actor draw boundaries around the world? What scales matter? i.e. is this an individual problem, a local problem, a state problem, a national problem, an industry-specific problem, somebody else’s problem, etc.
  • What time-frame matters? The next fiscal quarter? The next year? The term limit of a political office? The length of a human generation? A lifetime? This century? Indefinite?
  • What counts as a cause and what counts as an effect? Think of the Dust Bowl example: in story 1, harsh nature is the cause of human suffering and opportunity for bravery while in story 3 human exploitation of the land is the cause of natural disaster.
  • What can be known about the problem? In particular, are there limits to knowledge? Are there limits to control? What is within human power and what is beyond?
  • What counts as a “fact” or evidence? Who knows things? i.e. peer-reviewed literature, unbiased experts, experienced practitioners, legal decisions, public opinion.
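To make the survey step concrete, here is a minimal sketch, in Python, of how one actor’s answers might be recorded in a structured form so that frames can be laid side by side. This is my own illustration rather than part of the guide: every field name and the example entry are hypothetical, and the entry loosely paraphrases the Bill McKibben excerpt quoted in the Framing Environmental Problems post further down this page.

```python
from dataclasses import dataclass, field

@dataclass
class FrameAnalysis:
    """One actor's answers to the guide's questions.

    Field names are hypothetical shorthands for the headings above;
    extend or rename them to suit the issue being analyzed.
    """
    actor: str
    # Understanding the Actor in Context
    audience: str = ""
    speaks_for: str = ""
    argues_against: str = ""
    resources: list[str] = field(default_factory=list)
    # Understanding the Problem
    problem_summary: str = ""   # tweet-length, in your own words
    causes: list[str] = field(default_factory=list)
    scale: str = ""             # local, regional, national, global
    timeframe: str = ""         # short-term or long-term
    # Understanding the Goals
    stakes: list[str] = field(default_factory=list)
    winners: list[str] = field(default_factory=list)
    losers: list[str] = field(default_factory=list)
    # Understanding the Resolution
    tools: list[str] = field(default_factory=list)
    success_criteria: list[str] = field(default_factory=list)

# A rough entry paraphrasing the McKibben excerpt quoted in the
# "Framing Environmental Problems" post below (illustrative only).
mckibben = FrameAnalysis(
    actor="Bill McKibben / 350.org",
    audience="climate activists and treaty negotiators",
    problem_summary="Burning fossil fuels has pushed CO2 past the 350 ppm boundary",
    causes=["fossil fuel combustion"],
    scale="global",
    timeframe="long-term",
    stakes=["a planet similar to the one on which civilization developed"],
    tools=["emissions reduction", "getting off coal by 2030"],
    success_criteria=["atmospheric CO2 back below 350 ppm"],
)
print(mckibben.problem_summary)
```

Filling out one such record per actor puts the answers in a common shape, which makes points of agreement and disagreement between frames easier to spot.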

Now that we have a set of tools, Part II will demonstrate how to use them through a concrete example. Next week: Hunger Frames!

 


[i] See last week’s post. In the face of an “overwhelmingly crowded and disordered chronological reality,” the complexity of things is so great that people only ever have a partial understanding. Frames help filter out a lot of the extraneous “noise” into a manageable subset of the whole, but important aspects are always and unavoidably lost in the process. Like in the parable of the blind men and the elephant, conflict often arises because people adhere very strongly to their incomplete knowledge of the world. However, cooperation and sharing notes with a humble acceptance for the partiality of any given frame can help form a more complete, if jumbled, vision. At the very least, we can try to survey all the different existing frames to put all the cards on the table and make sure we haven’t missed anything obvious. I’ll discuss in greater detail the link between analyzing frames and good democratic process in future posts.

[ii] I will make this guide available as a downloadable document soon.


Framing Environmental Problems

Last week, I implied that as the answer to what is nature? changes, so do the consequences. Today, I will explain how to make use of this point through the concept of environmental framing.[i]

Defining Environmental Framing

A formal definition for environmental framing can be difficult to grasp all at once, so let’s step back and explore a related and more familiar idea: stories. In a famous essay on story-telling in environmental history, William Cronon compares two opposing interpretations of the 1930s Dust Bowl.[ii] The first tells a heroic tale of determined settlers persevering in the face of a wrathful (and very dry) nature. The second tells a tragic tale of man-made disaster caused by settlers who failed to adapt properly to the unstable Great Plains environment. The important point here is that the lessons to be learned from each story differ as starkly as the stories themselves: the first urges more daring agricultural development in the American West, while the second urges caution and conservation. When making sense of historical events, Cronon explains, the way in which a story is told matters:

When we describe human activities within an ecosystem, we seem always to tell stories about them. Like all historians, we configure the events of the past into causal sequences—stories—that order and simplify those events to give them new meanings. We do so because narrative is the chief literary form that tries to find meaning in an overwhelmingly crowded and disordered chronological reality. When we choose a plot to order our environmental histories, we give them a unity that neither nature nor the past possesses so clearly. In so doing, we move beyond nature into the intensely human realm of value.[iii]

(Hi)stories carry power, but people also tell stories about what’s going on today, and even stories about what might happen in the future (an everyday example is when the meteorologist makes a 5-day forecast). The process of storytelling is the same: order events into causes and effects to simplify and give meaning to what we experience and observe. Again, the key point here is that giving events meaning brings them into the “intensely human realm of value” and makes claims about what people should do. To return to the main topic, we can think of environmental framing as a tool for critically analyzing the stories told about people and nature. This tool can help us see the connection between how people order knowledge of reality into causes and effects and how people seek to order social and environmental relationships.[iv] Environmental framing thus connects the knowledge of people and nature with the power to make changes in the world.[v]

Example: Climate Change

Now that we’ve covered the analytical purpose of environmental frames, I will demonstrate with the example of climate change. We’ll start with basics. Climate change results from an increase in the level of greenhouse gases (GHGs) in the atmosphere. The most notable has been CO2, a byproduct of burning fossil fuels. GHGs cause an atmospheric greenhouse effect that traps heat, originally from the sun, which would normally escape from the earth back into space.[vi] The extra heat warms the earth’s surface and lower atmosphere, causing a host of serious problems ranging from rising sea level to more extreme weather.[vii]
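For a rough quantitative handle on that warming (a standard simplified expression from the climate literature, Myhre et al. 1998, rather than anything from the posts quoted below), the additional radiative forcing from raising the atmospheric CO2 concentration from a baseline \(C_0\) to a level \(C\) is often approximated as:

\[
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
\]

Taking a preindustrial baseline of about 280 ppm and a concentration of 400 ppm, for instance, gives \(\Delta F \approx 5.35 \times \ln(400/280) \approx 1.9\ \mathrm{W\,m^{-2}}\) of extra heating, before any feedbacks.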

Increasingly, people view climate change as one of the most dire crises of our times. But even among the large majority who agree[viii] that climate change poses a pressing problem, there is wide variation in how this problem is framed. For the sake of example, I will only present two pieces of evidence. First is a 2009 blog post from climate activist Bill McKibben, a founder of the non-profit 350.org.[ix] Second is a 2010 feature article from The Economist.[x] Below I quote some of the relevant excerpts.

(1) Bill McKibben, in reference to a landmark paper in the journal Nature[xi] that proposed an atmospheric concentration of 350ppm CO2 (associated with a temperature rise of 2° C) as the upper boundary for “a safe operating space for humanity”:

[A]s a planet we’d need to get off coal by 2030 in order for the planet’s forests and oceans ever to bring atmospheric levels back down below 350—that’s the toughest economic and political challenge the earth has ever faced.

But it’s not as if we have a choice. The most useful thing about having a number is that it forces us to grow up, to realize that the negotiations that will happen later this fall in Copenhagen aren’t really about what we want to do, or what the Chinese want to do, or what Exxon Mobil wants to do. They’re about what physics and chemistry want to do: the physical world has set its bottom line at 350, and it’s not likely to budge. (emphasis added).[xii]

(2) The Economist, responding to the dismal prospect that “a plausible programme for keeping climate change in check” would result from another major international meeting:

Global action is not going to stop climate change. The world needs to look harder at how to live with it…

A 2009 review of the cost of warming to the global economy suggests that as much as two-thirds of the total cannot be offset through investment in adaptation… But adaptation can still achieve a lot…

The green pressure groups and politicians who have driven the debate on climate change have often been loth to see attention paid to adaptation, on the ground that the more people thought about it, the less motivated they would be to push ahead with emissions reduction. Talking about adaptation was for many years like farting at the dinner table, says an academic who has worked on adaptation over the past decade. Now that the world’s appetite for emissions reduction has been revealed to be chronically weak, putting people off dinner is less of a problem. (emphasis added). [xiii]

 

From these two short excerpts, two different ways of framing climate change emerge. In the first, humans have without a doubt overstepped the bounds of our biosphere by burning too many fossil fuels. Faced with the physical facts, the only option for our continued survival[xiv] is to scale back, way back, on industrial growth and development. In the second, while industrial development has led to costly problems related to the environment, economic growth cannot and should not be stopped. Only further innovation and development can provide solutions.[xv] The policy implications diverge greatly. One way points toward renewable energy, energy efficiency, and subsistence-oriented economies. The other toward big infrastructure, high-tech research, elaborate insurance schemes, and lots of capital investment.

I have grossly oversimplified the climate change frames for the purposes of example. Many nuances are in play, and there is plenty of room for compromise and even complementarity between mitigation and adaptation to climate change. Nonetheless, these two articles illustrate one core tension underlying all efforts to address climate change: the promise of development versus the risk of overstepping natural bounds.

In summary, environmental frames help us analyze how different interpretations of the relationship between people and nature are connected to different claims about what should be done. Now that we have discussed what environmental frames are used for, next week’s post will discuss in finer detail how to use environmental frames.

 


[i] I taught this concept last semester for my advisor, Alastair Iles. I owe much of this discussion to that experience.

[ii] Cronon, W. 1992. A place for stories: Nature, history, and narrative. The Journal of American History. 78(4): 1347-1376.

[iii] Ibid, 1349.

[iv] There is much more to be said on this point, and I will return to it in future posts on science and society, determinism, and co-production.

[v] The savvy reader will recognize that I am referencing Michel Foucault here. The canon of Foucaultian theory is too enormous to cite here, but The Foucault Reader, edited by Paul Rabinow, will do for my purposes.

[vi] NASA covers all of this information in detail here: http://climate.nasa.gov/causes.

[viii] I will not address the critics of climate change science here, as by and large they represent industries which profit enormously from the status quo. The reasons for which these critics frame climate change as a hoax or as a purely natural phenomenon linked to periodic solar cycles have been addressed extensively elsewhere, and are too obvious to be of much interest for the purposes of this discussion. The really interesting exercise is to identify the political implications that people do not wear on their sleeves when they frame environmental problems. This example will only brush the surface in that regard.

[ix] Bill McKibben, “The Science of 350, the Most Important Number on the Planet,” which lays out the mission statement for the climate action group 350.org. http://www.treehugger.com/corporate-responsibility/the-science-of-350-the-most-important-number-on-the-planet.html.

[x] “Facing the Consequences.” The Economist. November 25, 2010. http://www.economist.com/node/17572735.

[xi] I use McKibben’s blog post in part to avoid any possible pay-wall problems. Here’s the citation: Rockström, J., Steffen, W., Noone, K., et al. 2009. A safe operating space for humanity. Nature. 461(7263): 472-475.

[xii] McKibben

[xiii] The Economist

[xiv] McKibben quoting the abstract for the Nature article, “above 350 you couldn’t have a planet ‘similar to the one on which civilization developed and to which life on earth is adapted.'”

[xv] For example, see the passage that reads, “Economic development should see improvements in health care that will, in aggregate, swamp the specific infectious-disease threats associated with climate change.”


Why does thinking about nature matter?

In this post, we will explore two questions: What is nature? and, Why does this question matter?

Nature, and the adjective natural, are some of the most widely used words in the English language. Backpackers hike and camp in nature to escape the city and suburbs. Natural disasters strike in the form of tornadoes, earthquakes, and hurricanes. Loggers, miners, and prospectors extract natural resources. Conservationists protect natural areas and conserve nature. Natural scientists poke, prod and observe nature to know about the world. Orators base arguments upon what is natural or what is true by nature. Farmers both battle nature’s caprices and cultivate its fruits. Despite the near endless variety of uses for the concept of nature, all build off of three basic meanings[i]:

  1. The essential character or quality of a thing.
  2. The force which directs the physical world.
  3. The physical world itself.

Each meaning is related to and inseparable from the others. For example, consider the term wilderness[ii], which many people associate with nature. Wilderness is a physical place, “a wild or uncultivated region…uninhabited or inhabited only by wild animals” according to the dictionary definition. Wilderness is nature, in a physical sense, because it is a place where the forces of nature rule completely. Here, organisms, ecosystems, and biophysical processes are said to exist in their natural state because no human cultivation, settlement, extraction, or other use interferes. In summary, nature refers to the essence of things, the way they are and will tend to be if we don’t interfere. A natural process unfolds through things acting according to their essential character. Nature as a place encompasses a group of things acting together through natural processes.

This seems simple enough, so why is it important to ask what nature is? Because, through the implicit contrast of nature with people that is common to all three meanings, we can see a fourth meaning for nature: that which should be (and would be if we didn’t artificially interfere and muck up the works). However, we humans live and work in nature at the same time that we alter it to produce man-made things. This blurs the distinction between nature and artifice. At what point does the block of marble cease to be a natural deposit of sediments compressed by heat and pressure over millions of years and become Michelangelo’s David? This ambiguity means that using nature to draw a hard line between what should and should not be is more difficult than it might seem. Let’s think about another example.

References to nature are common in debates over the safety of genetically modified organisms (GMOs). Genetic engineering directly inserts genetic material from one organism into the DNA of a different organism, allowing for combinations that are not possible with conventional breeding techniques. Common GMOs include Bt cotton, the DNA of which has been augmented with a gene from the Bacillus thuringiensis bacterium that causes the plant to produce its own insecticide, and Roundup Ready Soybeans, the DNA of which has been augmented with a gene granting it resistance to the herbicide glyphosate (trade name Roundup).

Proponents of GMOs argue that genetic engineering is simply a new way of combining genetic material already found in nature. The only difference is that while breeding is limited to combining genetic material randomly from two organisms capable of sexual reproduction with one another, genetic modification can predictably combine genetic material from any organisms. GMOs are merely an extension and acceleration of natural genetic combination. Therefore genetic modification is no more unnatural or unsafe than any other practice in agriculture[iii].

Opponents of GMOs, on the other hand, argue that such crops are not natural at all, giving them names like “frankenfoods”[iv]. The naturalness of the process of combination, not the things combined, matters most. Even if the genes are natural, i.e. found in nature, the process of inserting a gene from a bacterium directly into a plant’s DNA is thoroughly artificial. Such a thing could not happen without people, they argue. Therefore genetic engineering is unnatural and should not happen. We have made plants like cotton and soy do things that are not part of their essential character, and we deviate from nature at our peril[v].

Representations of nature do not merely describe the world as it is. They also serve as a guidepost for imagining the world as it should and should not be. People on both sides of the GMO debate use different meanings of nature to mark the boundary between the safe and the dangerous. Western society tends to treat nature as a source of concrete, objective truth. However, as the GMO example shows, this guidepost is ambiguous in practice. It is important to ask what is nature, because the meaning can change with every use.

 


[i] Williams, Raymond. “Nature,” Keywords: a vocabulary of culture and society. New York: Oxford University Press. 1985.

[ii] See also, Cronon, William. “The trouble with wilderness: or, getting back to the wrong nature.” Environmental History 1.1 (1996): 7-28.

[iii] See, for example, the website of Monsanto, a key developer and patent owner of GMOs; http://www.monsanto.com/improvingagriculture/Pages/our-role.aspx and http://www.monsanto.com/products/Pages/biodirect-ag-biologicals.aspx both touch on the arguments I have paraphrased here.

[iv] For an in-depth discussion infused with the humanities, see also, Francois, Anne-Lise. “’O Happy Living Things’: Frankenfoods and the Bounds of Wordsworthian Natural Piety,” diacritics 33.2 (2005) 42-70. Online: http://muse.jhu.edu/journals/diacritics/v033/33.2francois.html.

[v] In the words of Prince Charles, a long-standing skeptic of GMOs, “manipulating nature is, at best, an uncertain business.” In Shiva, Vandana, Ed. Manifestos on the Future of Food and Seed. Cambridge, Mass.: South End Press. 2007. 26-27.
