The politics and economics of land reclamation, the process of creating new, habitable, and agriculturally profitable land from otherwise inhospitable spaces (usually seas, riverbeds, or disturbed land), are fraught with debates that turn on questions of habitat destruction, the aesthetics of preservation, and the practicalities of urbanization[1]. In the Canadian West, where coal mining and oil sands-based petroleum production are of significant national and global worth, the issue of land reclamation is of particular concern[2]. On March 27, 2012, the Premier of Alberta, Alison Redford, announced a $3-billion initiative to re-establish the Alberta Oil Sands Technology and Research Authority (AOSTRA)[3]. This initiative, designed to provide $150 million annually to technology and research of environmental importance, is an exemplar of the way in which the energy industry is implicated in the cultural, political, and economic fabric of Canadian life. AOSTRA was originally conceived under Peter Lougheed, the 10th Premier of Alberta, and was an important part of his political strategy to defeat the Social Credit party, which had maintained control for 36 years. Indeed, “one of the reasons for the Social Credit defeat was concern that Alberta was not receiving its fair share of oil and gas revenues” (Doern, 208). Importantly, Lougheed’s plan was not to place checks and balances against multinational energy companies to ensure royalties and public input, but rather to capitalize on speculative Canadian energy futures. Lougheed thus conceived of AOSTRA as a long-term, research-based initiative that would trade the promise of future independence and potential leadership in the energy game against the careful deliberations of his predecessor, Ernest Manning.
Manning had seen his role as “providing a stable political environment for the foreign-based [energy] industry” while Lougheed had put more stock in both small Canadian companies and, importantly, in the future (Doern, 209). While AOSTRA has gone through several transfers of leadership, funding, and ideological backing, its enduring characteristic is its close connection to Canadian culture, politics, and economics. In 2012 that characteristic is very much alive in the rhetoric of the current Premier of Alberta as well as in the machinations of Canada’s largest oil producers. With regard to Premier Redford, the announcement of funding for AOSTRA comes at a crucial political moment, as Alberta’s general election will occur on April 23, 2012. Where Lougheed successfully traded on a culture that was beginning to see the presence and federal acceptance of multinationals as an exploitative venture, Redford is attempting to capitalize on a culture seeking to become, or perhaps remain, independent and profitable in a global market[4].

This ongoing and historical discourse demonstrates the way in which energy is deeply enmeshed in Canadian politics, culture, and economic prosperity. It also demonstrates the way in which that discourse is figured as one of a speculative future. The speculative character of energy discussion is tied, perhaps more now than ever before, to a discourse of environmental responsibility. Alison Redford herself spoke of the ways in which AOSTRA, under a Progressive Conservative government, would continue to “[honor] and [preserve] the environment” (Redford in Makowichuk, 2012). This rhetoric is also found in the practices, promises, and profits of Canadian oil producers. This paper seeks to tease out that rhetoric by examining the public engagements through which Canadian oil companies promote themselves, by problematizing those engagements, and by offering both a rhetorical and a practice-based interjection into the issue. To this end, the paper will first examine the way in which Cenovus Energy, a Canadian oil producer operating in the Alberta oil sands, represents its reclamation practices and promissory futures. After a brief examination of secondary literature, the paper will turn to a recent article from Applied Soil Ecology to examine the ways in which a particular reclamation practice holds the promise of a novel mode of environmental engagement. Lastly, the paper ends on its own speculative note by implicating these rhetorical and practice-based strategies in wider philosophical debates about the ethics of a global ecosystem. The speculative and promissory futures found in the land reclamation practices of Canadian oil producers will be seen to approach questions of biodiversity and ecological complexity from a tenuous position. The promise of a discourse that engages these questions as means, rather than as ends, will be presented as addressing both the practical and philosophical demands of a shared biological world.

Oil Production and Land Reclamation in Canada

Oil production practices come with a devastating cost to the environment surrounding production sites. In many cases, this cost has become the central issue in debates over the intrusion of industry into otherwise natural places. In the Canadian West this is especially so, as the future of the provincial and national economies is tied to both the acquisition of research funding and the acquisition of public acceptance. The practice of land reclamation has consequently become central to such acquisitions: reclamation is now a governmentally mandated practice in Alberta as well as a means by which cultural anxieties surrounding environmental destruction can be quelled. Nowhere is this seen more clearly than in the public relations commercials and proclamations made by oil producers.

Cenovus Energy is perhaps the newest company on the Alberta oil sands scene but has developed a substantial production practice since its split from the Encana Corporation[5]. With gross revenue of almost $4 billion, Cenovus is quickly becoming a major name in the Alberta oil industry. What is particularly interesting about Cenovus is its carefully planned public image. Putting aside familiar points of corporate pride found on the Cenovus website, those that highlight the company’s “environmental leadership” or the way in which its operations are conducted “safely and responsibly”, Cenovus has also begun an image campaign, chiefly through the production and dissemination of commercials[6]. This paper will attend primarily to the collection of commercials found on Cenovus’ own YouTube channel[7]. The channel contains 15 individual commercials that are played on television and are aimed at helping the public “learn more about Cenovus, [their] operations and how [they are] doing things differently”[8]. At least three themes with relevance for this paper emerge most strongly from these commercials: environmental responsibility; wildlife and biological diversity; and Canadian innovation. The video titled “Environmental Compliance” is a short, 37-second piece about the importance of clean water both for the industrial operations Cenovus is engaged in and for the wildlife that share the area. The notion of a shared environment is central, as the video titled “Wildlife caught on camera” attests. That video is a time-lapse montage of “336,000 hours of wildlife monitoring” that hopes to provide the company with insights into “future developments at [their] field sites”. Both the respect for and engagement with the environment are highlighted and cast in terms of the innovative character of Cenovus.
The video “A Different oil sands” exemplifies this relation by connecting the recently implemented Steam-Assisted Gravity Drainage (SAGD) method for oil extraction to the pristine forests that are left less disturbed than they would otherwise be. One comment about the video prompted Cenovus to provide an explanation of the differences between its operations and traditional mining practices. The lack of a tailings pond, a pond dedicated to holding water used for bitumen extraction, is heralded as one of the defining characteristics of Cenovus’ innovative practices. It remains true, however, that building an industrial-scale operation, with its roads and land plots, still requires time-consuming reclamation efforts. In this regard, Cenovus is no different from any other oil producer required by the Government of Alberta to have reclamation practices in place[9].

Land reclamation, then, has become a necessary and integral part of an image of corporate responsibility, and is crucially linked to a familiar rhetoric of the speculative future, one that views that future as potentially suffering or benefiting from choices made here and now. While methods for accomplishing the reclamation of land vary widely, there are consistencies in the ideological foundation of the work:

Most of today’s practitioners understand that reclamation must be viewed and practiced through an ecological lens, which sees an integrated system that benefits from the study of individual parts but that can only be truly understood and successful when viewed as a whole. The complexity and the role of diversity in the reclaimed landscape have become more appreciated by practitioners and are being incorporated into landscape and reclamation design.

(Canadian Land Reclamation Association, 2007, emphasis added).

The Canadian Land Reclamation Association (CLRA) bills itself as the “only national organization dedicated to the reclamation and rehabilitation of disturbed lands and waterways”[10]. While research into the CLRA’s practices is limited here by time and funding constraints, it is enough to reproduce some of the material available online. Of note are the connections made in the CLRA’s “Strategic Plan” between its mission and the concerns of governments and publics, the self-imagining of Cenovus, and the economics of energy production. In the extended quote reproduced above, the CLRA highlights both an “ecological lens”, through which “reclamation must be viewed”, and the holistic goals of complexity and diversity. There are obvious connections between the self-imagining of Cenovus and the statements by the CLRA, but there are also important connections to the way in which government and public concern enter into reclamation discourse. The CLRA notes that from the very beginning reclamation was about “[making] it green” but has also, importantly, become about making it “productive” (CLRA, section 2.4). The notion of productivity resonates in a number of ways. In the first case, the land needs to be productive enough to be used for something in the future, whether an urbanizing project or further industrial work. The land also needs to produce a certain aesthetic. The CLRA describes a project it was involved in at the Westville Surface Coal Mining Site in Nova Scotia. The before and after images depict a semi-urbanized area in which a large blighted section of land, a product of coal mining, cuts through an otherwise pristine living area. The CLRA website notes that the reclamation done here was carried out in conjunction with the Town of Westville and will be an important location for residential and commercial development. Here, then, the notion of productivity takes on at least two senses.
First, an aesthetic is produced that is desired by the town, and second, the town is allowed to return to, or obtain, a level of productivity that it was robbed of by the depreciation of the land.

In both the case of Cenovus’ self-imagining and that of public and governmental interaction with land reclamation practices, the economics of the energy industry exceed the sales of crude oil or the reproduction of industry. Because of the public interaction with land reclamation, in residential, commercial, and cultural-political contexts, the promise of a green tomorrow is cut somewhat short by the demands of today. Large, multi-billion-dollar corporations trade on a future in which ecological complexity and biodiversity will be returned to an area, while politicians trade on a future in which national worth and independence are linked to energy concerns; in every case, the notion of a promised future is central. Treating the public as a passive actor here is intentional. What is lost to a public in these accounts and representations is a valuation of today, a concern for a lived, rather than promised, quality of life. A passive, even submissive public shares a crucial, marginalized role with the natural world in this story. Plants, animals, microbes, and all manner of life, save for those humans in control of the means of production, are cast, in practice, as similarly marginalized and forcibly submissive creatures. Any consideration for them, it would seem, comes as an afterthought. The remainder of this paper asks whether this need be so: whether there are practices or ways of being that remain deeply enmeshed in, or dependent on, a techno-industrial infrastructure while simultaneously reorienting practice and discourse to reflect a concern for otherwise marginalized actors.

Biodiversity, Ecological Complexity and Symbionts

The issue of marginalized biological actors has been treated by a number of scholars and writers with an interest in upsetting, reorienting, or otherwise troubling the categories and modes of subjectivity that persist in contemporary discourse. One particularly important author, for the purposes of this paper, is Hugh Raffles. Raffles is an anthropologist at The New School in New York City whose 2010 book Insectopedia explores the myriad ways in which human life is connected to, and in important ways dependent on, the lives of insects. In a brilliant chapter titled “The Sound of Global Warming”, Raffles invites his readers into a world of auditory strangeness located inside piñon pine trees (Raffles, 318-330). In this astonishing chapter Raffles examines the work of a sound engineer who has been recording the sounds produced by the pines and the insects that inhabit them. Raffles leverages this ordinarily inaudible “soundscape” to make an argument about global warming, bark beetle behavior, and human ethical responsibilities. The alarming rate at which piñon trees are being destroyed by the beetles has to do with the way in which “plants and insects have fallen out of step” due to the “higher temperature and lower rainfall” associated with global warming (Raffles, 325). Insects adapt much faster to these changes and as a result breed, spread, and consume at a rate that the trees cannot keep up with. Humans have tried, through the insights gained in chemical ecology, to quell the growing swarm of beetles that are felling trees at an astonishing rate, but have failed (Raffles, 326-327). Perhaps, suggest Raffles and his auditory cohort David Dunn, science has been looking in the wrong place; perhaps it is not pheromones that direct the signaling and behavior of the beetles, but auditory cues.
In that case, there is an entire world, a world that David Dunn has recorded into a symphony of bites, crunches, tunneling, creaks, and explosions of sap, to which one might attend. This world is unfamiliar to those looking to blame the beetles not only for the loss of acres of forest, but also, through the felling of trees, for a potential contribution to global warming itself. That is precisely the point. This is a case in which beetles are marginalized, their behaviors and intentions translated into disdain by human beings who are happy to ignore, or at least play down, their own role in the issue of global warming. Raffles’s case ends on what is, for this paper, a central point. If making the beetles the enemy of mankind has merely resulted in a misunderstood and ineffectual attack on their population, then “somehow, we will have to cohabit…we will have to make friends” (Raffles, 330). This is fundamentally an ethical call to action. Raffles is asking his readers if there is another way, if there is a way out of the us-them dichotomy that has left humanity at odds with nature. The question of exactly what kinds of practices could bring about a new relationship with insects, indeed with the entire natural world, is left unanswered.

Michael Pollan has written a book that is helpful in thinking about such practices. Pollan’s 2002 book The Botany of Desire takes a considerably different approach to the question of species interaction than Raffles but comes to a strikingly similar conclusion. Pollan’s book is organized around four human desires and the way in which particular plants have captured and used those desires to their own ends. Pollan’s strategy is rooted in a reading of co-evolution, traditionally that between the flower and the bee, that suggests the needs of both parties get met in a kind of “bargain…[in which] both parties act on each other to advance their individual interests but wind up trading favors” (Pollan, xiv). In the case of the bee and the flower, the flower uses the bee to transport its genes while the bee uses the flower for food. Is it not also the case, asks Pollan, that the same relationship holds between him and the potato he is planting in his garden? On this reading, plants and humans have a much more interesting history together than one might otherwise think. In the case of the apple, Pollan suggests that the apple successfully captured the human desire for sweetness while humans, through cultivation and transportation, spread the apple’s genetic material. It is not merely this insight that is important but the history that it reveals. In Pollan’s hands, an entirely new kind of apple, distinct from its wild Kazakh ancestors, and an entirely new kind of human, distinct from its European roots, emerged together on the American frontier[11]. To appreciate this sense of being in the world, one in which humans are no longer the sole agents driving history, is to be caught up in what Pollan calls the “reciprocal web that is life” (Pollan, xxv). In a move quite similar to Raffles, Pollan is challenging an entire dialectical discourse that continues to be at odds with the human experience of the world:

There’s the old heroic story, where Man is at war with Nature; the romantic version, where Man merges spiritually with nature (usually with some help from the pathetic fallacy), and, more recently, the environmental morality tale, in which Nature pays Man back for his transgressions, usually in the coin of disaster – three different narratives (at least), yet all of them share a premise we know to be false but can’t seem to shake: that we somehow stand outside, or apart from, nature.

(Pollan, 2002, p. xxv)

In many ways, these are the stories that get told in the political, corporate, and public examples found above. In the case of land reclamation practices the environmental morality tale stands out; in the case that Raffles investigates it is both the old heroic story and the morality tale; in all cases, and in league with Pollan and Raffles, there is a sense that the natural world, the insects, apples, wildlife, and perhaps even the public, get lost in a tangle of hegemony and the promise of producing a better outcome the next time around. This paper suggests that both Raffles and Pollan are looking for ways to be more attuned to the complicated and diverse set of connections that make up life. Raffles, with Dunn, uncovered a mode of auditory communication that may well help get concerned parties out of the heroic war with bark beetles. Pollan ends his chapter on apples by reflecting on the wild varieties he planted in his garden, a kind of moral gesture that allows a larger measure of apple-agency to infringe on his controlled landscape (Pollan, 58). These are not merely rhetorical games or conceptual apparatuses; these are practices. It is to a particular practice, then, that this paper now turns in the hope that, like Pollan’s wild apples and Raffles’s soundscapes, a new mode of engaging ethically with the world might emerge.

Alder-Frankia symbionts and Land Reclamation

A recently published article in Applied Soil Ecology, titled “Field Performance of alder-Frankia Symbionts for the Reclamation of Oil Sands Sites”, offers a potential starting point for thinking about a novel land reclamation process. Elisabeth Lefrançois et al. offer in their paper an introduction to the issues surrounding land reclamation practices, a detailed account of their materials and methods, a section on the quantitative results of the experiment, and a brief discussion of those results. This paper is concerned principally with the introductory remarks and the concluding discussion, and no attempt will be made to labor over the specifics of the results.

The introduction, which deals with several Canadian oil producers, is reminiscent of the concerns raised in the first part of this paper. That is, the authors situate their work and results within the same economic and political milieu in which Cenovus and the CLRA find themselves (Lefrançois et al. 2010, pp. 183-184). The impetus for the experimental setup was to “evaluate the ability of Frankia-inoculated alders to grow on reclamation sites containing tailings sand capped with mineral soil and peat according to current practices…and to characterize how alder establishment would impact soil quality and microbial communities” (Lefrançois et al. 2010, p. 184). Frankia is a nitrogen-fixing actinobacterium that shares a “symbiotic relationship” with alder trees (Alnus sp.). This relationship is important because it allows the trees to grow in otherwise inhospitable (nitrogen-deprived) environments like those found in tailings sands after oil extraction. The relationship between the bacterium and the tree is also a starting point for thinking about the ways in which biodiversity and the complexity of life are being given a wider role in land reclamation. The notion of symbiotic life recalls the way Pollan described his ideal relationship with the plants in his garden. Rather than conceiving of vegetation as an end goal, the Alder-Frankia symbionts position vegetation as the beginning of a reciprocal process reminiscent of the “cybernetic feedback loop” that Raffles wrote about in the case of global warming. While in Pollan’s case the relationship between human and plant was important, one should not be so quick to dismiss the same dynamic as having taken place here. One can think about the ways in which the energy industry, Canadian politics, public concerns, and corporate agendas provide the occasion for, but also rely on, the symbionts.
Additionally, one can attend to the ways in which complexity is an important and functional component of this practice; in an important way the Alder-Frankia hybrid defies the normal boundaries of organismic life. Alder-Frankia, here, is a manifestation par excellence of the call for holistic and complex thinking about ecosystems that the CLRA highlighted and that both Raffles and Pollan situated as crucial.

Beginning in 2000, the Lefrançois team prepared a site at Fort McMurray in Alberta with mineral soil and peat that would eventually serve as the experimental area. Alder seeds for the study were obtained through the National Tree Seed Centre in Fredericton, NB, and had originally been collected in the Fort McMurray area (Lefrançois et al. 2010, p. 184). The alder seeds were germinated in 2004 and were inoculated as seedlings. Planting began in 2005 with both an inoculated and a control plot, the two being separated by a buffer zone to protect the purity of the results (Lefrançois et al. 2010, p. 184). The experimental sampling took place in two phases, one in 2006 and another in 2007, after 1.5 and 2.5 years’ growth. One reason to endure the details of the preparatory work is to highlight several important aspects of the experimental procedure. First, given that the paper was published in 2010, it is worth noting that the Lefrançois team was required to care for the living Alder-Frankia symbionts for at least seven years. Subject-object debates run through any discourse that takes as one of its central topics the notion of action or agency. The notion of care here should be understood, once again, as a reciprocal notion. It is not just that the plants were objects cared for by scientific subjects, but that the symbionts themselves were participating in a process of caring for and cultivating land for growth. The symbionts, in this sense, care additionally for the publics that desire aesthetically pleasing neighborhoods, for an environmentally responsible national identity, and for the continued profitability of the land.

The results of the Lefrançois study demonstrated clearly that not only were Frankia-inoculated alders effective in “[improving] remediation capabilities and [enhancing] soil quality”, but that they were so effective that the symbionts could “be part of a reclamation strategy” (Lefrançois et al. 2010, p. 1). These results exceed a mere correlation, as the authors can confidently recommend their botanical and bacterial cohorts for inclusion in corporate, government, and non-profit strategies for reclamation. Crucially, these symbionts get to be part of something; they may well even be the most important part. In the final pages of the Lefrançois paper, the authors note that because of the symbionts the soil is now more hospitable to “more sensitive species” (Lefrançois et al. 2010, p. 190). The language of helping a more sensitive species is important for this paper as it prompts a rethinking of where human life falls on the very same question. The nitrogen-depleted area excludes more than just plants, after all. Indeed, the aesthetic, cultural, and productive wants and needs of Canada are held by a species too sensitive to thrive in a nitrogen-depleted, visually unappealing, and unproductive area. In this sense, the symbionts take on a level of subjectivity that escapes humanity; they too are productive, laboring forces that are biologically fit to toil away in places that humanity shuns. In the case of Alder-Frankia, human beings are a considerably sensitive species whose fate importantly rests, for a time, in the hands of a remarkably resilient one.

Rethinking land reclamation

With respect to the discussion above, the Alder-Frankia case has important implications for several aspects of the land reclamation discourse. First, the symbionts offer an important alternative to the promissory and speculative future of other land reclamation practices. Rather than trading on a better tomorrow, political and governmental bodies have the opportunity to celebrate and trade on a better today. The conceptual refiguring here is not just about language. This is a functional practice that works to reorient both the place of the human in the natural world as well as equalize the marginalizing aspects of corporate and government discourse. In terms of a promised future, the symbionts provide the occasion to live in a practiced present. Pollan and Raffles both end their work asking about where to go and what to do. While Pollan perhaps takes on a practice of his own, he is still silent about how his efforts in his garden might counteract the pervasive and detrimental loss of genetic variation among apples. In terms of a call for a practical ethics, the Alder-Frankia symbionts provide an irresistibly interesting occasion for just such a move.

It might well be argued that, regardless of the immediacy of this symbiotic and conceptual practice, the concerns of a public dissatisfied with the aesthetics of an area, the needs of both the energy industry and the Canadian economy, and the perpetual damage done by that industry all remain. This is certainly true. The aim of this paper is not to provide a quick fix to any of these problems, but rather to challenge the existing rhetoric on the grounds that it multiplies those problems. For example, rather than deploying and maintaining a rhetorical and practical apparatus that demands patience and submission from a concerned public, the Alder-Frankia case holds the promise of immediate and future growth as well as participation in reciprocal, interspecies cooperation. It might also be argued that, in the end, the symbionts are being used, deployed, cut up, and examined by scientists. Similarly, it could be argued that lands reclaimed, full of vegetation and life, even in the most ideal circumstances, are often fated to suffer at the hands of, if not industry, at least urbanization. In response, it should be remembered that the conceptual work that Alder-Frankia is doing here is about creating more equilibrium. The realities of modern life necessitate, at least for the time being, a reliance on oil production. In Canada, the oil industry is also a crucial part of the national economy, as well as a structural component of political and cultural life. Within the current reclamation paradigm, little thought is given to issues of non-human agency, the marginalization of non-humans or public concern, or ecological interrelatedness. This paper seeks to raise questions about these issues, rather than to provide determinate answers.

Thinking forward

During research for this paper, it became abundantly clear that very little historical material exists on the history of land reclamation. There are, it is true, many accounts of individual reclamation projects. Of these, Robert Sauder’s The Yuma Reclamation Project (2009) is among the best. Including both a historical account of the way in which the project was conceived, as well as an analysis of the ways in which the social and cultural fabric of the American West rubbed up against the economic and political landscape at the turn of the century, Sauder’s is a model to be emulated. Within the Canadian context, however, research produced no equal. Several accounts of individual projects exist, to be sure, but they are cast mostly in terms of the administrative, practical, and logistical necessities of the projects, rather than their wider socio-cultural implications[12]. If a 1980 bibliographic document produced by the Canadian Lands Directorate is any indication, Canadian scholarship is indeed lacking in just such a manner[13]. This paper, then, hopes to open up a space in which to talk about not only the practices that underwrite Canadian land reclamation, but also the epistemological grounds upon which those practices rest. The case of Alder-Frankia provides just such a space. Beginning with the way in which oil production and land reclamation are figured as simultaneously redemptive and destructive processes in Canadian politics and business, one can begin to appreciate the gravity of debates that approach the topic. A close examination of the way in which reclamation discourse is deployed by corporate and non-profit organizations and by political actors, and of the way in which reclamation impacts or inspires the public, raises both practical and philosophical questions about reclamation practice, the relationship between humans and nature, and the political implications of such practices and relationships.
This paper hopes to suggest, with these themes in mind, that a political discourse rooted in a speculative future is at odds not only with its public, but also with the natural world. The laborious work of Alder-Frankia at Fort McMurray suggests that there is not only an alternative discourse rooted in the present, but also one that foregrounds the fact that humans live in a shared ecological environment.


BP p.l.c. (2011) BP Statistical Review of World Energy, June 2011. Retrieved March 5, 2012.

Doern, B. (2005) Canadian Energy Policy and the Struggle for Sustainable Development. University of Toronto Press

Environmental Design and Management Limited (2007) Canadian Land Reclamation Association (CLRA) Five-Year Strategic Plan, September 2007. Retrieved March 7, 2012.

Lefrançois, E. et al. “Field performance of alder-Frankia symbionts for the reclamation of oil sands sites”, Applied Soil Ecology. Vol. 46 (2), 2010: pp. 183-191

Makowichuk, D. “Premier announces $3-billion oilsands initiative”. Calgary Sun, March 27, 2012. Online: accessed April 4, 2012

Marshall, I.B. (1980) The ecology and reclamation of lands disturbed by mining: a selected bibliography of Canadian references. Ottawa: Lands Directorate

Pollan, M. (2002) The Botany of Desire: A Plant’s-Eye View of the World. New York: Random House

Raffles, H. (2010). Insectopedia. New York: Vintage


Paikin, S. (Interviewer) “Alison Redford: Canadian Energy Strategy”. The Agenda with Steve Paikin, TVO. Online; accessed March 20, 2012.

Stone, K. for Natural Resources Canada (NRC), APP 5th Coal Mining Task Force Meeting, Las Vegas, USA. Overview of Canada’s Coal Sector, September 2008. Retrieved March 7, 2012.

West, R.C. (1995) Plot trials to assess the potential of using pulpmill wood waste in land reclamation. Fredericton: Canadian Forest Services

Woynillowicz, D., Dyer, S. (2008) Fact or Fiction: Oil sands recovery. Pembina Institute for Appropriate Development

[1] See “Five-year Strategic Plan”, Canadian Land Reclamation Association 2007.

[2] See “BP Statistical Review of World Energy” where Canada ranks 6th on the world list of oil production and 13th on the world list of coal production. For the primacy of the Canadian west in coal and oil production within the nation, see the Canadian Land Reclamation Association’s (CLRA) “Five-year Strategic Plan”  2007, p. 5 or the Natural Resources Canada (NRC) “Overview of Canada’s Coal Sector”.

[3] Makowichuk, D. “Premier announces $3-billion oilsands initiative”. Calgary Sun, March 27, 2012. Online: accessed April 4, 2012

[4] Paikin, S. (Interviewer) “Alison Redford: Canadian Energy Strategy” The Agenda on TVO with Steve Paikin. Online at accessed March 20, 2012.





[9] See Alberta Regulation 115/93 in the Environmental Protection and Enhancement Act for the Conservation and Reclamation Regulation accessed on March 27, 2012.


[11] See Pollan, chapter 1.

[12] See Woynillowicz, D., Dyer, S. (2008) Fact or Fiction: Oil sands recovery. Pembina Institute for Appropriate Development, which provides an overview of the totality of loss, the processes of government policy in obtaining reclamation certifications, and an analysis of risk, for an example of the administrative and logistical practices of land reclamation in Canada. See also West, R.C. (1995) Plot trials to assess the potential of using pulpmill wood waste in land reclamation. Fredericton: Canadian Forest Services, for an example of a practical discourse concerning land reclamation in Canada. These examples are meant to be representative rather than exhaustive.

[13] See Marshall, I.B. (1980) The ecology and reclamation of lands disturbed by mining: a selected bibliography of Canadian references. Ottawa: Lands Directorate.


Discourse in biomedical history is, as is the case with most historical scholarship, full of disagreement. Actors, objects, and practices will not sit still[1]. In the history of psychiatry, or of psychology more broadly, psychoanalysis is just such a historical object. Indeed, even this conflation of psychiatry and psychology represents a historical categorization that is up for debate[2]. Taken up in countless historical accounts, psychoanalysis has become increasingly complex. Simultaneously an immature precursor to experimental and biological psychology, an important addition to that psychology, and an indistinguishable part of it, psychoanalysis will not sit still. This brief literature review works to explore several representations of psychoanalysis in order to come to terms with this complex historiographic object and to contribute to a growing literature dissatisfied with dichotomous descriptions in the history of biomedicine[3].


This literature review has three specific goals. First, an argument about the historiographic place of psychoanalysis will be cast in terms of a biomedical context. The purpose of this demarcation is to situate the topic in relation to relevant literature, to illustrate the grounds for the perceived tension between biological and social discourse in that literature, and to provide for a historiographic strategy that seeks to relieve that tension. The second part of this paper will formally review several distinct but interrelated historical representations of psychoanalysis. The final part of this paper concerns a historiographic alternative to conceptualizing psychoanalysis, and perhaps many other biomedical topics, as marked, roughly speaking, by a bio-social divide. Drawing on work in the history of science and medicine, an argument for complexity as a historiographic approach to biomedical representations will be made[4]. It is worth noting that this review makes no claim to establishing the “right” way to do history. Instead, in reflecting upon the inherent tensions that a comparative literature review brings to the fore, this review offers one conceptual tool that might assist in coming to terms with that tension. History always engages a moving target; as much as possible, it is the job of scholars to attend to those movements without stifling them.


I. The Biomedical Context


It is crucial for this review to make the claim that the historiographic placement of psychoanalysis is a biomedical topic. It is important not only because it sets the foundation for the historiographic tensions to be discussed below, but also because it implicates scholarly writing itself within those tensions. It is important here to be clear about these claims. What is being suggested is not that psychoanalysis, as a therapeutic technique or ideological commitment, is itself a biomedical topic, though it may well be. The claim, instead, is that historiographic positioning, the methodological and conceptual apparatuses that structure historical writings, operates, in the case of psychoanalysis, in a biomedical context. Subsequently, this review is not interested in picking apart the specificities of psychoanalytic theory, but in addressing the ways that theory is represented, used, or obviated to achieve particular effects. The historiography of psychoanalysis is biomedical. But what does this mean?


It may be helpful to outline first what is meant by the term biomedical. What is not meant by the term is the mere appropriation of biological or laboratory-based science to medical practice. One reason that this is a problematic view of biomedicine is that it represents the relationship between the laboratory and clinic as a chain of biologically determined truths about clinical practice. This definitional error is the product of what Annemarie Mol calls the “prescriptive…[nature of]…epistemological normativity” (Mol, 2002: 6). The notion that the laboratory determines, for the clinic, how to genuinely know its objects is a mistaken one brought about by this epistemological commitment. Mol asks an important question for biomedical scholars: what happens to this lab/clinic relationship, and especially to the objects of knowledge that bound that relationship, when practice, rather than epistemology, becomes the focus of attention? Mol’s well-supported answer is that reality “multiplies” (Mol, 5). But the ontological status of a biomedical reality is not the chief concern of this paper. What is of crucial importance, in Mol’s study, is the realization that “if reality doesn’t precede practices but is a part of them, it cannot itself be the standard by which practices are assessed” (Mol, 6). A pair of examples will illustrate the point. In Mol’s anonymous Dutch “Hospital Z”, atherosclerosis is enacted in different ways at different sites. In discussing the differences between “clinical atherosclerosis and pathological atherosclerosis”, Mol points out that the two ways of “doing” the disease reorient the way we conceive of the lab/clinic relationship (Mol, 35). The important takeaway from her example is that the pathological determinations of atherosclerosis do not “underlay” the clinically assessed legs that feel pain; instead, out of necessity, they come after them (Mol, 37). 
As Mol contends, “in practice, if pathology has anything to do with atherosclerosis at all it is not as a foundation, but as an afterthought” (Mol, 37). But it is not necessary to maintain this unidirectional description either; the issue of epidemiological atherosclerosis is instructive. In considering the way in which populations are afflicted by atherosclerosis, Mol uncovers a way in which, rather than determining primacy, the lab and the clinic enact a population together (Mol, 127-133). In clinical diagnostic files, all the myriad problems of a patient “come together” when determining the need for an angiogram; other diseases, employment issues, and home care issues are all noted down (Mol, 127). These files are part of the numeric admissions count kept by the hospital administration, and they also serve the double function of providing information for the study of the epidemiology of atherosclerosis (Mol, 128). This admission count, however, is dependent on the “current state of diagnostic technology”; the angiogram counts where the non-invasive duplex scan does not (Mol, 129). In addition, the admission counts themselves act as ways of determining a mean, of averaging things like the normal or acceptable level of cholesterol (Mol, 131). Here it is, then: the individual in the clinic being translated into the lab, moving into the population, and returning to structure the lab. Once complete, the cycle can start over, and new averages and new admission counts will structure another clinical encounter. What is important for this paper is that in practice the clinic is not subsumed by, nor does it subsume, the lab; they are constitutive of one another. This is good reason to avoid this problematic reading of “biomedicine”. Instead of subsumption, of synthetic or syncretic combinations[5], biomedicine becomes a kind of hybrid. Indeed, Kenton Kroker, a scholar in the history of biomedicine, concludes just this about the seemingly “self-evident” word (Kroker, 2077). 
Instead of “the application of biological principles and practices to medicine” or “the use of pathology to investigate the normal parameters of life”, biomedicine has restructured the relationship between the clinic and the laboratory to be a “hybrid practice that takes the merger of the clinic and laboratory as the engine of innovation” (Kroker, 2077).


With regard to the historiography of psychoanalysis, then, in what way can it be said to be a biomedical topic? There are at least two answers to this question. In the first place, the debates over the historical place of psychoanalysis in relation to experimental or biological psychology are very much rooted in a lab/clinic dialogue. As will be shown below, there is relevant literature that takes either side, or a relevant variant thereof, to be the defining characteristic of psychological investigation. Psychoanalysis is subsequently subsumed by (most often), or subsuming of, experimental psychology. The second way in which the historiographic debate over psychoanalysis can be conceived of as biomedical is found in the notion of hybridization that Kroker indicated. Indeed, as will be shown below, several scholars are deeply interested in the ways in which the biological and the social, the laboratory and the clinic, and indeed the psychoanalytic and the psychological, are mutually constitutive of dispute and innovation.


II. Representing Psychoanalysis


The single most prominent trend among the texts under review here was the notion of warring sides. Under this historiographic regime, psychoanalysis is seen to be at either theoretical or practical odds with biological or experimental psychology/psychiatry. Of these, Edward Shorter’s A History of Psychiatry (1997) is perhaps one of the best known contemporary examples. Shorter is a social historian of medicine working out of the University of Toronto in Canada. Shorter has published widely in the field, addressing the histories of obstetrics and gynecology, the doctor-patient relationship, psychosomatic medicine, and psychiatry and psychopharmacology[6]. His magnificently written history of psychiatry is a considerable achievement as it condenses nearly two hundred years of relevant history into one book. One of the most praiseworthy aspects of this text is its attention to the personalities and personal commitments of its actors. For this paper, though, the most interesting aspect is in the way Shorter characterizes psychoanalysis’ relationship with psychiatry.


The history of the rise of psychoanalysis, for Shorter, is a story of conflict and of the infusion of patient-sensitive treatments into mainstream culture. This is true of the European context as well, but Shorter deals overwhelmingly with the American scene. With regard to conflict, the battle lines are drawn on multiple fronts: professional concern over the foundations of Freud’s theories (sexual causes in particular) on one hand, and the shift in psychiatric focus from those maladies requiring institutional care to those that could be said to be everyday problems of the mind (psychosis and neurosis respectively) on the other (Shorter, 155-156). Along with the contentious nature of early psychoanalysis and its appeal to the everyday patient, whose concerns over anxiety, depression, and the like were well met by psychoanalytic sensitivity and the potential for private practice, psychiatry was undergoing a tremendous overhaul. The first aspect of historiographic note is the divisional work Shorter accomplishes with his sectional headings in the book. The majority of Shorter’s historical work on psychoanalysis is done in a section titled “The Psychoanalytic Hiatus” (Shorter, 145-189, emphasis added). Flanked, chronologically, on either side by nerves as well as therapeutic alternatives, psychoanalysis takes on a fleeting, if not somewhat insignificant, role on Shorter’s account. This is especially telling in the case of the chapter on alternatives. In the opening of this chapter Shorter describes the situation:


In the first half of the twentieth century, psychiatry was caught in a dilemma. On the one hand, psychiatrists could warehouse their patients in vast bins in the hopes that they might recover spontaneously. On the other, they had psychoanalysis, a therapy suitable for the needs of wealthy people desiring self-insight, but not for real psychiatric illness. (Shorter, 190, emphasis added).

This is the takeaway from a history spanning nearly fifty years: psychoanalysis was for a brief time at war with biological approaches to the mind, and that war was won by the proponents of the biological approach. Indeed, Shorter says exactly this when he states that “psychoanalysis appears as a pause in the evolution of biological approaches to brain and mind” (Shorter, 145). It is important here to note that it is not only that the biological and psychoanalytic sides were warring, but that there were winners and losers. On Shorter’s account, it is not the case that the war brought about a synthesis or a view of the common ground of inquiry into the mind; biological approaches merely paused for a moment, and were then taken back up. Indeed, after the rise of this second biological psychiatry Shorter dedicates a small section of his chapter entitled “From Freud to Prozac” to the “Decline of Psychoanalysis” (Shorter, 305-313). In this section there are at least two points worth mentioning. First, psychoanalysis is made interchangeable, with respect to its success, with “alternative psychotherapies”; “it didn’t seem to matter whether one chose interpersonal therapy, group therapy, psychodrama, hypnosis, or narcosynthesis…the patient’s chances of recovery were mostly the same” (Shorter, 306). In support of this claim Shorter cites the American Psychiatric Association’s (APA) commission on psychiatric therapies from 1984 (Shorter, 413, notes). Below we will have occasion to trouble both this account and the evidence for it. Here it is enough to identify the loss, not only of the psychoanalytic camp, but of all alternative psychotherapies, to biological psychiatry. The second point worth mentioning is the “loss of confidence…medicine” experienced with regard to psychoanalysis from the late 1970s to the late 1980s (Shorter, 311-312). 
Shorter’s claim here, then, is that during this period psychoanalysis was not only “demedicalized” but also largely brushed off by the medical world. Again, below, a contrasting discussion will be brought to bear on this account. Historiographically, Shorter participates in the theme of warring sides for which there is a winner and a loser; the combatants were biological medicine on the one hand and psychoanalysis on the other, with psychoanalysis suffering a definitive and disconnecting loss.


This is not the only historical story that can be told of the “war” between psychoanalytic and biological approaches. Gail Hornstein, a professor of psychology at Mount Holyoke College in Massachusetts, USA, has written on the history of psychology, psychiatry, and psychoanalysis. Hornstein is an interesting scholar to put next to Shorter, as she was trained as a “personality/social psychologist” rather than a historian[7]. The curious part about that academic lineage, then, is her tempered and inclusive reading of the history of psychoanalysis. Indeed, while Hornstein depicts psychoanalysis and biological psychology as warring sides, she comes to a considerably different conclusion than Shorter.


In her 1992 article “The Return of the Repressed: Psychology’s Problematic Relationship with Psychoanalysis”, published in American Psychologist, Hornstein chronicles the development of the battle between these two sides and localizes the terms of that battle in a deep epistemological aversion to subjectivity on the side of biological psychology. Hornstein’s historical account begins with a discussion about the nature of the differences between psychologists and psychoanalysts. For Hornstein, the principal difference was in the way science was demarcated from non-science; for analysts “science had nothing to do with method”, while for psychologists method, and especially their “bulky equipment and piles of charts and graphs”, was the defining characteristic of science (Hornstein, 254-255). Importantly, then, psychoanalysts and psychologists (of the biological variety) were at odds. Analysts concerned themselves with getting at the reality of the mind, at what was “true” (Hornstein, 255). The terms of the debate changed when psychoanalytic thought and practice began to gain public attention, and indeed became the object of such cultural interest that professionally and academically the authority of psychologists was perceived to be under threat (Hornstein, 254, 255). It would not be long before all-out “skirmishes” would develop; criticisms flew around on both sides, but the central tension, Hornstein contends, was between subjective and objective methodologies (Hornstein, 255, 256). In both the inherent secrecy of the psychoanalyst’s office and the seemingly interpretive flexibility of their theory, psychologists found their enemies deplorable, even more so as they claimed the status of scientific knowledge (Hornstein, 256). A major crisis developed when a prominent experimental psychologist, Edwin Garrigues Boring, entered into psychoanalytic therapy; others would soon follow. 
The reports that these sessions generated provoked some initial interest, but it was ultimately the popularity of psychoanalysis in the 1940s that caused the situation to “[reach] a critical stage” (Hornstein, 258). The psychological response was to subject psychoanalysis to experimental inquiry. Throughout the 1940s and 1950s this kind of work was immensely popular, but tended to yield less than certain results. As Robert Sears, a researcher charged with sifting through this rapidly growing corpus of work, concluded, the reports suggested that, in the case of experimentally tested Freudian concepts, “some of it was, and some of it was not…valid” (Hornstein, 258). This historical reconstruction stands in direct opposition, both chronologically and ideologically, to Shorter’s citation of the singular APA commission noted above.


Ultimately, psychological researchers considered the point moot, so long as psychoanalysis was being subjected to experimental testing. In league with these tests, many psychological “behavioralists” settled for simply “co-opting” Freudian terms into “mainstream psychology” (Hornstein, 259). This strategy was bolstered by the production of textbooks; the content of psychoanalysis was largely ignored and the concepts merely translated into the more appropriate psychological jargon fit for novice consumption (Hornstein, 260). The strategy backfired; the repressed subjectivity was allowed, by the mere acceptance and co-option of terms, to creep back into the science of the mind. As Hornstein notes, “even in scientific disguise, [psychoanalytic terms] were still dangerous” (Hornstein, 260). The takeaway from Hornstein’s story is one in which the entire discipline of psychology, through its interaction with psychoanalysis, was forced to occupy a more and more specific, and especially defended, scientific territory. The casual critiques, the experimental testing, and the co-opting of terms all served only to delay the irresistible character of psychoanalysis from permeating every space occupied by psychologists. In the end, “instead of concentrating on…criticism, [psychologists] identified those parts of the [analytic] theory that were…useful to their own ends and incorporated them” (Hornstein, 261). Indeed, the critical discourse and continual push, both professional and popular, of psychoanalysis resulted in an awareness of some shared values. Of these, Hornstein identifies “a commitment to psychic determinism, a belief in the cardinal importance of childhood experience, and an optimistic outlook about the possibility of change” (Hornstein, 261).


For Hornstein, unlike for Shorter, psychology was very intimately concerned with the perceived threat from the psychoanalytic camp. Further, far from psychoanalysis being a mere hiatus that would eventually be done away with, Hornstein contends that in the forced reconfiguration of their discipline “psychology itself has benefited from having had the psychoanalytic wolf at its door” (Hornstein, 261). The accounts share a fundamental belief that there were tensions, but the contrast in conclusions is striking. This is especially so considering the disciplinary leanings of the two authors. Where Shorter sees a fleeting annoyance, Hornstein sees a fundamental contribution to the strength of biological psychology. In this way, Hornstein represents a view of warring sides that resulted not in winning and losing, but in adaptation and methodological flexibility.


One might well ask about the nationalist character of these accounts. Indeed, both Shorter and Hornstein, in their readings of psychoanalysis, rely primarily on the American context. One interesting contrast with Shorter, which serves to weaken the outright dismissal and loss of the psychoanalytic camp, is found in Erika Dyck’s 2008 book, Psychedelic Psychiatry. Dyck is primarily concerned with examining the brief rush of interest in LSD as a therapeutic tool, followed by its cultural, and then medical, decline. The details of her entire book are not wholly relevant to this paper. What is of particular importance is that her research concerns the Canadian psychiatric context and suggests a reading of the place of psychoanalysis somewhat similar to that of Hornstein. While Dyck certainly recognizes the triumph of drug therapies and psychopharmacological treatments over psychoanalytic work, her examination of Abram Hoffer and Humphry Osmond, the two historical figures around whom her historical work revolves, reveals something different occurring than the winning or losing of a battle. Indeed, in the 1950s Hoffer and Osmond were promoting their research on LSD to various professionals and institutions in which “psychoanalytic and psychosomatic approaches…carried significant professional currency” (Dyck, 45, emphasis added). The important point from the perspective of this paper is that Hoffer and Osmond considered the psychoanalytic sympathies toward “personal experience in therapy” a point of departure that could be “improved upon” (Dyck, 45). While it is important to note that Hoffer and Osmond did come to distance themselves from the psychoanalytic influence (Dyck, 46), their initial reaction was not one of hostility, but of shared beliefs and reconciliation. In this example, drug therapy, assuredly located within a biological paradigm, was in practice seen to hold the potential for producing positive adaptation in psychoanalytic practice.


One can do better than merely shift one’s gaze north, however. Andrew Lakoff, in his 2005 publication Pharmaceutical Reason, allows a more international character to enter the historical scene. In this stunningly brilliant ethnographic account, Lakoff chronicles the meeting of two competing knowledge systems, and the role of economic, political, and epistemological value in that meeting. Again, for the purposes of this paper it is not important to lay bare the entirety of Lakoff’s account. At least two aspects of Lakoff’s text are important here. First, the significance and durability of psychoanalytic theory in the Argentine context demonstrates the inability of Shorter’s chronology to capture international contexts. The encounter between the French biotech company Genset and the local Argentine psychological tradition demonstrates “the challenges faced by a global technique such as genomics in assimilating mental disorder” (Lakoff, 3). Far from being an outdated psychological perspective, in Argentina psychoanalysis was importantly entrenched not only in the medical community, but in the culture more broadly. Indeed, the cultural context in which Genset found itself was one in which pharmaceutical practice was seen to be “dehumanizing” (Lakoff, 15). Rooted in the political pressure that befell the country in the 1970s, the “socially oriented psychoanalysis was brutally repressed”, but by the 1980s, after the “return to democracy”, “psychoanalysis again flourished” (Lakoff, 15-16). The important point here, then, is that the demise in the 1970s charted by Shorter cannot be seen to represent the influence of global psychoanalysis. Additionally, with the return of psychoanalysis to hospitals in the 1980s, both the “hiatus” and the “demise” of psychoanalysis seem inaccurate. 
The second point of interest in Lakoff’s work is the way in which subjectivity, and especially the “individual uniqueness” of patients, came under threat in much the same way it had with “encroaching neoliberalism and savage capitalism” (Lakoff, 15). The parallels with Hornstein should be apparent; subjectivity is one point on which the biomedical tensions turned. There is, however, a crucial difference in their accounts. For Hornstein the tension existed between professionals, while for Lakoff it was manifest in cultural as well as professional resistance. This is not a trivial point, for much like Mol in her reading of atherosclerosis, bipolar disorder, the object of Genset’s attention, was enacted in a very different way than it might have been in the American context. Indeed, bipolar disorder was merely a blood sample for the biotech company, but for the Argentine people its diagnosis was enacted as part of a hostile social order. Lakoff gives one reason to pause and consider the ways in which knowledge about the human mind is not merely a medical or biological question; in practice, it can be a hybrid positioning of medical and political subjects.


III. The Advantages of Complexity: Reflections on a Case Study


The texts drawn out above should, it is hoped, point to at least one thing: the absolute complexity of the situation. Far from being a simple case of biological versus social, psyche versus soma, or psychoanalytic versus psychological, biomedical practice is, in practice, much more varied than it might seem at first glance. It is certainly true that hostility and tension exist, but it is not clear that a history viewed through this dichotomous lens is capable of revealing the reality of the objects, actors, and practices involved. This is not a bad thing. One reason to rid ourselves of anxiety about complexity is that it can often function as a corrective historical method. In examining the historical reception of electroconvulsive therapy (ECT) among psychoanalytic thinkers, Jonathan Sadowsky exemplifies this corrective posture. Sadowsky argues for the inability of what he calls the “pendulum metaphor” to accurately describe the relationship between psychoanalysis and biological psychiatry. The notion of a pendulum describes the way in which the history of psychiatry has “swung back and forth” between these two poles. The issue, for Sadowsky, is that in practice the reception of ECT by psychoanalytic actors seems to trouble this dichotomy. This text is important because it engages Shorter directly on the issue of ECT and reorients the place of psychoanalysis in the history of ECT. Sadowsky demonstrates that the “sharing of assumptions” in the case of ECT is more “reflective of the reality of psychiatry’s history than the polarized picture” (Sadowsky, 2). Indeed, where Shorter saw outright psychoanalytic opposition to ECT as motivated by a perceived threat to psychoanalysis (Shorter, 222), Sadowsky sees a range of responses:

The psychoanalytic reaction to ECT was far from monolithic, ranging from those who were deeply hostile, to those who conceded efficacy but discouraged the practice for reasons of safety, to those who positively saw convulsion therapies as a useful adjunct to insight-oriented talk therapies (Sadowsky, 14)


Without going on at great length about his insights, it is enough here to say that both Weigart’s study of convulsive therapies (Sadowsky, 16) and Mosse’s interpretations of responses to convulsive therapy (Sadowsky, 16-17) would be at the very least problematic, and at worst inaccessible, on a pendulum-based account. In particular, since both were psychoanalytic thinkers, one would be perplexed to find, counter to Shorter’s characterization, outright support for ECT among these medical professionals in practice; they would quite simply have no place in that history.


This effect, of bringing to the fore complex, even seemingly contradictory, evidence is a product of thinking outside of dichotomies. It is also particularly befitting of the way in which biomedicine is coming to be conceived. As outlined above, a simple definition of biomedicine, one that depicts this emergent historical field as one in which the lab simply influences the clinic, seems disadvantageous given the vast array of complex relations that hold between the lab, the clinic, and indeed the entire social world. Given the vast amount of variation in biomedical practice, complexity seems a more appropriate starting point. For with complexity it is possible to see tension and cooperation at work simultaneously; to allow psychoanalysts the historical legroom to engage with, and perhaps strengthen, their experimental counterparts; to take on the global deployment of knowledge claims; and to achieve “that most sacred of historians’ goals: the reconstruction of proper context” (Sadowsky, 4).


Cambrosio, A. Keating, P. (2003) Biomedical Platforms: Realigning the Normal and the Pathological in Late-Twentieth-Century Medicine. The MIT Press: Cambridge, Massachusetts.


Dyck, E. (2008) Psychedelic Psychiatry: LSD from Clinic to Campus. The Johns Hopkins University Press: Baltimore, Maryland.


Hornstein, G. “The Return of the Repressed: Psychology’s Problematic Relationship with Psychoanalysis”. American Psychologist, 47(2) (1992): pp. 254-263


Kroker, K. “Historical Keyword: Biomedicine”. The Lancet, 371(9630) (2008): p. 2077.


Lakoff, A. (2006) Pharmaceutical Reason: Knowledge and Value in Global Psychiatry. Cambridge University Press: Cambridge, UK.


Lawrence, C., Weisz, G. (eds) (1998) Greater than the Parts: Holism in Biomedicine, 1920-1950. Oxford University Press.


Mol, A. (2002) The Body Multiple: Ontology in Medical Practice. Duke University Press: Durham and London.


Murphy, M. (2006) Sick Building Syndrome and the Problem of Uncertainty: Environmental Politics, Technoscience, and Women Workers. Duke University Press: Durham and London.


Sadowsky, J. “Beyond the Metaphor of the Pendulum: Electroconvulsive Therapy, Psychoanalysis, and the Styles of American Psychology”. Journal of the History of Medicine and Allied Sciences, 61(1) (2005): pp. 1-25.


Shorter, E. (1997) A History of Psychiatry: From the Era of the Asylum to the Age of Prozac. John Wiley & Sons, Inc.

[1] See Annemarie Mol’s discussion of atherosclerosis in The Body Multiple (2002) for an example of multiplicity in both objects and practices. In addition, Michelle Murphy’s account of “sick building syndrome” (2006) is a wonderful example of the ways in which women, in the late-twentieth-century office building, figured themselves as advocates simultaneously for feminist ideologies, occupational health reforms, and grassroots activism. Similarly, for Murphy, the objects that constituted the practice of identifying sick building syndrome were multiple: airborne toxins, a culture of comfort, and epidemiological statistics.

[2] The conflation of psychology and psychiatry, as well as that of the biological and the experimental, should be addressed here for the sake of clarity. What is being suggested is not that these categories are interchangeable, but rather that they are positioned in opposition to psychoanalytic concepts or actors by the historical scholars. Further work is needed to address the complexity of these categories.

[3] See Sadowsky, J. (2005).

[4] See Mendelsohn, J. in Lawrence, C. and Weisz, G. (1998) for an examination of the explanatory strength of complexity over a related dichotomy in the history of epidemics, between holism and reductionism.

[5] See Cambrosio, A., Keating, P. (2003).

Objects appear everywhere but rarely command attention. Things command much attention but appear under very particular circumstances. To venture into the world of things is to step out of the world of objects. It is not to abandon the object, but to bestow upon the great arbiter of the object, the subject, a peripheral role in the determination of things. The subject here is the end, the thing the means. This paper seeks to explore the ways in which thingness challenges the place and meaning of objects by inviting things into the broad academic category of museum studies. Within the field of Science and Technology Studies (STS), and certainly within academia more broadly, museums have become productive sites of investigation[1]. Significant attention has also been paid to the objects within museums, to their “biographies” as they move from place to place, living out their lives[2]. Arjun Appadurai, in his introduction to The Social Life of Things (1986), is concerned with precisely these movements. In order to determine meaning, Appadurai insists, one must “follow the things in themselves”, examining the “trajectories” that disclose the “human and social context” of things (Appadurai, 5). Appadurai’s “methodological fetishism”, the commitment to following the trajectory of things, is built upon a conviction that, along that trajectory, objects circulate in different “regimes of value” (Appadurai, 4). It will be the goal of this paper to examine those various regimes of value, the subjective interests that endlessly thing an object, within the context of a modern museum. By contrasting the simultaneous regimes of value that adhere to a Meroitic stele, on display at the Royal Ontario Museum, with an approach to museum objects from within the history of science, this paper will argue that the history of science, informed by critical and art theory, is much better placed to begin examining the museum collection.


Objects, Museums, and the History of Science


In his 2005 article for Isis, “Objects and the Museum”, historian Samuel Alberti tells a story about “the careers of museum things from acquisition to arrangement to viewing”, in an effort to expose the human relations that surround them (Alberti, 561)[3]. On one level, Alberti’s discussion is an achievement that takes seriously Appadurai’s remarks about value, movement, and meaning. On another level, there is a structural component to Alberti’s treatment that fails to allow the objects themselves to speak, translating them instead through established categories of museum life. While Alberti’s demonstration of the potential dynamics of acquisition, collection, and reception is insightful and admirably researched, the absence of a particular object to follow results in a number of generalized observations that this paper will have occasion to trouble.


Shaped by what Alberti calls the “three phases in the life of a museum object”, meaning is given a genesis, a shifting trajectory, and an end point (Alberti, 561). Examining the point of discovery first, Alberti uses just such generative language when he describes the “first in a series of convoluted meaning and context shifts” (Alberti, 562, emphasis added). One might surmise that the object had not been collected before this point, or that it lacked a particular, local meaning at the time of collection. Considering the attention paid to “premuseum lives”, either conjecture would seem odd. While it is true that Alberti makes note of the way in which various museums, donors, and kingly benefactors “channeled objects to the museum”, the reader is still left puzzled as to the proposed novelty of collection and the lack of any examined, pre-collected value (Alberti, 564). Similarly, and resting on the “meaning and identity” afforded by “previous owners”, Alberti’s second phase, “Life in the Collection”, is equally puzzling (Alberti, 565). While living in the collection, it is true that Alberti’s generalized category of object is given a lively biography. Here, in the collection, objects gain new meanings, push the taxonomic limits to which they have previously been ascribed, and function as boundary objects[4]. More problematic, however, is Alberti’s contention that, once collected, an object is “removed from circulation and rendered singular and inalienable” (Alberti, 565). One may well ask about the circulatory trajectories an object takes even after being acquired by a museum or personal collector; the results may well trouble the inalienable and singular characteristics that Alberti identifies.


The puzzling aspects of Alberti’s case should not be read as stark criticism but, rather, as an occasion for intrigue. Alberti is, it should be remembered, speaking of museums and collections by way of reference to methodological fetishism. Indeed, Appadurai appears alongside his anthropological cohort, Igor Kopytoff, in a footnote regarding the “biography” of objects (Alberti, 560, footnote 3). Before and after the publication of Appadurai’s edited collection, The Social Life of Things, much has been said of things. The remainder of this paper will examine one such contemporary text in theoretical discussions of things, before turning to a fetishized account of a particular thing found on display at the Royal Ontario Museum.


Thing Theory


Bill Brown is a professor of English and Visual Arts at the University of Chicago who has had occasion to write and talk about things a great deal (Brown, 1998, 2001, 2003, 2006, 2009). While Brown’s publications on ‘things’ are numerous, the scope of this paper necessitates a focal point for the sake of practicality. A 2001 article from Critical Inquiry, bearing the somewhat facetious title “Thing Theory”, is particularly apt in this regard. While Brown’s article sets out to provide a kind of program or manifesto for scholars of things, one of the most appreciable and theoretically telling ways of accessing the text is precisely through the facetiousness of the title. “Do we really need anything like thing theory the way we need narrative theory or cultural theory, queer theory or discourse theory? Why not let things alone?” Brown asks his readers (Brown, 1). The question, the facetious title, and the inherent humor in Brown’s work are important because they disclose a central theoretical point regarding things: that we should leave them alone. Brown is not suggesting that we ignore things altogether, after all his text is full of things he calls things, but rather that we give things some room, free from the confines of theoretical determinations, to breathe. The point is important because it drives home the epistemic place of things in contrast to that of objects. Objects, on Brown’s account, command little of our attention precisely because of their determinate, functional ever-presence in our lives (Brown, 3-4). Things demand our attention, however, because their thingness represents a “chance interruption” in that functional determination:


We begin to confront the thingness of objects when they stop working for us: when the drill breaks, when the car stalls, when the window gets filthy, when their flow within the circuits of production and distribution, consumption and exhibition, has been arrested, however momentarily. The story of objects asserting themselves as things, then, is the story of a changed relation to the human subject and thus the story of how the thing really names less an object than a particular subject-object relation.

(Brown, 4, emphasis added)


The changing of relations, between and for human subjects, is perhaps another way of expressing Appadurai’s notion of regimes of value, and yet another way of expressing Kopytoff’s object biography. It is this notion of change, of the incessant, intoxicating multiplicity of the object through its fleeting thingness, that may well work as an explanatory additive to Alberti’s call to follow the objects found in museums. Rather than present museum objects as having discrete meaning in particular places, the historian of science and of museum studies might do well to free those objects from the theoretical confines of phase-based reasoning, allowing them a moment to breathe.




Visiting the Roman Hall at the ROM


Tucked away in an unassuming corner of the Roman Hall in the Royal Ontario Museum (ROM) is a collection from the Kushite and early Nubian empires. The unassuming quality of the collection is found in its modesty, in its humble, peripheral role behind the collection of Roman busts that students and the public have come to worship (Figure 1). Ancient Nubia lacks the scholastic awe and public recognition that western scholars have come to revel in: the pantheon of Greek and Roman thinkers that stand at the origins of the most praised of intellectual lineages. Passing by the heroic heads, their command of the visiting subjects firm, something altogether more unknown peers out of the Nubian collection. Resting quietly in the corner is a sandstone stele with Demotic script carved into its soft, inviting surface. The form is familiar to a student of history, to be sure, as is the ancient script scrawled across its face. Something else, something resting quite outside the western consumptive tradition, outside the exhibitory worship of heady thinkers that guard access to the hall, is provoking the eye and the mind. The Demotic prose is worn, perhaps error-ridden in spots (Figure 2). The visitor is pressed up against the glass now, drawn in by the thingness oozing out of the porous stone, searching the accompanying information card for something to cling to (Figure 3). Not an erroneous text at all, nor a time-worn Demotic inscription, but a script in its own right: the scriptural form of the Meroitic language, sharing here in the history of the Egyptian empire.




The Meroitic period of ancient Nubia is one generally held to have begun around 300 BC and to have come to a conclusion close to 350 AD. David Edwards, in The Nubian Past: An Archaeology of the Sudan (2004), comes to this chronological determination by way of the cultural and power shifts occurring at the time. Specifically, the Napatan period that had come before had its center of power in the Napata region, evidenced by the royal burial sites located there (Edwards, 141). The Meroitic period, then, is “usually linked with the move of the royal cemetery from the Napata region to Meroe” and the “burial there of King Arkamani” (Edwards, 143). This shift can also be understood to have occurred within a period of shifting cultural identity for the Kushite people, one that began with deep Egyptian roots and that, with the move to Meroe, began a period of “development [independent] of Egypt” (Metz, 5). While a comprehensive historical treatment of the Kushite Empire and of Meroe would be ideal, there is little room in this paper for such an endeavor. This brief foray into the potential cultural and political separation from Egypt is not devoid of worth, however, as it points to a pre-collected sense of value that attaches itself to the Demotic-Meroitic stele. The linguistic presentation of the Egyptian-derived Demotic and the newly emerging Meroitic scripts together embodies this shifting cultural and political dynamic, infusing the stele with an ancient, pre-collected regime of value. In terms of the objects in a museum, Alberti’s conception of valuation beginning “first” at the point of acquisition is seriously troubled by such an account.


There is also, in this stele, an emerging sense of value that adds to an object-biography that is begun but not yet finished. For the visitor searching for a thing to occupy the pages of a paper on things, the singularity of the object is already starting to crack and splinter. A crack: the value here is one circumscribed by its contrasted, perhaps even contested, place in the Roman hall. A splinter: the mysterious, unknown wonder of the Meroitic language further challenging that same collected singularity that Alberti had found (Figure 3). A crack, this one more material than metaphorical, in a second stele resting beside the Demotic-Meroitic thing, the visitor drawn back into intrigue, to wonder (Figure 4). Perhaps the cracked stele, hosting only Meroitic text, is not cared for to the same degree as the thing first encountered. There is no information card to center this cracked thing, to bring it back to the safety of a subject-object discourse. Access to more information would need to come through other channels not open to the public; information housed in the labyrinth-like corridors of the curatorial and registration offices that snake around the ROM, guarded by access keys. A splinter: the value of knowledge about the objects on display is subject to a politics of private and public space in the ROM.


Visiting the Collections Manager and Registration Coordinator at the ROM


The Collections Manager for the Ancient Near Eastern and Ancient Egyptian department nods toward the guard on duty, who clips a “visitor #3” designation to the researcher as the smell of musty books and old things rushes through the now open door. The manager is excited about the visitor’s interest in the collection under his purview, relating a story about the great passage of time between the current and the last visitor to enter this carefully policed space[5]. Bags and pens are to be locked away before entering into the sanctity of the collections warehouse; the visitor complies. Prying at the cracks and splinters, the visitor is still a mess with intrigue over the care, or lack of care, bestowed on the second stele in the collection. A result of two very different valuations, the manager contends. The second Meroitic stele, one of the largest in the ROM’s possession, suffered at the hands of careless packagers at the Gebel Adda dig site[6]. Thingness explodes; the singularity of the collected item holds yet another variable valuation, that between the ROM and the workers at Gebel Adda. On this account, then, the display is perplexing. Size and Egyptological worth, the manager notes[7]. These are the qualities that the ROM found desirable in its choice to display both the Demotic-Meroitic stele and the large cracked Meroitic stele. Museum display itself is a regime of value caught up in educational, commercial, and economic tensions. On one hand, the ROM seeks to maximize awe and wonder on the showroom floor. This is altruistic on one level, but deeply competitive in a museum that prides itself on the relative size and worth of its collections. The public display and educational value are also at odds with the economics and academic practice of publication.
Nicholas Millet, the director of the Gebel Adda expedition, who also had close ties with the ROM as the previous Curator of the Egyptian Department, haunts the private halls of the curatorial building[8]. Regarded as a kind of Egyptological and linguistic obsessive by his peers at the ROM, Millet was notorious for his reluctance to publish what he considered incomplete knowledge about the stele in question[9]. As opposed to the public display and educational tendencies that fixed the value of the stele for the current manager, Millet seems to have considered it intellectually irresponsible to publish an incomplete translation of the carved Meroitic text. While the tension may not be overt, such that Millet would condemn the stele to a life in private, there is at least a difference in the character of the value afforded by the ROM and by Millet himself. One instantiation of this claim is the fact that Millet’s work, kept from publication in life, was published posthumously in the Journal of the Society for the Study of Egyptian Antiquities in a form that Millet still considered incomplete at the time of his death[10].


Knowledge about such things, about the value Millet placed on the stele and about the travel they underwent, requires a visit to another sequestered area of the ROM. The manager is authorized to escort the visitor to the registration department, where the registration coordinator can provide on-site (only) access to the large file the ROM keeps on all matters pertaining to the acquisition and trade of Gebel Adda items. Most interesting, for the purposes of this paper, are the ways in which, both methodologically at Gebel Adda and administratively at the ROM, Millet fixed the Meroitic stele as what Baudrillard would have called “loved [objects]”[11]. In contrast to his intellectual concerns surrounding publication, Millet can also be read as having had a personal connection with the products of the excavation, as evidenced by the beautiful hand-drawn registers from Gebel Adda (Figures 5 and 6). The excavation register could have, at the instruction of Millet as the site director, been composed of photographic records. Instead, Millet included in his list of staff the artist E. W. Mayer[12]. There is some ambiguity as to the official artist for the register in Figures 5 and 6, as no official names are listed. It can at least be concluded, by virtue of the drawings found in the register, that there was some preference to care for the objects in a different capacity than the sites in general[13]. Even localizing various regimes of value around individuals is problematic on this account, with Millet shattering his own singular valuation of the stele: on one hand a choice to add a personal touch (literally) to the registration, on the other the determination of objects as troubled intellectual and public material.


Following the stele through the curatorial and registration departments of the ROM also reveals an interesting aspect of the travel of museum objects. Originally a donation from the National Geographic Society, reluctantly valued by Millet at $1,000,000 for the purpose of a donation record, the Gebel Adda collection seems to have gone on several journeys after its arrival at the ROM[14]. In contrast to Alberti’s contention about the end of circulation, the ROM has a history of loaning portions of the Gebel Adda collection out. Indeed, the registration coordinator notes that loans were made back to the National Geographic Society as well as to other museums in the United States and Canada[15].


Thing Theory and the History of Science


Reflecting on the insights gained in the case of the Meroitic stele from the Gebel Adda excavation site, thing theory offers a particularly insightful way to get at the personal, intellectual, and circulatory aspects of museum objects. Concerning the personal and intellectual engagement with the Meroitic stele, the notions of pre-collected meaning and singularity that Alberti discusses are seriously troubled. Additionally, arguments concerning the inalienable quality of collected objects appear hard to maintain in the case of the Meroitic stele. Thing theory is not meant as a corrective to Alberti’s conception of museum studies, but rather as an additive. If Alberti is read as asking the history of science to favor the analysis of objects in museums, thing theory invites attentiveness to the thingness that explodes the static nature of those objects. Thing theory asks what happens when Alberti’s museum objects are examined through the specificity of a particular object, through its biography and the regimes of value that surround it. In the case of the Meroitic stele at the Royal Ontario Museum, the object’s thingness forces the historian to give up the prescriptions that would corner and bound meaning and value, looking instead to give breath and life to objects. To abandon programmatic accounts of museums and collections is to allow objects to speak for themselves, but it is also to multiply the work of the historian exponentially. This paper has treated only two of the several thousand items in the Gebel Adda collection; if nothing else, thing theory suggests that historians have a considerable amount of work ahead of them.


Figure 1.

Entrance to the Roman Hall at the Royal Ontario Museum

Students seated before teacher in front of busts of Roman scholars and politicians





Figure 2:



Stele fragment

Sandstone, Inscribed

Egypt. Gebel Adda. Cemetery 3, surface sand between pyramids 3 and 4

Meroitic. AD 200-300

Bears a short inscription in both Meroitic linear and Egyptian Demotic scripts




Figure 3.


Information Card for the Meroitic stele

Note: The mystery and unknown nature of the Meroitic script and the call for scholarship




Figure 4.




Sandstone, Inscribed

Egypt. Gebel Adda. Cemetery 4, Tomb 173

Meroitic. AD 200-300





Figure 5.


Gebel Adda Registration book showing the “Stele fragment” (Figure 2)

Note: The meticulous hand drawn images. The registration book contains a hand drawing of every one of the several thousand pieces from Gebel Adda.





Figure 6.


Gebel Adda Registration book showing the “Stele” (Figure 4)

Note: The meticulous hand drawn images. The registration book contains a hand drawing of every one of the several thousand pieces from Gebel Adda.






Alberti, S. “Objects and the Museum”. Isis, Vol 96 (4), 2005: pp. 559-571


Appadurai, A. (ed) (1986) The Social Life of Things: Commodities in Cultural Perspective. Cambridge University Press


Baudrillard, J. “The System of Collecting.” Trans. Cardinal, R. In The Cultures of Collecting. Ed. Elsner, J., Cardinal, R. London: Reaktion, 1994, pp. 7-24, 275-277. (Originally published as a chapter in: Baudrillard, J. Le Système des objets. Paris: Gallimard, 1968, pp. 120-150.)


Brown, B. “Thing Theory”. Critical Inquiry, Vol (28) 1, 2001: pp. 1-22


Edwards, D. (2004) The Nubian Past: An Archaeology of the Sudan. Routledge


Griesemer, J., Star, S. “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39”. Social Studies of Science, Vol 19 (3), 1989: pp. 387-420.


Haraway, D. (1990) Primate Visions. Routledge: Chapter 3, “Teddy Bear Patriarchy: Taxidermy in the Garden of Eden, New York City, 1907-1936”


Metz, H. C. (ed) (1992) Sudan: A Country Study. Library of Congress: Federal Research Division.


Millet, N. “The Meroitic Inscriptions from Gebel Adda”. Journal of the Society for the Study of Egyptian Antiquities, Vol 32, 2005: pp. 8-9


Millet, N. “Letter to Gillian Pearson, Acting Registrar of the ROM” 1990. Gebel Adda Registration File


Millet, N. “Gebel Adda Preliminary Report for 1963”. Journal of the American Research Center in Egypt, Vol 2, 1963: pp. 147-165


*Royal Ontario Museum Curatorial File for Gebel Adda Excavation


*Royal Ontario Museum Registration File for Gebel Adda Excavation


*Personal Communication from the Collections Manager at the Royal Ontario Museum, Department of World Cultures, Ancient Near Eastern and Ancient Egypt


*Personal Communication from the Registration Coordinator at the Royal Ontario Museum


*Names of informants withheld from document. Available upon request. Registration and curatorial documents only available on-site at the Royal Ontario Museum.

[1] See for example Alberti, S. “Objects and the Museum”. Isis, Vol 96 (4), 2005: pp. 559-571. Haraway, D. (1990) Primate Visions. Routledge: Chapter 3, “Teddy Bear Patriarchy: Taxidermy in the Garden of Eden, New York City, 1907-1936”.

[2] See Kopytoff, I. pp. 64-94 in Appadurai, A. (ed) (1986) The Social Life of Things: Commodities in Cultural Perspective. Cambridge University Press

[3] While Alberti should not be taken to represent the whole history of science as a field, he does admit to the novelty of his object, rather than collector, centered approach to museum studies in the field. See Alberti, S. “Objects and the Museum”. Isis, Vol 96 (4), 2005: pp. 559-571.

[4] See Griesemer, J., Star, S. “Institutional Ecology, ‘Translations’ and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907-39”. Social Studies of Science, Vol 19 (3), 1989: pp. 387-420.

[5] See Personal Communication from the Collections Manager at the Royal Ontario Museum, Department of World Cultures, Ancient Near East and Ancient Egypt.

[6] See Personal Communication from the Collections Manager at the Royal Ontario Museum, Department of World Cultures, Ancient Near East and Ancient Egypt. Also, concerning the employment records for Gebel Adda, see Millet, N. “Gebel Adda Preliminary Report for 1963”. Journal of the American Research Center in Egypt, Vol 2, 1963: pp. 147-165.

[7] See Personal Communication from the Collections Manager at the Royal Ontario Museum, Department of World Cultures, Ancient Near East and Ancient Egypt.

[8] See Millet, N. “Gebel Adda Preliminary Report for 1963”. Journal of the American Research Center in Egypt, Vol 2, 1963: pp. 147-165.

[9] See Personal Communication from the Collections Manager at the Royal Ontario Museum, Department of World Cultures, Ancient Near East and Ancient Egypt.

[10] For the publication see Millet, N. “The Meroitic Inscriptions from Gebel Adda”. Journal of the Society for the Study of Egyptian Antiquities, Vol 32, 2005: pp. 8-9. Concerning Millet’s reticence about incomplete publication and his dissatisfaction with his translation see Footnote 9, also Personal Communication from the Collections Manager at the Royal Ontario Museum, Department of World Cultures, Ancient Near East and Ancient Egypt.

[11] See Baudrillard, J. “The System of Collecting.” Trans. Cardinal, R. In The Cultures of Collecting. Ed. Elsner, J., Cardinal, R. London: Reaktion, 1994, pp. 7-24, 275-277. (Originally published as a chapter in: Baudrillard, J. Le Système des objets. Paris: Gallimard, 1968, pp. 120-150.)

[12] See Millet, N. “Gebel Adda Preliminary Report for 1963”. Journal of the American Research Center in Egypt, Vol 2, 1963: pp. 147-165. Page 148 includes a full list of staff at the excavation site.

[13] Ibid. Note the use of photography for the recording of sites and the absence of reference to photography of objects. Together with the artistic renderings in Figures 5 and 6, there is evidence that the objects were treated as requiring the care of a human hand.

[14] See Millet, N. “Letter to Gillian Pearson, Acting Registrar of the ROM” 1990. Gebel Adda Registration File for Millet’s reluctance to attach a dollar figure to the collection due to its size, provenance, and rarity.

[15] See Personal Communication from the Registration Coordinator. See also Royal Ontario Museum Loan Agreement Form, 1991, 1993, 1997. Gebel Adda Registration File.

This week’s reading from Natasha helped me work through an idea I have found immensely interesting, but hard to pin down. But first, some background on positivists and antipositivists (like nails on a chalkboard? I’ll keep it short and try not to butcher it):

(see Galison 785)

Positivist conceptions of science sprang from a dedication to the observational or empirical basis of scientific practice and progress. It is from the amassing of sense data that theories can be generated, and in light of that data that progress can be understood to occur. Science has a continuous substratum of observation upon which to rest its overall practice. We might point to Rudolf Carnap when thinking about this conception of science.

(see Galison 794)

Antipositivists, at least on Galison’s account, explicitly reject any stable scientific method or unproblematic treatment of observation. Thomas Kuhn’s conception of scientific progress went a long way toward what might now be described as the “social construction” perspective in science theory. For Kuhn, paradigm shifts force an incommensurable break in theory and, as a result, in observation. There is no linear picture of scientific progress on this account.

In Image and Logic, Peter Galison offered an argument about the place of material culture and practice in accounts of scientific history. On Galison’s account, the antipositivists were right to throw away the notion of a perfectly linear conception of science. What Galison takes exception to is the notion that science, as a result of the antipositivist periodization, becomes a series of incommensurable and discontinuous “blocks” (see figure 9.4 above). Galison introduced the notion of an “intercalated periodization” to cope with the discrepancy between the block periodization forced by understandings akin to the “paradigm shift” and the capacity of scientists, across many generations, cultures, and disciplines, to nevertheless build an ever-increasing body of knowledge. Intercalated views, says Galison, reorganize the bottom-up conception that both the positivists (observation->theory) and the antipositivists (theory->observation) had by introducing the role of the material culture of science. On this view instrumentation, observation, and theory all find themselves overlapping, all “quasi-independent”; that a break in theory and observation should occur at the same time is a matter of historical coincidence and not a fundamental quality of science. Most important, perhaps, is that out of this intercalated view Galison localizes what he feels to be one of the major forces binding science together. It is a kind of anarchistic and material pluralism: “it is the disunification of science – the intercalation of different patterns of argument – that is responsible for its strength and coherence” (Galison, 781). An anthropological romp through the “subcultures of physics” (theory, experiment, and instrumentation) results in a rather ingenious linguistic description (beginning on 803) of the ways in which these worlds are able to communicate and foster ways of working and knowing together.

I find the visualized periodizations and the entire framework particularly provocative when thinking about all aspects of scientific practice, but I also get stuck thinking of ways that this framework can be productively adapted outside the world of physics. After reading Modeling Proteins, Making Scientists I have one answer. I think Myers does a particularly good job of steering away from singularly theoretical or observational accounts of the work protein crystallographers engage in, and answers Galison’s invitation to add more subcultures to the periodization. I get stuck working with Galison precisely because the framework seems to fit the history of physics so well; the contestations over image and logic traditions in post-WW2 industrial science and the rich, far-reaching history that instrumentation from both worlds enjoys (bubble chambers, spark chambers, etc.) both fit nicely into Galison’s project. As I read Myers I began to think that the contestations over statistical and structural accounts of molecular goings-on likewise rend time and place into an intercalated mix of representation, embodiment, instrumentation, theory, and observation. Another subculture that might be added to the intercalated periodization is pedagogy, so crucial a site for the reproduction of knowledge in the world Myers describes.

In any event, I think Myers’ piece points nicely toward the ways in which sites of contestation, complex and embodied learning, and subject-object entanglement display the kind of disunification Galison is striving after. It didn’t surprise me to find Image & Logic tucked away in Myers’ bibliography, but it did surprise me how portable the intercalated framework is becoming as I continually rethink its extensions. I don’t want to suggest that there is a 1:1 mapping of Myers’ work onto Galison’s, but I do want to suggest that protein modelling provides a refreshing alternative to linear/progressive or incommensurable/block periodized accounts of scientific practice.

Galison, P. (1997). Image & Logic: A Material Culture of Microphysics. Chicago: University of Chicago Press

Myers, N. (Forthcoming). Modeling Proteins, Making Scientists: Rendering Molecular Life in the Contemporary Biosciences. Duke University Press

I apologize for the lateness of my post; I have only just returned home…


I realize that our postings generally reflect on the week’s readings, but I felt that since this was effectively part two of the “Rendering” weeks I still had some leeway with regard to Barad and particularly the idea of agential realism that we only had a chance to touch on last week. Also, it is almost midnight and this was my alternative, unpublished post from last week, which means much less work for me and a bedtime that allows me to at least appear functional tomorrow 🙂

On to the post…

I wanted to explore a pair of renderings that, when I first encountered them, clarified a great deal of the different epistemological approaches that Einstein and Bohr had as well as the way Barad made use of Bohr. I remember reading Barad with a great deal more clarity once I had examined and explored the visual representations of Einstein’s thought experiments and Bohr’s real-world renderings of the same.

The Bohr-Einstein debates, as they are often hyphenated, denote a set of discussions that went on between the two well-known physicists regarding the inadmissibility, in Einstein’s mind, of Bohr’s quantum ontology. I find that a good way to understand the debates is through one of Einstein’s thought experiments, which aimed to demonstrate something incompatible with quantum mechanics.

Einstein was notoriously uncomfortable with the idea of indeterminacy and chance in nature. The type of ontology found in this type of thought is the kind Barad described in Classical terms as one that “include[s] an autonomously existing world that is describable independently of our experimental investigations of it” (Barad, 168). For Einstein, the probability in quantum mechanics arose from a kind of epistemological probability: we simply don’t know enough to say what will happen; if we knew more, we would certainly know the result. Bohr, on the other hand, had a discernibly metaphysical probability in mind: probability exists in the phenomena, indeterminacy is a part of the ontology, and even if we knew everything there is to know, we could still not predetermine the result. These are the grounds upon which Einstein mounted most of his attacks on QM (the EPR paradox is a particularly good example of this discrepancy in understanding, but perhaps one that is a bit difficult to illustrate fully in this post).

The thought experiment I am interested in, and one that I think makes particularly clear Bohr’s “philosophy-physics”, as Barad terms it, is one from the 1930 Solvay conference concerning the conjugate variables time and energy. Einstein brought with him what he surely thought would serve as a striking blow to the quantum mechanical world:

Einstein suggested in a thought experiment, which he drew out in his notoriously parsimonious fashion, that it was possible to determine both time and energy in a given system. The setup would feature a box with a hole in it, through which a discernible amount of radiation would be let out according to an internal clock. The genius of Einstein’s thought experiment lies in the fact that time could be known (as the clock would be set) and that energy could be determined through the famous formulation of energy that Einstein himself developed (E=mc²). All one would have to do is weigh the device before and after the timed release of a single photon, and both time and energy (through the equation) could be accurately determined. The implications for quantum mechanics are staggering, since time and energy are considered conjugate variables (variables for which, as one comes to know the value of one with greater certainty, one loses the capacity to know the value of the other with certainty) and to determine them simultaneously would violate the uncertainty principle.
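The logic of the box can be sketched in rough notation (my own reconstruction for clarity, not Einstein’s notation):

```latex
% The internal clock fixes the moment of the photon's release,
% in principle exactly:
\Delta t \to 0
% Weighing the box before and after the release gives the photon's
% energy through mass-energy equivalence:
E = \Delta m \, c^{2}
% Knowing both simultaneously would violate the energy-time
% uncertainty relation for the conjugate pair (E, t):
\Delta E \, \Delta t \geq \frac{\hbar}{2}
```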

The next image is one that Bohr (and presumably some others at the conference) developed to realize the experiment Einstein had proposed:

The point Bohr always wants to make, and always has to make when Einstein brings one of his beautifully simple drawings to the table, is that experimental setups themselves come to play an important role in the determination of some variable. The setup pictured above is one that we might expect to find in the real world in its three dimensional fullness. The very nature of the drawing, with its giant bolts fastening the apparatus to a stationary table, is indicative of Bohr’s focus on measurement, practice, experimentation, precision, and the limits of abstract speculation. When Einstein says, ‘I have thought of a way to avoid some of the impracticalities of your strange theory Bohr’, the Dane responds, ‘Build it and we will talk’.

The problem with Einstein’s reasoning comes about when we come to build THIS particular setup. Bohr wants to make it clear that theory must be understood as part and parcel of practice; he “situated practice within theory, since to ignore practice is to get the theory wrong” (Barad, 166). This setup includes a method for weighing the device, the clock, and the sliding door and hole for photon release. When the photon is released and the device moves up and down on the scale, bouncing until at rest (giving a determinate value for mass and subsequently energy), it can be considered an accelerated frame of reference. Following Einstein’s own demonstration of the equivalence of the effects of gravity and phenomena within accelerated frames of reference, the clock will be subject to gravitational time dilation. Time, as we come to know energy with greater and greater exactness, loses its definite value.
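Bohr’s rebuttal can be put as a back-of-the-envelope derivation (this is the standard textbook reconstruction of the argument, not a quotation of Bohr):

```latex
% Weighing the box to mass precision \Delta m over a time T means
% resolving the gravitational impulse T g \Delta m, so the box's
% momentum uncertainty obeys \Delta p \lesssim T g \, \Delta m;
% by the position-momentum uncertainty relation:
\Delta q \gtrsim \frac{\hbar}{\Delta p} \gtrsim \frac{\hbar}{T g \, \Delta m}
% By Einstein's own gravitational time dilation, a clock whose height
% is uncertain by \Delta q runs at an uncertain rate:
\frac{\Delta T}{T} = \frac{g \, \Delta q}{c^{2}}
% Combining the two, and writing \Delta E = \Delta m \, c^{2}:
\Delta T \gtrsim \frac{\hbar}{\Delta m \, c^{2}} = \frac{\hbar}{\Delta E}
% The uncertainty relation is recovered, not violated.
```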

The inner workings of quantum mechanics, relativity theory, and physics in general are terribly important, but can largely be set aside by those of us with little experience in the field (myself especially) in order to get at Barad’s understanding of Bohr’s “philosophy-physics”. The thing that these images bring into focus, at least for me, is practice. Bohr is all about measurement; he doesn’t care to talk about God’s plan, the really real world out there, or any other metaphysics. This is the kind of down-to-practice epistemology that Barad wants to bring out; the kind that Objectivity and Hacking’s microscopes really got us to focus on; and the kinds of visualization, perception, and observation that “situated knowledges”, “performativity”, “constrained constructivism”, and “immutable mobiles” asked us to pay attention to. The worlds of experimentation, objectivity, observation, and even theory are all thoroughly local; they attest to the ways in which the subject and object of knowledge always work together, in this case in a very real way, to generate the renderings we are exploring.

I found the Bohr-Einstein debate, the various thought experiments, and especially the EPR paradox particularly illuminating when I first encountered Barad and I think you guys might enjoy some of the source material from which these images/ideas sprang:

“The Bohr-Einstein Dialogue” from Niels Bohr: A Centenary Volume, A.P. French & P.J. Kennedy, eds., Harvard University Press, 1985, 121-140


Barad, K. (1996). “Meeting the Universe Halfway: Realism and Social Constructivism Without Contradiction”, in Nelson, L. H., Nelson, J., eds., Feminism, Science, and the Philosophy of Science. Springer. pp 161-194

I believe Barad also published a book in 2007 under the title “Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning”, but I only have access to the 1996 essay.

Within the arena of science studies, critical analysis has often focused on experimentation. Indeed, experiment has long been extolled as both the defining characteristic of science and its greatest strength. In the late 19th and early 20th century, positivist arguments sprang up describing the nature of scientific progress. For scholars like Otto Neurath, observation was the bedrock of science; it allowed experiment to see behind nature’s veil and build a knowledge base that moved incrementally toward truth. Opposition to these positivist claims came from anti-positivist thinkers like Thomas Kuhn. Kuhn found that science did not follow a linear trajectory from observation to theory, but was rather fragmented. Breaks in theoretical understandings brought about new paradigms of understanding, constraining what could be observed in the first place. While many people have treated these two analytic camps as wholly distinct, there is a shared belief in what Hans-Jörg Rheinberger has called the “totalization” of science (in Biagioli, 1999, p 418). Both positivists, in their “[growth]…toward truth”, and anti-positivists, during non-revolutionary “paradigms”, describe science as “a normative process… [with] an overarching chronological coherence to the process of gaining scientific knowledge” (Biagioli, 1999, p 418). While Rheinberger goes on to explore the temporal aspects of knowledge production and its depiction in science, for this paper it will suffice to add a parallel reading of the function of experimentation, rather than to extend his arguments. This paper will argue that there is another conclusion to be drawn from the debates between the positivists and the anti-positivists, whose acceptance of experiment as a “matter of fact”[1] hides the intricacies that work to carry experiment through to practice.
Experimentation, as a sharply defined practice in science, does not exist; rather, it is the capacity of the experimental system to be reconfigured at critical times that lends strength and, crucially, unity to scientific practice. This paper will first explore the origin of experimental practice in science, working to develop what the “experimental system” is actually made up of. Attention will then be turned to the treatment scholars have given individual components of experiment. It will be seen that it is not in a “totalization” of science, and of experimentation, that we can find its most precious of virtues, but in the delicately balanced bricolage of experimental work.

The making of experiment

It is often taken for granted that experiment is a part of scientific life. Simon Schaffer and Steven Shapin were among the first to ask what an experiment is in the first place, and why we actually perform experiments (Schaffer, Shapin, 1985, p 3). In Leviathan and the Air-Pump, Schaffer and Shapin address the debates between Thomas Hobbes and Robert Boyle over appropriate methods of knowledge production. For the purposes of this paper, however, it is not the debate between these natural philosophers that is of critical import. It is the result of those debates, the category of experimentation championed by Boyle, whose characteristics will be important[2]. Schaffer and Shapin demonstrate that Boyle made use of three technologies to produce his experimental matters of fact. First, Boyle made extensive use of “material technologies” in the form of the air pump (Schaffer, Shapin, 1985, p 26). The air-pump embodied the objectivity of experiment[3]. Facts, for Boyle, were “machine-made”, quite elevated from human perceptions (Schaffer, Shapin, 1985, p 26). The authors note that it is in the “capacity to enhance perception and to constitute new perceptual objects” that the air-pump, and many other material technologies of the time, found their place (Schaffer, Shapin, 1985, p 36). Similarly, the capacity of instrumentation to enhance the senses calls attention to the foundation of Boyle’s empiricism: observation. Observation had a dual role, however. It figured not just in the material technology, in which the qualifying “observation” would be made by the objective scientist, but in the public witnessing of experiment as well. The second technology Boyle used to his advantage was a “literary technology” which allowed “virtual witnessing” of experimental processes (Schaffer, Shapin, 1985, pp 55-57). In order for knowledge to be empirically based, and experimentally sound, a “multiplication of [witnesses]” was necessary (Schaffer, Shapin, 1985, p 57).
Witnessing was facilitated through public access to laboratories, through inscriptions containing appropriate practices, and through construction details for the replication of experiments (Schaffer, Shapin, 1985, p 59). The last technology that Schaffer and Shapin describe is a “social technology”, which “incorporated the conventions experimental philosophers should use in dealing with each other and considering knowledge-claims” into discourse about those experiments (Schaffer, Shapin, 1985, p 25). The conventions that Schaffer and Shapin describe are not important for the purposes of this paper; rather, their presence, and the implications they have, are. An important presence in experimental work on the air-pump was the “long standing debates over whether or not a vacuum could exist in nature” (Schaffer, Shapin, 1985, p 41). Boyle’s social technology was constructed such that “there were to be appropriate moral postures, and appropriate modes of speech, for epistemological items on either side of the boundary that separated matters of fact from the locutions used to account for them: theories, hypotheses, speculations, and the like” (Schaffer, Shapin, 1985, pp 66-67). Boyle made it taboo to talk of matters of fact without certainty, and he ensured that debate was conducted with the same gentlemanly conventions that would operate in social settings. But Boyle did more with his social technology. Boyle described the way theory should be treated in discourse; once an experimental program had distilled from theory to matter of fact, there was a new way to speak of it. These three technologies, material, literary, and social, all worked together to embody objectivity, instrumentation, observation, and theory in various ways. This particular system of experimentation has since become the common practice of laboratory work. However, this is not to suggest that all experimenters and laboratories deploy these three technologies to the same end as Boyle.
Rather, the examples taken from Boyle show the way in which particular and local understandings of these categories can become embedded in more than just those particular and local results. It is not Boyle’s, or Shapin and Schaffer’s, three technologies that exist today, but their descendants. Notions of particular forms of objectivity, of the power of observation, of theoretical discourse, and of laboratory and scientific practices are the components of rhetoric that popular discourse about science has inherited. It is precisely these notions, these matters of fact, which need to be explored, and it is to an exploration of each of these seemingly categorically unproblematic characteristics that this paper now turns[4].


Making Matters of Concern


If Robert Boyle deployed his three technologies, leveraging support for his experimental program, and turning that program into a matter of fact, then the aim of the following sections will be to do the opposite. An exploration of objectivity, observation, instrumentation, and theory will work to rob the category of experimentation of its autonomy over science and to demonstrate that, against our more conventional notions of what experiment actually is, there is no grand narrative to be found. In Rheinberger’s terms, it will be to remove the normative element from science and to destroy the chronological cohesiveness that history affords to scientific progress.  Further, it will be to extol this vision of science; a vision of disunity, of difference, and of interpretive flexibility, as the root of scientific strength.


In the case of Boyle and his air pump, objectivity is used in what Lorraine Daston would call a “hopelessly but revealingly confused [way]” (Biagioli, 1999, p 110). On the one hand, Schaffer and Shapin brought to light the way in which material technologies can have an objectifying quality; that is, they can do away with the failings of the human faculties, even enhancing them. On the other hand, objectivity figured prominently in the efficacy of Boyle’s literary technology, suggesting experimental results lacked the taint of human subjectivity due to the multiplication of witnesses. How best to reconcile the dual role objectivity seems to play with a descriptive image of scientific progress? Daston herself has explored the changing roles that objectivity has played not only in science, but in human discourse in general. Daston notes that the word “objectivity”, though often marshaled now as a removal of individual perspective in matters of science, was used originally far from the realm of scientific discourse. The initial use of the word was “of ontological, and not epistemological import”; indeed it appears in the works of Descartes when describing an “objective reality” (Biagioli, 1999, p 112). Even when the familiar “perspectival” connotation was attached to the word, it was found not in discussions of “material nature… [but in] that of ‘catholic and universal beauty’” (Biagioli, 1999, p 114). Daston also describes the alternative uses of objectivity that were found in Boyle’s material and literary technologies. An important parallel might be drawn here, as Daston notes that the adoption of any familiar sense of the term “objectivity” followed “global and local…changes in the organization of science” (Biagioli, 1999, p 120).
Importantly, it is within Boyle’s literary technology, within what Daston calls “impersonal communication and a refined division of scientific labour”, that aperspectival objectivity manifests itself fully, at least in a way that is known to contemporary accounts of scientific work (Biagioli, 1999, p 119). It is in the chronological aspect of the adoption of objectivity in science that resonance with Rheinberger’s concerns can be found.


Observation enjoys equally complex treatments within the history and philosophy of science. Observation also plays an obvious role in the empirical foundations of the experimental matter of fact. It was seen that observation had two prominent roles in Boyle’s experimental program; it was at once a cornerstone of his own interaction with experimental phenomena as well as of the virtual, and multiplied, witnessing his literary technology embodied. How can these two seemingly different operational functions of observation be reconciled with a satisfactory image of science? The answer: they do not need to be. Ian Hacking, in his introductory text on the philosophy of science, Representing and Intervening, has explored the status of observation in science. Hacking notes that observation has taken a central role since the positivist thinkers and was rarely used before them, even by the most prominent of philosophers like Francis Bacon (Hacking, 1983, p 168). The positivists claimed a sharp distinction between observation and theory, suggesting, for instance, that observation always came before theory. What Hacking calls “realist” and “idealist” responses argued, respectively, that observation and theory had only a vague distinction, or that all observations were theory-loaded from the outset (Hacking, 1983, p 171). These two camps, the positivists and their opponents (whether realist or idealist), should remind us of yet another similarity that parallels Rheinberger’s. While there is ground for competing notions of primacy in the observation/theory debate, there seems to be a common conviction that observation, like objectivity, enjoys some stable and unwavering definition. Hacking has shown the opposite to be true. In fact, contrary to the perceived importance of observation in philosophical discourses surrounding science, Hacking has described observation as a relatively small part of experimental life.
Indeed, the scientist does not spend their “life…making…observations that provide the data to test theory”; rather, they are more often engaged in “[getting] some equipment to exhibit phenomena in a reliable way” (Hacking, 1983, p 167). The more apparent observational practices that go on, says Hacking, are those that require a sort of ‘knack’ for noticing anomalies in experimental set-ups or results (Hacking, 1983, p 167). Even the question of ‘what it is to observe’ has come under fire from Hacking. He notes that scientists rarely even observe in any physical sense of the word; rather, they “observe objects or events with instruments” (Hacking, 1983, p 168). The parallels with Boyle should be apparent. In keeping with the tone of this paper, however, it will be important not to decide which observational niche Boyle’s experimental matter of fact occupies, but to recognize that, contrary to positivist and anti-positivist conceptions, observation can be theory-loaded, purely observational, aided or unaided, and still work to strengthen and demarcate scientific practice.


Lorraine Daston and Ian Hacking both present arguments that have resonance with another of the aforementioned characteristics of experimental life: instrumentation. For Daston, instruments figure heavily into conceptions of “mechanical” or “method” objectivity. For Hacking, instruments can be used to qualify the category of observation in interesting ways. But how can instruments be seen as more than just background noise in the analysis of science? How can those instruments be seen as vibrant sources of analytic interest? One answer is to be found within Peter Galison’s Image & Logic: A Material Culture of Microphysics. Galison has worked hard, much in the same spirit as this paper, to strip down the grand narratives of positivists and anti-positivists and reveal the strength and beauty of disunity in science. If the question of this paper is “what reason do we have to believe that experimentation is a necessary condition for scientific practice?”, Galison might answer that there is no good reason. Galison destroys the way in which positivists and anti-positivists alike can speak of theory and observation. There is no longer a foundation of observation, nor are there incommensurable breaks in theory. There is, rather, an “intercalation” of theory, observation, and instrumentation (Galison, 1997, p 799). Again, the point will not be to find examples where breaks in theory, or observation, can be seen to be bordered by instruments. This would be to follow Galison’s argument which, admittedly, he expounds with much more rhetorical skill than this paper can. The point will be to take the case of instrumentation seriously when considering its interaction with theory and observation, especially in the case of Boyle and the experimental matter of fact. What exactly are instruments for Boyle? Are they the embodiment of a theory? Do they structure an observation? Or do they perhaps conjure up images of mechanical objectivity?
Asking whether or not the seemingly dynamic role of instrumentation should alter static understandings of experimentation is crucial for the purposes of this paper.


Dudley Shapere has noted close relationships between theory, instrumentation, and observation that echo Galison’s intercalation and inform an understanding of what it is to do an experiment. Shapere is interested in the question of theory-laden observations and the implications those theories have for the autonomy that science enjoys. In the case of Boyle, it might be asked to what degree theoretical commitments or theoretical axioms shaped what it was to do an experiment. If Hobbes and Boyle were debating over competing views of knowledge production, certainly their commitments to plenism or vacuism, respectively, should be taken into account. Surely Hobbes’ commitment to geometric precision and Boyle’s confidence in probabilistic certainty shaped not only their debates, but Boyle’s experimental form of life and those characteristics that contemporary experiment can call its ancestors. Framed differently, the concern is of sociological import. Constructivist accounts of science largely rely on natural causes for controversy and social closure in those controversies. To what degree, Shapere asks, does theory, or socially constructed matters of fact, shape observation or experiment? It would seem to a large degree in Shapere’s case on neutrino research. Shapere shows the way in which theory about the source, transmission, and reception of neutrinos “plays an extensive role in what counts as an ‘observation’” (Shapere, 1982, p 505). This idea is not troubling for the scientist, but it is the foundation for positivist and anti-positivist debates surrounding experimentation. Shapere localizes this dispute in the language with which each side approaches the same object. For the philosopher, says Shapere, observation has a perceptive value to it as well as epistemic import, but for the scientist, perceptive and epistemic merits have become separated when dealing with the category of observation (Shapere, 1982, pp 507-508).
If objects simply cannot be accessed with the human eye, observation can still take place for the scientist, through the various instruments and inscription devices at their disposal (Shapere, 1982, p 508). This is not true for the philosopher, who demands to know “how perception could give rise to knowledge or support beliefs” (Shapere, 1982, p 508). Thinking back to Boyle, can he be read as advancing a scientific or a philosophical understanding of observation? Are his theoretical assumptions a threat to the validity of his observations? Again, it must be stressed that a single answer to these questions should seem absurd by this point. It may be that theory did or did not structure his observational findings, and indeed the debate between himself and Hobbes; the important thing to remember is that theory can play the role of facilitator, as in the neutrino case, or it can be read as an inhibitor of observation and experimentation.


Thinking Beyond


In the case of objectivity, as in the cases of observation, of instrumentation, and of theory, there remains a great deal of flexibility in the roles that each of these characteristics can occupy. What is it, then, to wonder about what experiment is, or in which form it must exist? First, it is a question of semantic import. To answer the question ‘what is experiment?’ or the closely related ‘which form of experimentation should be preferred?’ one can only wrestle with ambiguity. The answer to the first question lies in a temporal characterization. For there is a point when even the most basic of experimental characteristics, like observation, objectivity, instrumentation, and theory, are in flux; their roles, relative strengths and weaknesses, and their power to shape scientific practice can, and should, cater to any problem for which a scientific answer is desired. Where would Boyle’s experimental form of life be without his particular understanding of mechanical objectivity? Without his literary technologies bolstering a particular form of observation? The answers to questions about experimentation are not easily arrived at, nor are they always the same answers. Whatever the function of experiment, it cannot be denied that a great deal of truly impressive work has been done with various incarnations of the experimental form of life. Second, it is a question about power and authority. If an experimental form of life is to become the matter of fact to be emulated, what power does it entitle to those that practice that form of life? Similarly, on what grounds does authority rest? The case of string theory is a classic example of the contested place of experimental practice in some of the most highly funded science on the planet. Indeed, debates between string theorists and experimental physicists might be linked back to their particular understandings of objective observations, theories, and instrumentation.
Where is the line to be drawn, once experiment is removed, as in the case of string theory, beyond which the practice can no longer be described as science? When can it gain the mantle of autonomy? If a demarcation must be drawn between science and non-science, on what grounds can it be drawn? The conclusion that this paper might be said to arrive at is at first very simple but, upon further inspection, subtly powerful. The logical empiricism of George Reisch suggests that science be allowed to demarcate itself. That is, that those theories, observations, instruments, and otherwise ‘practices’ that science chooses to involve itself with, on local or global levels, be regarded as science. It would be a mistake to apply the label of “logical empiricism” to the demarcation of science, since it would simply take one back to a grand narrative, depicting science as a cohesive practice. Rather, it is to the freedom that Reisch’s logical empiricism embodies that this paper would call attention. What is scientific or unscientific becomes largely a case of that which science does or does not make use of. This leaves room to reconcile the numerous roles of experimental characteristics, without accepting any single definition therein. Boyle and Hobbes, regardless of how the history of science has been written, both retain the mantle of “scientist” without the burden of experimentation. It is only in restraints, in the application of normative characteristics, and in grand narratives that science can be burdened. It is in recognition of the capacity of experimental systems, and their constituent parts, to adapt to local and global needs that science can be free to produce knowledge without philosophical constraint.




Biagioli, M. ed. (1999). The Science Studies Reader. New York: Routledge


Galison, P. (1997). Image and Logic: A Material Culture of Microphysics. Chicago: University of Chicago Press


Hacking, I. (1983). Representing and Intervening. New York: Cambridge University Press


Shapin, S., &amp; Schaffer, S. (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. New Jersey: Princeton University Press.



Shapere, D. (1982). “The Concept of Observation in Science and Philosophy”. Philosophy of Science, 49, 485–525.


[1] “Matter of fact” here is used for two reasons. First, Robert Boyle used the term explicitly when referring to the outcome of his experimental program, and second, it is to call attention to the work Bruno Latour has done regarding the abandonment of matters of fact for matters of concern (Latour, B. “Matters of Fact vs Matters of Concern”. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford: Oxford University Press, 2005). This paper follows in those footsteps, attempting to demonstrate the nature of experiments as a matter of concern.

[2] The focus on Boyle is a result of his historical “victory” in the Hobbes and Boyle debates. As the authors of Leviathan and the Air-Pump suggest, it is Boyle’s experimental program that won, and that is still with us today. Given that this paper deals with the construction of an “experimental system” as a matter of fact, an object, or a category that is unproblematic, it is on the victor and his legacy that this paper must focus.

[3] The use of objectivity here is to call attention to the way it is used to leverage support for the material technology Boyle employed. Chapter 6 of Leviathan and the Air-Pump describes, as the authors note, “a striking instance of this usage”, and should be reviewed for further understanding.

[4] This is not meant to be a complete list of the constituent parts of experimental practice, rather, it is meant to be a short list, derived from a common source, that is seen to persist into contemporary science and that can each be made problematic for the purposes of this paper.


“Our intellectual life is out of kilter. Epistemology, the social sciences, the science of texts – all have their privileged vantage point, provided that they remain separate. If the creatures we are pursuing cross all three spaces, we are no longer understood. Offer the established disciplines some fine sociotechnological network, some lovely translations, and the first group will extract our concepts and pull at the roots that might connect them to society or to rhetoric; the second group will erase the social and political dimensions, and purify our network of any object; the third group, finally, will retain our discourse and rhetoric but purge our work of any undue adherence to reality – horresco referens – or to power plays. In the eyes of our critics the ozone hole above our heads, the moral law in our hearts, the autonomous text, may each be of interest, but only separately. That a delicate shuttle should have woven together the heavens, industry, texts, souls, and moral law – this remains uncanny, unthinkable, unseemly.”


– Bruno Latour (1993, p. 5)


It is by now a truism of STS, as a field, that scholars find themselves more occupied with disagreement and contention than with a unified context in which to place the elusive actors about whom we claim to know…something. Camps of sociological, anthropological, philosophical, and historical scholars, it is true, can be seen as generally united (though contention never fully vanishes); each tends to position its objects of study in such a way that they become accessible to particular forms of analysis. The effects of social groups and power relations, for instance, come to dominate cultural beliefs, epistemological concerns, and historical rhetoric; the case, however, is not unique to sociological inquiry. Indeed, each group of scholars takes up a privileged vantage point and does away with contentious lines of inquiry that might force intellectually uncomfortable positions. The intractable problem we find across these camps has for too long had the Alexandrian solution applied to it; the Gordian knot, unwilling to be unraveled in all its complicated beauty, was simply cut into pieces that were more intellectually palatable[1]. This problem is by no means a new line of inquiry and has been expressed by a number of authors in much more vivid terms[2]. This paper seeks to explore methodological alternatives to the Alexandrian solution by examining where problems in analysis arise, and how they might be subverted to accord more fruitfully with the complicated enterprise of science. This paper represents the culmination of a project begun by a great number of students producing, at first, ethnographic records of laboratory practices in a university setting. The ethnographic data was then distilled into individual analyses, organized by groups and spanning four different laboratories. The entire set of local analyses produced, by all individuals, from all labs, serves as the groundwork for this non-local analysis.
It will be argued that the individual analyses point not to a perfect, unified picture of scientific practice generally, but to the way in which the collective nature of the investigation serves as a methodological answer to the problem presented by the Gordian knot. It is not by a bold stroke, a stroke that severs the ties that bind scientific practice together, that the scholar can gain better access to science, but rather by appreciating the complexity of the many knots that bind it up so intricately.



The Influence of Approach

The project that this paper seeks to bring together in a cohesive manner was ethnographic at its heart. The rough data, from which individual analyses were later derived, comprises interviews, observations (in lab and in class), laboratory-world investigations, and readings of scientific texts. The investigators involved in the project positioned themselves along one of these four paths and set out to obtain data from these vantage points. This was one of the first bold strokes, one of the first Alexandrian moves, demanding a categorical split in the object of study. A single laboratory became the source for four distinct types of data. The first analyses produced, it should be noted, did bring together the four disparate data sets. The bifurcation, however, had already occurred, and there were glaring examples of it to be found in the resulting analyses. The quotation from Latour at the beginning of this paper, along with an examination of individual analyses, will serve to show lines of inquiry beginning to develop their own “vantage points”, subsequently ignoring “uncanny, unthinkable, unseemly” interconnections (Latour, 1993, p. 5)[3].


The Social Scientist

An investigator engaged in lab-world investigations showed a distinct proclivity toward economic and political influences on the laboratory. Carla Murphy presented a sophisticated reading of the funding her laboratory receives and came to the conclusion that “the interests and motivations of the members of the deciding organizations affect the type of research that is done” (Murphy, 2009, p. 9). Supporting evidence from other group members serves to strengthen this argument; Murphy notes that her group members Prital and Colin have shown the degree to which decisions to pursue graduate work are largely dependent on “social factors” (Murphy, 2009, p. 11). The lab-world investigation Murphy presented is a socially constructed account of science: the laboratory is guided by particular economic trends, which serve to structure the research done therein. Similarly, the supporting evidence used depicts a socially constructed enterprise, accounting for the choice to become an investigator in a particular lab with direct reference to “social factors”. The point to take from this account is not one of truth or falsity, but one of selective representation[4]. An example could just as easily have been given of the way in which the laboratory, in turn, comes to shape the world. Indeed, another member of Murphy’s group had done extensive interviews and had come to the conclusion that the lab “is not the start-up or the culmination of the research on the aforementioned drugs” and that “work can be requested or implemented into other forms of research” (Young, p. 23). Colin Young has presented a line of inquiry that might, coupled with Murphy’s, paint a more complete picture of the work inside the lab[5]. Young demonstrates that the laboratory is not exclusively social: the laboratory and its products move out into the world and effect change.
This point is passed over by Murphy, however, and the social is given dominance over the objects of study themselves; Murphy becomes representative of Latour’s “social science”, “[purifying the] network of any object” (Latour, 1993, p. 5). This is a function of the approach Murphy took initially: to explore laboratory-world interactions was, in this case, to explore the way in which science, claiming to be objective, is actually socially constructed. It was to marshal evidence to that front and to leave out the equally important reading that Young hinted at; it was to leave the Gordian knot severed rather than to retie it.


Emily Martin might offer a kind of prescription for this particular affliction. Martin’s “Anthropology and the Cultural Study of Science” offers a number of metaphors that illustrate the ways in which the social sciences might retie the knot. The concept of the Citadel forms a major piece of Martin’s paper, and it is worth exploring briefly in relation to the problem of the social scientist. The Citadel represents the image of science that STS scholars have tried to break down: a great autonomous building, untouched by the world and yet reaching out of its impenetrable walls to effect change in every aspect of life (Martin, 1998, p. 26). Culture, Martin suggests, is the key to breaking down these walls, or at least re-imagining them. Culture’s shared pasts, beliefs, and values bring into view the way in which “the walls of the Citadel are porous and leaky”, allowing “action and initiative to go in both directions” (Martin, 1998, p. 30). If the problem Murphy ran into is an inability to deal adequately with the influence the objects of science have, then Martin assures us that it can be brought back through the porous walls. This is the move that Murphy needs to make: to turn to the crack Young found in the wall and exploit it; to demonstrate that while objects are coming in and constructing life within the walls, there are yet more coming from another Citadel of knowledge, itself warranting treatment.


The Rhetoricist

In a beautiful examination of the rhetoric of the scientific enterprise, Ira Goldman presented an argument about the way in which language gets used in the laboratory. Goldman notes that the social lives of scientists pervade even the language they use to do their work. The laboratory under investigation uses “bands” to mark birds considered “pairs”; Goldman notes that these bands “can be seen as one of the most widely recognizable signs of monogamy: a wedding band” (Goldman, 2009, p. 9). There is a linguistic and cultural heritage bound up within a wedding band, and it is Goldman’s observation that this opens up a range of questions concerning the extension of that heritage to the research being conducted in the lab. Goldman navigates through this linguistic complex to arrive at an argument regarding the nature of our understanding of monogamy. Monogamy, notes Goldman, appears in scientific studies of birds precisely through the linguistic legacy that scientific rhetoric brings with it. But what of the incredibly complex and detailed cultural accounts we have of monogamy? What kinds of realities are being created in the lab when something as human as monogamy is engaged with while examining birds? Goldman stops short of answering these questions; he “[purges his] work of any undue adherence to reality…or…power plays” (Latour, 1993, p. 5). What vision of monogamy dominates the power structure inherent in a university laboratory that is hierarchical by its very nature? Does this dominant vision structure the method and objects of investigation? How? There is no answer to these questions. The target was not the power relations, or the realities that might lie behind the language; Goldman’s interest was in “[retaining]…discourse and rhetoric”, not in those extant questions regarding power relations, objective realities, and extensions beyond the laboratory (Latour, 1993, p. 5).
Nor was Goldman’s group devoid of such concerns; indeed, Meer Islam was predominantly interested in how facts move outside of a laboratory and to what degree a fact could be understood as being true (Islam, 2009, p. 5). The concerns that Islam brought to light are precisely those things that Goldman did not address: the reality of fact production as it functions outside the laboratory. Islam notes that we often “embrace” scientific facts with “excitement”, forgetting that seemingly objective facts have traveled a great distance, and at great pains, to achieve the autonomy they have over us (Islam, 2009, p. 5). Somewhere between these two accounts, between a linguistic analysis and a social analysis, lies the real truth about facts; a truth that might only be accessed through a union of these two approaches, through a tying of the Gordian knot.


That textual analysis might obscure the realities and power relations that lie behind the words is merely a choice of approach. Indeed, in Joseph Dumit’s “Picturing Personhood” we find an exemplar of the textual approach that explicitly takes those realities and power relations into account. Perhaps the best way to access Dumit’s approach, in this context, is through his suggestion to his reader. Dumit pleads with his reader to become aware of the way in which “objective self fashioning” operates, and further to take command of it (Dumit, 2004). Objective self fashioning does indeed point to the ways in which our identity is bound up in objective facts, facts developed in laboratories. Another “perspective” that Dumit invites the reader to explore is the way in which our selves are “fashioned by us out of the facts available to us through the media, and these categories of people are, in turn, the cultural basis from which new theories of human nature are constructed” (Dumit, 2004, p. 164). Dumit takes one of the most innocuous and mundane of rhetorical classifications, that of being “normal” (or not-normal). He demonstrates not only that the walls of Martin’s Citadel are porous, allowing the social in to construct the very instruments and practices that give us the PET category of “normal”, and allowing those objects to move into wider realities like the media, but that these movements imply new kinds of persons and new kinds of power relations of our own making. Dumit moves seamlessly from a textual analysis, to a social analysis, to a cultural analysis and back again; truly this is what it might mean to tie our Gordian knot.





The Epistemologist

A final example, derived from my own work, should quell any reservations one might have about this process. Richard Gignac selected a “textual analysis” as his initial line of questioning. His analysis is, in Latour’s scheme, epistemological: he examines the ways in which knowledge gets made from the ground up. With a focus on concepts and analytic tools derived from Sharon Traweek and Bruno Latour, Richard provides an analysis of the changes in laboratory studies that contemporary laboratories might force upon the analyst (Gignac, 2009)[6]. What is noticeably missing from Richard’s analysis is any reference to socially mediating forces. Shaharyar Javed, a member of Gignac’s group, noted that the scientists actively working in the lab have “full confidence that figures spewing out of the inscription devices are accurate” (Javed, 2009, p. 10). Concerns and beliefs about the nature of mechanical objectivity might help paint a more vivid picture of the process by which the laboratory has extended its reach through another technology in which great faith has been placed: the internet. Gignac’s argument relies upon describing the “new actant as a crucial part of modern science” (Gignac, 2009, p. 15). The choice to do textual analysis, from an epistemological perspective, has resulted in the jettisoning of social, political, and rhetorical forces as analytic subjects. Nowhere does Gignac discuss the nature of the technological actor he is mobilizing to support his claims about contemporary laboratory studies; this should come as no surprise. Gignac was working on a textual analysis that explored the differences between the laboratories Latour and Woolgar had studied and the contemporary laboratory that served as a site for his own research. The choice of a particular approach has left a potentially fruitful avenue of investigation out of contemplation.
The first Alexandrian move has come back to haunt Gignac, leaving questions about these new technologies to another analyst with a more appropriate approach; another analyst who will leave the Gordian knot untied.


Does epistemological inquiry necessitate ignorance of the rich analytic landscape that technologies provide? Bruno Latour and Steve Woolgar give us good reason to think quite the opposite. In Laboratory Life, and in particular chapter 2, Latour and Woolgar demonstrate that “by pursuing the notion of literary inscription, our observer…can explain the objectives and products of the laboratory…he can…understand how work is organized and why literary production is so highly valued” (Latour &amp; Woolgar, 1986, p. 87). It is not just that, as Gignac exemplified, laboratories have changed the way literary production is done by a variation in scale; it is that the internet and the new practice of inscription need to be re-evaluated in order to understand what changes in the organization of work, the objectives, and the products of the laboratory might have occurred. It is by paying attention to the way in which changes in a crucial material device effect changes in the whole organization, practice, and results of science that Latour and Woolgar tighten the knot; it is in the method and not the content.


The Limits of Analysis

This is not a point of criticism, but rather a description of the function of approach. It is to affirm that what we set out to do has a constraining effect on the conclusions drawn; a bifurcated and limited path of investigation, regardless of the capacity to follow other paths, results in a limited view of the way in which the laboratory and the world interact. It is to do exactly what Latour predicts: to use, for example, the social sciences to write out the role of the object, leaving us with some abstract concept of a socially constructed entity devoid of any conceptual or material extension into the world. This might be considered a reflexive point about science studies in general; as scientists bring with them perspectives, beliefs, and experiences, so too does the analyst carry a great number of preconditions.


This paper would be ill-informed if it did not take into account a beautiful, and much shorter, exposition of the very problem we are dealing with by one of the project’s participating members. Meer Islam, in crafting his analysis from his group’s raw data, noted that after reading his group’s work there were “a lot of things missing from [his] work” (Islam, 2009, p. 5). Islam mentions the work that Marina did on the machines in the laboratory, noting that a “better idea of what was going on” came from expanding his perspective (Islam, 2009, p. 6; Marina, 2009). Similarly, another group member attended to questions about the products of the laboratory, which helped Islam connect ideas about objectivity, fact production, and social influences (Islam, 2009, p. 6). This is precisely the action this paper urges its readers to take: to understand science as eluding the analytic eye of any one discipline or approach, and to extend our understanding of knowledge making past the scientific enterprise and into the entire gamut of human life.


The limits of analysis, then, are to be found not in what we cannot attend to because of our line of inquiry, but in the degree to which we can recognize and include approaches within that inquiry. What else is Jenny Reardon asking of our scientists, when she confronts them with the co-production paradigm, than to turn their curious and scrutinizing eye toward all the relevant extensions that knowledge has in the world? What else do Donna Haraway’s situated knowledges and feminist objectivity demand than recognition of all those intermediary connections between the subject and the object, between the laboratory and the world? It is not enough merely to understand science in this way; it must be a point of action. It must be an action that the scientist, the science studies scholar, the intellectual, and the layperson recognize and, more importantly, put into practice.



There is no single report from the field that will adequately capture the work done in laboratories, for they all suffer from a similar bifurcation, by virtue of discipline, administrative constraint, or whatever else might plague the scholar. This is not to put forward a nihilistic view; rather, it is to point to the entire body of work that has been generated for this project when looking for answers. It is only by recognizing the way in which each analysis plays an integral part in making up scientific practice at York University that this project can be truly understood. The laboratory study, the scientific controversy, the world, the language of science: these are categories that the modernist in us wants to defend, but we have never been modern. Our very existence as a people is a messy grouping of laboratory-defined, scientifically justified, controversy-laden, globally and socially (whatever this might mean) influenced, and linguistically contingent forces; how could we ever expect to treat these most intimately connected forces (and how many others?) holistically? The power of this argument lies not in what it does for us directly, but in the doors that it opens. It is in the realization that no approach, no discipline, no method can hope to capture science in all of its brilliant spectacle; for none could capture life in such a manner. And make no mistake: it is life, all of it, that the science studies scholar confronts in even the simplest of scientific objects.






































[1] See Latour, B. (1993). We Have Never Been Modern (C. Porter, Trans.). Cambridge: Harvard University Press (chapter 1.2).

[2] The Latourian argument is obvious. One might also find Reardon, J. (2004). Race to the Finish: Identity and Governance in an Age of Genomics. New Jersey: Princeton University Press, to provide a methodological answer, much in the same way as this paper does, in the form of “co-production”, obliterating categorical separations in knowledge production (for our interests, the problem of appealing to categories). Similarly, a more pragmatic answer might be found in Dumit, J. (2003). Picturing Personhood: Brain Scans and Biomedical Identity. New Jersey: Princeton University Press. Dumit argues for an engagement in the “objective self fashioning” with which we are entangled. This example is, however, only analogously representative of a solution to the aforementioned problem. (It would be a mistake to take ANY of these examples to be more than ways to access a general idea in this context.)

[3] It should not be understood that “ignoring” here means an act of scholarly deception. It could be more accurately understood under the framework of Michel Foucault’s “episteme”. See Foucault, M. (1994) The Order of Things. Vintage, for extended use of episteme; in our context it is meant to imply a lack of possibility of engaging in particular avenues of discourse due to strictures in the current ones.

[4] Again the point must be made that it is in the nature of administrative constraints, and not individual character, that analysis comes to avoid particular topics (the same is true for all examples following).

[5] The implications of Young’s analysis are particularly interesting in this context. The reader might be interested in Robert E. Kohler’s “Crossing the Threshold”, from Lords of the Fly: Drosophila Genetics and the Experimental Life (University of Chicago Press, 1994), pp. 18–52. In this chapter the movement into and out of the laboratory is described beautifully and has particular resonance with the movement of research that Young is discussing here; in Kohler’s case the fly serves as the moving object.

[6] This should remind the reader of the current work, which puts forward a similar methodological argument on a more generalized scale.




*All references to student analyses (Murphy, Young, Goldman, Islam, Javed, Gignac, Marina, and Anonymous) scientific papers, interviews, and observations are protected by anonymity; please contact Natasha Myers for more information.


Dumit, J. (2003). Picturing Personhood: Brain Scans and Biomedical Identity. New Jersey: Princeton University Press.


Latour, B. (1993). We Have Never Been Modern (C. Porter, Trans.). Cambridge: Harvard University Press.


Latour, B., &amp; Woolgar, S. (1986). Laboratory Life. New Jersey: Princeton University Press.


Martin, E. (1998). “Anthropology and the Cultural Study of Science”. Science, Technology, &amp; Human Values, 23(1), 24–44.


Reardon, J. (2004). Race to the Finish: Identity and Governance in an Age of Genomics. New Jersey: Princeton University Press.


Félix Guattari was a French philosopher (and many other things). Guattari wrote a beautiful, and surprisingly short, essay entitled “Like an echo of a collective melancholia” that was largely a response to the notion of violence, and the use of violence by “urban guerrillas” in revolutionary contexts. His essay deals with the Red Army Faction and their 1970s guerrilla tactics (kidnapping, public murders, hijacking of planes, etc.). If you are interested in political arguments revolving around violence, Frantz Fanon is an amazing source, and indeed the RAF in their manifesto write very much in the tradition of Frantz Fanon (although they are notably prescriptive in their outlook on violence, where Fanon is much more descriptive of its place in revolution). The earlier arguments surrounding violence are not what concern me, Dumit’s text, or our class; Guattari’s message is.


The RAF argue that because capital pervades every part of the globe and has an equally oppressive character everywhere, every part of the globe is a legitimate battleground, and every place is as good as any other for a violent resistance. Guattari turns this guerrilla ideology on its head: he agrees that capital is indeed global, but in stopping there, in only recognizing its spatial extension in our world, the RAF have missed out on the other ways in which the rest of the world interacts with the global phenomenon of capital, particularly its political interaction with popular media. If a binary power relation exists between the native/settler, capitalist/worker, etc., and this relation is maintained by force (legal boundaries, actual physical force, guns, etc.), then the only way to fight back against that oppression becomes force itself (Fanon’s legacy). Ok…so far so good…Guattari agrees that force indeed seems like the only way to mount an offensive against an oftentimes more powerful oppressive state. But, Guattari asks us, what is it that violence ends up doing once used?


First, Guattari notes that the choice to use violence is itself not a free choice, but represents an entrenchment in the very system you are trying to fight. If an oppressor uses force, and demonstrates that force is the law of the land, then any use of force, even against an oppressor, positions the revolutionary within the predefined categories of the oppressor (i.e., using violence is the only choice you were given by the oppressors, and using it is giving in to those oppressors). Second, Guattari notes that capitalist (imperialist, nationalist, colonialist…it doesn’t matter) media outlets pick up violence and use it to their advantage. The more familiar example might be 9/11. The violence that was used against the US was picked up by the US media, and those involved became publicly demonized; the media used the violent acts to further define the us/them, native/settler, capitalist/worker, normal/not-normal lines. No great number of US citizens were sitting around thinking “we hate freedom fighters, and we love oppressing others” (one possible depiction of “terrorists’” intentions), but a symbol like the twin towers falling is such a powerful semiotic device that, for the general public, it is enough to embolden the lines of “us or them” and lend popular power to a war that seeks to eradicate terrorism…rather than eradicate freedom fighters (however anyone chooses to see either side is of little importance here). The point Guattari is making is that, by ignoring what Jenny Reardon would have called a co-production of knowledge and power (the knowledge that we can strike back with violence…and the power relations that come bound up in it in the form of media politics and their capacity to further categorize people), the RAF’s violent tactics simply give general assent to a revolt AGAINST the RAF.
Simply stated, those people who attacked the US were neither terrorists nor freedom fighters, any more than the US was, until the media was able to construct those categories by capitalizing on the violent visuals there produced.


If violence only leads to a further polarizing of beliefs, if it only gives more strength to the oppressor through support, then violence should simply be jettisoned as a means toward revolution. What, then, is there to do? Guattari’s answer is that films like “Germany in Autumn” provide the context for real political action, real revolutionary action. What a film like Germany in Autumn does, says Guattari, is offer, in its stark realism, “lines of flight” from the binary distinctions forced on people. Where capital serves to say “you are either with us or them”, where the US media says “you are a terrorist or you are fighting terror”, where Dumit has shown the public reception of PET scans to say “you are either normal or not normal”, Guattari says we are in need of a new way to think, of a line of flight from those most constricting and empty of choices. Instead of accepting one of the two polarized views on the RAF’s actions (namely, they are bad or they are good…violence is bad or it is good), the film shows how people tend to really act, before they are distilled into the more familiar positions discussed above. The characters spend scenes getting sad, angry, horny, drunk, high, and every other state imaginable that humans might react with; these are the lines of flight; this is what it means to be a real revolutionary for Guattari. If you can’t use violence, because it is the oppressor’s designated method of resistance for you, and it only serves to strengthen the oppressor and kill the revolution…create a Germany in Autumn…refuse to choose between their categories…develop a new line of thought.


I will offer two examples that I find particularly illustrative of Guattari’s method before I link it to our work. The famous beat poet Allen Ginsberg (see Howl) was an openly gay man. His homosexuality permeated his work, lectures, speeches, and obviously his personal life. The interesting thing is that what Ginsberg and his partner Orlovsky’s work embodies is an artistic expression, in the mode Guattari has pointed to, of a line of flight from old, tired categories and modes of speech about homosexuality. They offer something more open, something real, on the topic of homosexuality. It was just after Ginsberg’s graduation from high school that his recognition of his sexuality occurred. This also happens to be a time, the early 1940s, when a great rise in homosexual activism occurred, following World War 2. The point, in Guattari’s view, would be that these types of expression (of which there were many more, and in many forms) open up for the public what Foucault might call a new episteme; they make possible new ways of thinking and speaking, they make possible a political revolution without the pitfalls of violence. This process, or idea, might better be captured in a quote I was recently made aware of by the German composer Stockhausen. After 9/11 Stockhausen was asked what he thought of the attacks. Rather than play to the predefined categories of “it’s horrible, kill the terrorists”, or the classic “it was an inside job”, rather than play to the native/settler, oppressor/oppressed categories, Stockhausen said that it was one of the most powerful pieces of art ever made (the image of the towers, he means). The point is that you can’t fit Stockhausen’s comment into those other two categories; it represents a line of flight FROM those categories and opens up something new to think about and do; it offers a revolution.


So how are we going to connect this to Dumit’s book?


There are two ways that I find compelling and would like to share with you. I believe that Guattari's arguments (which do EXCLUSIVELY concern revolution and politics) have extension past merely political contexts. I believe that Guattari has pointed to a means to any type of revolution. The notion of violence can easily be made into the notion of "force", or any overt and oppositional push toward a new way of thinking. If this leap is made, I soon find myself reflecting on what it would mean to bring change to any part of human life, be it educational, intellectual, or scientific. Throughout his entire book, Dumit is wrestling with the dangers, problems, and benefits of PET depictions of normalcy, of personhood, of our objective selves. Dumit pleads with his audience to recognize not only that these categories are being constructed on uncertain ground, but that we are involved deeply in the way those categories turn out. What else are we concerned about than the way in which ideas of normal/not-normal structure power relations (that then move into courts, public discourse…etc.…a la Dumit), and the way in which those categories don't capture the reality of the situation? If we are faced with a system that says "it is us or them", a system that says "it is normal or abnormal", what tools might we have at our disposal to bring about a revolution? I believe that Joseph Dumit's book is an intellectual tool for revolution in the strongest Guattarian sense. I feel that the book represents, in its descriptions of the making of brain images through to their uptake in popular discourse, a line of flight from the stifling categories developed therein. If objective self-fashioning is what Dumit urges us to engage in, to recognize that there are a great number of categories, not just dominant ones, at work that we might choose from, then Guattari would no doubt say Dumit's piece is nothing short of an intellectual revolution.
It forces us to consider the chaotic, not so well ordered world that lies at the intersection between science, engineering, public discourse, personal psychology, social ideologies, politics, power and the whole lot; it responds to the affirmation "you are normal or not normal" with a resounding "oh yeah?".


But I think the revolutionary metaphor can go further. I would point with great enthusiasm to the entire discipline of STS (or science studies) as a masterful and important line of flight. It is important not for what guarantees it makes, not for which categories it can tell us are clean and palatable, but precisely because it explodes the notion of those categories and indeed the very NEED FOR those categories. Perhaps I was simply feeling nostalgic as I was brushing over my "Intro to Science and Technology Studies" class notes from years ago, perhaps I want to intrigue more minds with STS-inflected thought; what I do know is that, for me, STS has functioned in much the same way as Guattari's expressive revolutions; it has taught me about the need for critical inquiry over categorical acceptance, about contextual understanding over content-based regurgitation.


This is not, however, the end of the story. We cannot, after all our readings in this class that bring to light power relations and epistemological paradoxes, fall into complacency. Guattari's final lesson is one of impermanence: never stop running to a new line of flight…even when you're standing still – keep moving. It is a mistake, on Guattari's account, to stay in one place, to stay with the Ginsberg mode of expression about sexuality, to rest comfortably in Stockhausen's refreshingly different understanding of a "terrorist" attack; we must take from where we are what we can, and move on – we must ALWAYS look for a new line of flight that might help us hurdle the encumbering nature of stagnating thought. Already in the realm of STS there are whispers of lines of flight all about. Professor Myers' work on visual cultures and practices, reconceptualized thoughts about technology….whatever they are…they are new….they bring with them something new….and they promise nothing more than a way out of the drudgery of categorical imperatives. Never stop running, even when you're standing still.




William Clark’s “Academic Charisma and the Origins of the Research University” is an exploration of the slow progression from traditional to modern conceptions of the academic world and the academic persona. Based on a reading of Max Weber’s three forms of authority, Clark explores the bureaucratization and commodification of the academic world, two forces he describes as “the twin engines of the rationalization and disenchantment of the world” (Clark, 3). The characterization Clark gives of the early academic world is one defined by nepotism; a place where judiciary and ecclesiastical disciplines held the greatest sway, and a place where the oral dominated the written word. Chapter by chapter, Clark uses curious, perhaps mundane, pieces of material culture from universities to explore the development of the research university. Lecture catalogues, paintings, visitation tables, and library catalogues all become interesting sites of investigation in Clark’s adept hands. Modern universities, on Clark’s understanding, are those defined by the triumph of “the visible and the rational” over the “oral and the traditional” (Clark, 3). In the modern, market driven university, managed by the cold quantifying ministries of the German state, oral culture seems to have been replaced by a focus on the written word or number. Disputations were replaced by dissertations, the examination lost its oral components, dossiers on professors were collected, and grading systems were implemented; academics and their authority “would be manufactured by publications and written expert or peer review, instead of by old-fashioned academic disputational oral arts, unsubstantiated rumors, and provincial gossip” (Clark, 29). Academic oral culture, according to Clark, would not be done away with entirely; traditional and oral culture is preserved in a number of managerial practices and continues to inform (for better or worse) the lives of academics. 
Engaging early modern German, English, and Jesuit institutions, Clark's book is an exhaustively researched, well-thought-out, and tactfully written piece of intellectual history. Check it out if you get the chance.

The aim of this brief paper is to investigate the way in which a particular disease changed in its classificatory distinctions over the twentieth century, and to offer a preliminary explanation of that change. Making use of the "Index-Catalogue of the Library of the Surgeon-General's Office, U.S. Army" from the National Library of Medicine[1], a close reading of changes in classification will lay the groundwork for a reasoned hypothesis regarding the mechanisms of those changes. The viral disease herpes zoster, commonly referred to as shingles, will be the focal point of this discussion. Zoster and the accompanying varicella virus are well represented in the history of virology and continue to be of interest in contemporary biomedicine[2].



The index catalogue indicated above is organized around five series, each chronologically distinct. Series 1 (S1) covers the period between 1880 and 1895, series 2 (S2) between 1896 and 1916, series 3 (S3) between 1918 and 1932, series 4 (S4) between 1936 and 1955, and series 5 (S5) concludes with a period extending from 1959 to 1961[3]. While this paper is concerned with classifications found in all five series, the classificatory shift to be discussed is located in series 5. The search keywords "zoster" and "herpes zoster" yield identical results. The commonplace "shingles" was also attempted, yielding several case studies but no classificatory indexes. Inferences will be drawn from the change in classification identified below. Additionally, discussion will draw on the chronological structure of zoster classification expressed in the overall number of records per series, as well as the publication records of non-classificatory work[4].




Relying predominantly on the chronology of indexed results, the total number of classifications, type of classificatory terms deployed, and the location of relevant publication, some interesting results have turned up. Figure 1 compiles these factors in a chronological series beginning with the late nineteenth century and finishing with the mid-twentieth century:





Fig 1.

Series 1 (S1)
Total results: 347
Total classificatory categories: 3
Classificatory terms: herpes zoster, frontal, facial, ophthalmic
Publishing by country: France, Germany, Ireland, Russia, United Kingdom, United States

Series 2 (S2)
Total results: 457
Total classificatory categories: 16
Classificatory terms: herpes zoster (2), causes, pathology (2), topography (2), complications, sequelæ, epidemic, infectious, frontal, facial (2), ophthalmic, treatment, recurrent, hysterical, gangrenous, traumatic
Publishing by country: France, Germany, Italy, Poland, Russia, United Kingdom, United States

Series 3 (S3)
Total results: 499
Total classificatory categories: 8
Classificatory terms: zoster, causes, pathology, complications, sequelæ, ophthalmic, otic, treatment, varicella (2)
Publishing by country: Australia, France, Germany, Italy, Poland, United Kingdom, United States

Series 4 (S4)
Total results: 164
Total classificatory categories: 5
Classificatory terms: herpes zoster, zoster, bismuth toxicity, neuritis, ear, cornea, [military section] herpes zoster
Publishing by country: Canada, France, Germany, Italy, United Kingdom, United States

Series 5 (S5)
Total results: 106
Total classificatory categories: 8
Classificatory terms: herpes zoster, complications, cerebrospinal fluid, case reports, etiology, pathogenesis, epidemiology, therapy, zoster
Publishing by country: France, Germany, Poland, Switzerland, Uruguay, United States

* This table is exhaustive in its treatment of total results, total classificatory categories, and classificatory terms. For practical reasons (scope of project, focal point of classificatory shift, and lack of definitive information) the publication of dissertation or article by country is meant to highlight the cosmopolitan nature of research pertaining to zoster, rather than to represent a comprehensive list of participating countries. In the case of series 5, where the major components of the classificatory shift are found, the countries listed are covered in their entirety, and discussion will center on the distribution of research within those countries.


Classificatory Shift


With respect to the classificatory terms deployed through the index, it is possible to locate a classificatory shift in S5. However, it will be important to implicate the previous four series in the shift, as both the chronology and the differences in publication distribution appear relevant. Series 5 demonstrates a shift in classificatory structure through its invocation of causative agents and reductive scientific discourse. Evidence for the mechanism hypothesized below is drawn from the localization of publications in relation to the proposed ideological and methodological influences.


S1 is, in terms of its classificatory terms, the simplest of the five series. Organized principally around an anatomical rhetoric, S1 provides the background against which a shift occurs. In S2 the first appearance of "causes" is manifest; interest in anatomical as well as methodological and clinical classifications is apparent. The appearance of "pathology", "complications", "sequelæ" and "epidemic" are the best clues to these new dimensions: a concern for the mechanisms of disease as well as for ways of understanding its spread. In S3 one finds many of the same terms, but a crucial addition has been made; the virus "varicella" is now part of the causative, pathological discourse. It is with S4 that the most perplexing part of zoster's classificatory history is found: just as causative agents, changing conceptual understandings of disease distribution, and clinical observations become of interest, there is a sudden drop both in the number of classificatory categories, a mere 5 against the previous two series' 8 and 16 respectively, and in the volume of relevant research interest, less than even S1 (see Fig. 1). Perhaps more alarmingly, there are now no mentions of causation, of infectious agents, or of clinical experience save for the anatomical classifications. It should be mentioned here that the familiar classificatory category "herpes zoster" is now shown with a bracketed "military section"; this may give some clue as to why fewer causative, or laboratory-determined, classifications are represented in S4 (see Fig. 1). The final series, S5, is where this paper locates the culminated classificatory shift in zoster. The word culminated should here be understood to imply some degree of continuity, rather than the stark contrast a shift might otherwise imply. S5 is characterized by its causative, laboratory-derived classifications; terms like "pathogenesis", "etiology", and "cerebrospinal fluid" should make this apparent.
Cerebrospinal fluid, a product of surgical operation and an object of laboratory inquiry, indicates a reliance on scientific determinations, rather than strictly clinical observations. Indeed, pathogenesis and etiology call attention to both the mechanism and origination of disease, respectively. Characterized in this way, it remains for this paper to speculate as to why these changes culminated in a classificatory discourse of reductive causation. The following section will make use of secondary scholarly literature in order to furnish speculation with evidence.


Chronologically, the story of zoster's classification changes and settling point in the mid-twentieth century might well be read as a story about the legacy of prominent German and French science. Both Louis Pasteur and Robert Koch might form the ideological and methodological foundations for an explanation of the shift in S5. S1 begins in 1880 and ends in 1895 and is entirely free of causation, agency, or laboratory research in its classificatory scheme. The first whispers of causative classification in S2 can perhaps be attributed to these ideological and methodological influences. One could not find a more prominent French scientist at this time than Louis Pasteur, and the often-discussed 'germ theory of disease' is perhaps his greatest legacy. It is toward this legacy that this paper would like to first gesture. One need not be inclined to think in purely deterministic terms about Pasteur's germs to allow at least that the experimental and demonstrative character of the pasteurization process had an economic, political and social importance to the French nation. Indeed, Bruno Latour has demonstrated exactly this point, that "to study Pasteur as a man acting on society, it is not necessary to search for political drives…you just have to look at what he does in his laboratory"; Pasteur's microbial work "adds to all the forces that composed French society" (Latour in Biagioli, 1999: 267). Here it is enough to suggest, in concert with Latour, that French society was becoming well aware of microbial agency. The link here between Pasteur and virology may well be harder to grasp, but it is much more important for the zoster case, as it is through virology, and a particular virus, that zoster comes to be classified in S3.
Writing about the legacy of the Pasteur Institute within the field of virology, Marc Girard notes that from as early as 1876, and for the next half-decade, Pasteur's laboratory was a site of furious research regarding the nature of various viruses (Girard, 1988: 746-747). Importantly, and in the name of linking Pasteurian causative germ theory to zoster and virology, the prevalence of research attempting to distinguish between types of pox is crucial (Girard, 1988: 747). This is particularly important as the history of zoster is tightly woven with the history of smallpox. Thomas H. Weller, a prominent American virologist and Nobel laureate, published a small history of research into herpes zoster and notes that in the early twentieth century, the pioneering work centered on disagreements about the similarity of the virus responsible for smallpox and chickenpox (Weller, 1992: s1). Indeed, once the virus had been made distinct, one author could conclude that "the accumulation of epidemiological and laboratory evidence in support of the hypothesis that a single etiologic agent is responsible for varicella and herpes zoster appears so impressive that the burden of proof must now shift to those who desire to refute the monistic concept" (Weller & Witton in Arvin & Gershon, 2000: 17). The rhetoric is itself reminiscent of the terms used in the S5 classification, and indeed Weller and Witton were speaking in 1958 of these epidemiological, etiological, evidentiary, causal concerns; a mere year before S5 would turn to the very same classificatory categories. The early connection, then, between smallpox and chickenpox, and between Pasteurian causative agents and various forms of animal pox, gives rise to speculation about why, during the late nineteenth and early twentieth centuries, S2 may have gained a causative classificatory category.


Still, more ought to be said about the reductionism implicated above in the history of zoster's classification. What Pasteur had shown with regard to spontaneous generation belonged to the province of causative germs and not infectious disease, and again a link needs to be made in order to proceed. Robert Koch's early concerns in the 1870s were precisely about infectious disease and determining "where…they come from and how they get about" (Bos, 1981: 92). Indeed, Koch's famous postulates, perhaps the example par excellence of mechanical reduction in biological science, were derived from these concerns, as well as from the serial inoculation of animals and the microscopic appearance and absence of various microbes in the blood (Bos, 1981: 92). Koch's postulates are reproduced below as they appear in Bos (1981):




The rhetoric here is important as it bears a relation to S3 and S4. Isolation, purity, reproduction, re-isolation; these are terms that share a linguistic heritage with the modern reductive discourse. The point, again, is not to suggest that Koch’s postulates determined a scientific practice or methodology, but to highlight the ways in which they were taken up by researchers (and sometimes resisted or made problematic) as a path toward “artificial reproduction” and “the modern analytic approach to nature” (Bos, 1981: 94). One can begin to speculate then, with causative and reductive ideological foundations in mind, how the importance of causation, in S2, and artificial manipulability of a viral agent, in S3, might go some distance to explaining the classification of zoster in S5.


This interpretation is still left with the tricky case of S4 and its military dimension. Without further investigation of the effects of wartime on research, as would be necessary to do justice to the historical period, one might speculate briefly about how war has been written about in the basic-research context. S4 begins in 1936 and ends in 1955, effectively bracketing the years of the Second World War. The wartime displacement of scholars is a well-documented process in which scientific research has been seen to suffer in certain respects[5]. While it is true that in many other respects scientific research was given capital and authority it had never experienced before, this seems like a logical starting point for anyone interested in exploring the decline in research into zoster, as well as the change in its classification. Again, the point made above about S4 was that it lacked the causative and experimental or analytic categories that had been making their way into the classificatory scheme. It might be possible to examine the ways in which basic research may have taken a back seat to practical therapeutics, resulting in the exclusion of particular classifications in favor of those found in Figure 1. The concern of this paper, however, is with S5.


The reader may well ask here after the quality of an argument that positions Koch and Pasteur as deterministic influences while claiming to dispel that determinism with some carefully chosen words – and they would be right to do so. However, in a slightly more speculative capacity, this paper is tempted to explore the geography of publication present in S5 as a link between the early twentieth-century developments discussed above and the mid-century classificatory terms deployed in S5. Going through the 106 index references individually, a strange picture begins to emerge. Where, in S2 and S3, there had once been international representation with regard to publication, in S5 one begins to see a narrowing of geographic interest in zoster. Indeed, in S2 and S3 each country listed is represented a minimum of ten times in dissertation and publication material (the only exception being Australia)[6]. It is curious, then, that in S5 there are only 5 countries in total (the listing for S5 is exhaustive of the index results) and that the published material belongs overwhelmingly to German and French institutions. Indeed, out of 98 entries (the missing 8 representing classifications), 56 are attributable to German institutions, 37 to French, and 5 are shared by the remaining three countries. While an inquiry far too broad in scope for this paper, it is a measure of national and academic importance that Paris and Berlin in particular, the respective institutional homes of Pasteur and Koch, are so well represented. Further inquiry might well investigate the links between advisors, department heads, and seemingly anomalous cases like those of Uruguay or Canada in order to determine how and why classifications might have taken on or resisted a national social, political, or academic character. Here it is enough to leave the reader with a speculative hypothesis regarding the shift in herpes zoster's classificatory categories in the mid-twentieth century.
Causative and reductive discourse and method, produced nationally a half century earlier, may well have been part of the mechanism for the classificatory shift that can be seen in the history of herpes zoster.




Arvin, A., Gershon, A. (Eds.) 2000, Varicella-Zoster Virus: Virology and Clinical Management. Cambridge: Cambridge University Press


Biagioli, M. 1999, The Science Studies Reader. New York: Routledge


Bos, L. “Hundred Years of Koch’s Postulates and the History of Etiology in Plant Virus Research”. The European Journal of Plant Pathology, Vol. 87, May 1981, pp. 91-110


Girard, M. “The Pasteur Institute’s Contribution to the Field of Virology”. Annual Review of Microbiology, Vol. 42, 1988, pp. 745-763


NLM's IndexCat Quick Search Page. Accessed October 13, 2011


Weller, T. “Varicella and Herpes Zoster: A Perspective and Overview”. The Journal of Infectious Diseases, Vol. 166, Supplement 1, Aug. 1992, pp. s1-s6

[1] See, retrieved 10/02/2011

[2] See Weller, T. "Varicella and Herpes Zoster: A Perspective and Overview". The Journal of Infectious Diseases, Vol. 166, Supplement 1, Aug. 1992, pp. s1-s6, for a discussion of the scientific development of knowledge about the virus as well as the outstanding problems that remain. Weller, a Nobel laureate, worked extensively with varicella-zoster and was responsible for the isolation of the virus in the early 1950s. Page s2 outlines Weller's work on isolation, while s4-s5 describe outstanding concerns and directions for further research.

[4] Non-classificatory work here refers to dissertations, case studies, articles and all other indexed items. Information is not uniformly available for all records and as such publication data will be drawn exhaustively from series 5 while only a smaller sample can be accurately and practically drawn from series 2 and 3. See Figure 1 note.

[5] See Kaiser, D “The Atomic Secret in Red Hands? American Suspicions of Theoretical Physicists During the Early Cold War”. Representations, vol. 90, University of California Press, 2005: 28-60 for an example of how wartime events can lead to the distrust and displacement of scientific experts. While the Cold War is certainly a different beast than WW2, the arguments may well cross those boundaries.

[6] The only exceptions to this are Australia (2 records), Canada (1 record), Ireland (3 records), and Italy (3 records in S2, 5 records in S3). Additionally, it should be understood that S1-S4 were not searched exhaustively, as the number of records and the inconsistency in data made that an impractical task. S5, however, is completely represented, and the arguments made about the geography of publication are not taxed too greatly by this asymmetry.

Disagreement in publication is a hallmark of Science and Technology Studies (STS) scholarship. Clearly articulated positions in scholarly journals often inspire responses that take aim at those positions and work to explore perceived inconsistencies, ineptitudes, or impracticalities[1]. Indeed, it is easier to find disagreement among scholars than it is to find anything resembling consensus. It might be argued that disagreement can foster discussion and be a productive, intellectually rigorous activity. While this relationship may in some cases describe scholarly combatants, controversies often move out into the field and take on the character of a debate[2]. The rhetorical force of the "debate" metaphor is one that demarcates winners and losers, the right analytic tools from the inappropriate ones. This brief paper will respond to a recent publication dealing with one such disagreement within the field, and attempt to reorient the seemingly opposed sides into a constructive discussion. In doing so, this paper seeks to contribute to a body of work that aims to eliminate the perpetuation of a win/loss dichotomy in the field, and to allow as many of the relevant concerns, on both sides of the discussion, to be fairly addressed[3].


Scholarly Combat


A recently published article in Social Studies of Science, entitled "Models of democracy in social studies of science" by Darrin Durrant, addresses a debate that is still very much alive in STS. Durrant's article addresses the confrontation between "Sheila Jasanoff and Brian Wynne, on one side, and Harry Collins and Robert Evans, on the other" (Durrant, 691). The debate is, on Durrant's account, between "opposing normative sensibilities within STS" (Durrant, 691, emphasis added). On one side of the debate is Collins and Evans' call for a "Third Wave of Science Studies" that takes into account the "social constructivism" of the second wave while addressing a problematic they term the "Problem of Extension" (Collins and Evans, 2002). The issue at hand for these authors is that, while the second wave of STS succeeded in solving the "problem of legitimacy", by demonstrating that legitimate authority did not inhere merely in "good scientific training" but rather in the retrospective attribution of expertise, it now encounters the "problem of extension" (Collins and Evans, 2002: 239-240). Extension is problematic precisely because of its limits, or rather its lack of limits. In light of contemporary STS work, it became clear not only that expertise could be determined retroactively but that, by virtue of that attribution extending to the lay public, there may be no normative limit to expertise[4]. The third wave, then, seeks to set a normative limit based on a rigid categorization of expertise (Collins and Evans, 2002). On the other side of the debate stands Jasanoff and Wynne's condemnation of the restrictive normative expertise that Collins and Evans construct (Jasanoff 2003, 2005 and Wynne 2003, 2007, both in Durrant, 2011).


Durrant's contribution to this debate is to defend Collins and Evans against the charge of being "anti-democratic" in setting limits on the "public's role" in expertise (Durrant, 692). Durrant relates the two sides of this debate to what he considers a similar controversy in political philosophy between John Rawls, on the side of Collins and Evans, and Jürgen Habermas, on the side of Jasanoff and Wynne (Durrant, 692-693). Specifically, Durrant connects the idea of "public reason", in the Rawlsian sense, versus "ideal speech situations", in the Habermasian sense, to Collins and Evans' "decision-making" versus Jasanoff and Wynne's "deliberation" (Durrant, 692). With this "political lineage" Durrant is able to easily refute the claims of the Third Wave's anti-democratic posture (Durrant, 697-700). A more sophisticated critique, in Durrant's view, is one that is analogous to a political discourse of "identity politics"; one that shares with contemporary STS a focus on "difference" (Durrant, 700-702, 703). By mapping specific arguments that Wynne and Jasanoff make (Durrant, 703-704), Durrant is able to implicate the participatory politics of those arguments within identity discourse (Durrant, 704-705) and to subject them to the largely unanswered criticisms of that body of theoretical work (Durrant, 705-706).


This conceptualization and critique, then, comes to the aid of Collins and Evans, leaving Wynne and Jasanoff to deal with the exceptionally difficult task of settling currently intractable political debates. Durrant provides a skillful critique of a well-known tension in STS between "situations where decisions are wanted" and those in which "facilitating discussion" is desired (Durrant, 710). In the case of the former, boundaries are helpful, while for the latter they are up for debate. For the decision-makers, an "unworkable politics" is the greatest anxiety, while for the discussants the politics of difference, identity, and exclusion are, if ignored, always productive of an unworkable politics (Durrant, 710-711). Durrant, on this issue, is very much in favor of Collins and Evans as an "antidote" to the all-too-pervasive "[splintering of] publics" in STS (Durrant, 711).


Unworkable Analytics?


For Durrant, and indeed for many involved in scholarly ‘debates’, there are winners and losers. In the most charitable case, there are strong and weak positions, and in the most antagonistic there are combatants that fall victim to venomous analytic poisons if the right antidote is not discovered. This is not only a dialog that exists in reflections upon discussions, but is also manifest in the work of the primary actors in those discussions. For the sake of space this paper will briefly address one of the most forceful critiques of the Third Wave, according to Collins and Evans themselves (Collins and Evans, 2003: 436), in order to assess both the level and necessity of controversy.


Jasanoff writes, “the intellectually gripping problem is not how to demarcate expert from lay knowledge or science from politics” but “how particular claims and attributions of expertise come into being and are sustained” (Jasanoff, 2003: 398). Collins and Evans seem perplexed. The two contrast this criticism with Jasanoff’s seemingly contradictory concern for the “integration” of “both strong democracy and good expertise” (Jasanoff, 2003: 398). For a moment Collins and Evans seem content with Jasanoff’s indication that a “pressing problem” has been identified by the Third Wave (Collins and Evans, 2003: 444). Indeed, earlier in their work the two seem happy enough to have begun a discussion:


If 3-Wave helps to turn the attention of the science studies community to the need for a new normative theory of expertise, then it will have served a purpose. If it is agreed that the foundations of the theory must be different to current attributional or constructivist approaches that can be applied only retrospectively, then the paper will have succeeded. (Collins and Evans, 2003: 336).

Is this not, in her call for further work on "how particular claims and attributions of expertise come into being", exactly what Jasanoff has offered the Third Wave (Jasanoff, 2003: 398)? Jasanoff is asking after how expertise comes into being, rather than treating it as thoroughly attributional. On this account, by their own admission, Collins and Evans would surely have to consider this a success. The contrary, however, seems to be the case. Collins and Evans conclude, at the end of the section that addresses Jasanoff's several concerns, that what is needed is "more thought about whether interactional expertise…has a role" in cases where core sciences more directly engage the public (Collins and Evans, 2003: 446). The perplexing question, then, is: would more thought about interactional expertise's place not be facilitated, out of necessity, by further investigation into how particular claims and attributions of expertise come into being? There is no less confusion when it comes to Jasanoff's own treatment of Collins and Evans' work. One need only consider the way in which Jasanoff writes off questions of extension as not "intellectually gripping" enough to warrant interest. Both Collins and Evans, and Jasanoff, appear to talk past one another. Collins and Evans characterize their own project as a starting point, one that seeks to "hammer a piton into the ice wall of relativism", to begin to climb (Collins and Evans, 2002: 240). When Jasanoff then begins to climb, her presumably trustworthy belaying partners simply let the rope slack. Jasanoff is no better a climbing partner, for she sees no need to climb the wall in the first place. Durrant is of little help in the standoff, given his partisan attachment to ice climbing. So many slippery slopes; it is no wonder that these analytic approaches appear unworkable from either side.


For those scholars who have a keen interest in climbing the wall of relativism, and consider it to be of great practical importance, there is hope. Consider the approach Eve Seguin took in attempting to reconcile the famous debate between Bruno Latour and David Bloor, between the Sociology of Scientific Knowledge (SSK) and the Latourian analytic framework of laboratory practice[5]. Seguin argues that the debate is maintained not by a central phenomenon, for which there are two competing accounts, but rather by a misunderstanding of the appropriate phenomenon of analytic focus for each camp (Seguin, 504). Seguin uses familiar terminology in delineating these two sides, demonstrating the way in which SSK holds that “all belief systems are equal in the sense that their credibility is explainable by social factors” (Seguin, 504). SSK thus strips scientific knowledge production of any distinctive characteristic, setting it on equal footing with all knowledge beliefs. Latour, on the other hand, maintains that science is indeed different from other forms of knowledge by virtue of its “laboratory…activity” (Seguin, 504). While SSK works to “shed light on the social interests that condition the formation of scientific knowledge”, thus focusing its attention on “society in science”, Latour is interested in theorizing the “social function exerted by science”, with attention being paid to “science in society” (Seguin, 504-505). These two commitments are described in terms of being “upstream” and “downstream”. In the case of society in science, the work is considered upstream precisely because it seeks to “scrutinize the conditions of possibility of the scientific enterprise” (Seguin, 505). In the case of science in society, the work is considered downstream because it examines the “impact [of science] on society”, “[reproducing] dominant social interests and the existing order” (Seguin, 505). 
Collins and Evans also deploy the notion of up and downstream work, locating their project upstream because it engages decisions “before the scientific dust has settled” (Collins and Evans, 2002: 241). Cast in these terms, Jasanoff, with her concern for how forms of expertise come into being and are maintained, is engaged with downstream work because determinations of being and maintenance rely on the scientific dust having settled.


The takeaway from Seguin’s reconciliation is that science can be a lens for social and political critique without troubling the analytic commitment of using society as a lens for scientific critique. In this way, the Third Wave and the concerns of Jasanoff are not mutually exclusive; they are mutually supportive. While the work upstream can be carefully crafted and passed downstream, the work downstream can serve to ensure the quality of upstream work. We need not decide which camp we ought to belong to or which antidote to take. Nor are we resigned to raw confrontation. Instead, the interrelated spheres of knowledge in which each camp is interested can be made simultaneously workable. This is not merely a plea for us all to ‘just get along’. Rather, this brief response to Durrant’s article asks for open lines of communication, for a constructive analytic posture, and for the hope of a more politically, ethically, and practically just technoscientific world. Much work needs to be done, but “failure to acknowledge the existence of two different objects in [science studies] can only prevent new explorations”; a non-antagonistic conceptualization “is perhaps the only way to do justice to this rich area of research and to allow for a diversity of approaches” (Seguin, 507).




Ashmore, M. “The Life Inside/The Left-Hand Side, The Body Multiple: Ontology in Medical Practice”. Social Studies of Science, 35(5) (2005): pp. 827-830.


Collins, H., Evans, R. “The Third Wave of Science Studies: Studies of Expertise and Experience”. Social Studies of Science, 32(2) (2002): pp. 235-296.


Collins, H., Evans, R. “King Canute Meets the Beach Boys: Responses to The Third Wave”. Social Studies of Science, 33(3) (2003): pp. 435-452.


Durrant, D. “Models of Democracy in Social Studies of Science”. Social Studies of Science, 41(5) (2011): pp. 691-714.


Epstein, S. “The Construction of Lay Expertise: AIDS Activism and the Forging of Credibility in the Reform of Clinical Trials”. Science, Technology, & Human Values, 20(4) (1995): pp. 408-437.


Jasanoff, S. “Breaking the Waves in Science Studies: Comment on H.M. Collins and Robert Evans, The Third Wave of Science Studies”. Social Studies of Science, 33(3) (2003): pp. 389-400.


Kusch, M. “Rule-Scepticism and the Sociology of Scientific Knowledge: The Bloor-Lynch Debate Revisited”. Social Studies of Science, 34(4) (2004): pp. 571-591.


Pickering, A. (ed.) (1992). Science as Practice and Culture. University of Chicago Press.


Seguin, E. “Discussion: Bloor, Latour, and the Field”. Studies in History and Philosophy of Science Part A, 31(3) (2000): pp. 503-508.

[1] See the “Bloor-Lynch” debate, antagonistically characterized in Andrew Pickering’s sectional heading as an argument, in Pickering’s collected edition Science as Practice and Culture (1992).

[2] Indeed, the Bloor-Lynch debate still holds its antagonistic charge 12 years later: see Kusch, M. “Rule-Scepticism and the Sociology of Scientific Knowledge: The Bloor-Lynch Debate Revisited”, Social Studies of Science, 34(4) (2004): pp. 571-591.

[3] See Seguin, E. 2000. Also, Ashmore, M. 2005.

[4] See Epstein, S. “The Construction of Lay Expertise: AIDS Activism and the Forging of Credibility in the Reform of Clinical Trials”, Science, Technology, & Human Values, 20(4) (1995): pp. 408-437.

[5] See Seguin, E. “Bloor, Latour, and the Field”. Studies in History and Philosophy of Science Part A, 31(3) (2000): pp. 503-508.

Emotions, much like dreams, have a tremendously complex history. Presumably, and in a simplistic sense, both dreams and emotions have been with humanity since its very dawn. Having varied in both their socio-cultural and epistemological importance and influence, dreaming has been under scrutiny since at least the time of Aristotle[1]. Similarly, recent work has explored the history of emotions in great depth and added a cautionary provision about the stability of the very category of emotions[2]. On this account, the term ‘emotion’ is a very recent invention of the 18th century whose lineage can be traced back to early Christian and philosophical accounts of ‘passions’, ‘affections’ and ‘appetites’[3]. One particularly interesting aspect of dreams and emotions is that encounters between the two are very rarely attended to by historians. Much has been written about both dreams and emotions, but very little has been said about the way in which the encounter between these two historically embedded experiential categories may have played out. This paper will seek to open a dialogue about just such encounters.

Considering the potentially endless work that could be done in the area, this paper will narrow its account to a specific time and author. This approach is intended not only to make the subject workable but also to avoid broad declarations about the nature of dreams and emotions, offering instead localized and specific conclusions. The choice of time is necessarily conditioned by the choice of author to be examined here. In 1899 Sigmund Freud published his now famous Interpretation of Dreams, a book that would change the way dreams were handled by sleep researchers for years to come[4]. The 1899 publication was also crucial as a kind of handbook for the newly emerging ‘talking cure’ that would eventually become the clinically based psychoanalytic method. The book, and indeed Freud himself, represent a kind of watershed in the history of dreaming (not to mention psychology), and as such the text is a crucial one to handle when thinking about the interaction between dreams and emotions. More important, for this paper, is the fact that Freud devotes an entire section of his sixth chapter to “Affects in Dreams”[5]. The title of that section alone is evocative of the historical trajectory that scholars have charted in the course that passions and affections took before being subsumed under the term ‘emotion’ in contemporary thought. This paper will have three major components. The first section will deal with the question of what emotions might be, around 1899 as well as in the contemporary academic scene. Once the reader is equipped to understand the baggage brought along with any use of the term ‘emotion’ or ‘affect’, an examination of “Affects in Dreams” will follow. A final section dealing with some contemporary writers who have engaged or brushed up against the topic will be used as a way to orient the interpretation of the relationship between dreams and emotion in Freud’s text. 
The meeting of dreams and emotions in Freud’s text necessarily casts emotions as distinctly involuntary things that held a historically unprecedented amount of subjective importance.



The Modern Emotion


In contemporary usage, emotions have taken on a somewhat dichotomous character with regard to rationality and the intellect. Robert Solomon, in his 2004 book In Defense of Sentimentality, is representative of this outlook[6]. In particular, Solomon’s work is structured around two interrelated theses: that “philosophy has as much to do with feelings as it does with thoughts and thinking” and that “philosophy and philosophers have much more often than not shunned the emotions and defined their profession and themselves strictly in terms of reason and rationality” (Solomon, vii). Taking Solomon’s perspective seriously, this paper begins by asking how Freud’s Interpretation of Dreams might have contributed to a Western, emotionally devoid rationality.


In discussing grief, Solomon notes the way in which the emotion is characterized by Claudius in Shakespeare’s Hamlet as “unmanly…simple and unschooled” (Solomon, 77). This is taken to be the way in which the Western world has received grief and mourning, as a foolish, often feminine attribute that is “impious” and at odds with heaven (Solomon, 77). It is only in European philosophy and the Freudian tradition, in opposition to the Anglo-American philosophic tradition, that grief is addressed. These exceptions, however, reflect on the “process of grieving”, rather than the way in which the emotion is implicated in, for example, the “capacity to love” (Solomon, 77). Indeed, when the process, rather than the place, of grief is attended to, often the “discussion gets clinical, cold, and out of touch with the emotion altogether” (Solomon, 77, emphasis added). For the purpose of this paper, it is not necessary to chart every appearance of Freud in Solomon’s text. It is important to note, however, that Freud appears throughout the work as an ally. In terms of the passages quoted above, Freud represents one of the European resistances against a Western, sanitized emotional discourse. In Solomon’s chapter on horror, Freud appears as a character whose definition of anxiety is crucial, not to mention acceptable, in exploring exactly what “real horror” is (Solomon, 119). The crucial point is that Freud falls on the side of sentimentality over rationality, and when he does not clearly fall on a side, he is emblematic of a dichotomy. For example, in a chapter concerning erotic love Solomon is dismissive of a conception of love that figures it “as a virtue”, “merely as a means, as Freud once saw anal retentiveness and sublimation as a means to great art” (Solomon, 169). 
Freud here appears neither as a defender of sentiment nor as a strict rationalist, but the passages that follow demonstrate the way in which the dismissal is rooted in a dichotomous reading of sentiment and rationality[7]. Here, utility (rationality) “demeans” love (sentimentality) and Freud is brought in, not directly, but through analogy to support the separation of the two.


Emotions in History


There is, perhaps, another way to think about emotions. One theme common to all of Solomon’s chapters is that of the unwavering, ahistorical character of emotions. Indeed, numerous scholars have commented on this very aspect of his work. Thomas Dixon has been particularly critical of any examination of emotions that fails to ask about the historical context, lineage and linguistic expression of the emotions. In Dixon’s 2003 book, From Passions to Emotions: The Creation of a Secular Psychological Category, Solomon’s earlier work on emotions is cited directly[8]. Dixon takes issue with the very same thesis presented in In Defense of Sentimentality, specifically that “Western thinkers have been prone…to take a negative view of the emotions and to think of them as inherently bodily, involuntary and irrational” (Dixon, 2). Dixon’s assertions are particularly illuminating in the case of Solomon’s Freud precisely because the dichotomous relationship in which Freud is depicted as being involved betrays an ahistorical understanding of emotions themselves.


It is perhaps a happy coincidence for a researcher that Solomon invokes Freud in his discussion of Hamlet, as the religious overtones found therein are precisely the thematic starting point for Dixon’s historical analysis. Rather than suggesting that rationalist ideological commitments forced emotions out of philosophical and psychological discourse, Dixon makes the claim that a departure from traditional commitments drove the creation of the emotions as a category (Dixon, 3). These commitments, those bodily, involuntary, or irrational in character, emerged from early Christian thoughts on the “passions and affections of the soul” (Dixon, 4). As early as Augustine (354-430), passions and appetites were understood as emanating from the lower animal soul while the affections belonged to a higher rational soul (Dixon, 29-30). Conceived this way, passions and appetites represented occasions for the mind to lose out against the body and for impious action at odds with heaven; for forbidden fruit to put the promise of paradise out of human grasp. Thomas Aquinas, writing in the 13th century, also echoed a large portion of the Augustinian emotional theory. Aquinas maintained the familiar distinction between the same lower and higher faculties but also added a more contemporary distinction between cognition and the will, both being placed above base appetites (Dixon, 35-36). Passions, then, it is important to note, were not eternally at odds with reason but, along with appetites and affections, were all linked through “rational and voluntary movements of soul” (Dixon, 3). It was not until the 17th century that individual will took a back seat as affections and passions came to engage a natural theological discourse in which both were products of a design by God. The work of Joseph Butler, especially his Analogy of Religion (1736), represented such a mechanistic notion of the universe. 
Butler, following the observations about a restful prime mover in Aquinas[9], developed a view of “appetites, passions, sentiments and affections” that implied a distanced Creator that “could only be inferred from the psychological constitution of man” (Dixon, 82). This point is important because Butler here, in linking the passions of Augustine and Aquinas to a mechanistic, cognitive program, imbued them with a kind of secularism in which reason held the greatest sway. Dixon notes that Butler is, for this reason, often depicted as a kind of “halfway house between Christian and secular psychologies” (Dixon, 83).


It is with these, and many other considerations, that one ought to approach the arrival of emotions as a category. While the term appears in one of its earliest formations in Hume, it is really with Thomas Brown’s work that emotions received their first comprehensive treatment (Dixon, 101). In Brown’s taxonomy, emotions were “neither sensations nor intellectual states” but a new “de-Christianized” and potentially scientific category with which to examine the human mind (Dixon, 23). This is perhaps where the closest approximation of what Solomon calls emotions is to be found. This new, more secular category was readily adopted by physiological and evolutionary thinkers alike who, as Dixon notes, “implied by their methodology and language that the real business of emotions went on at the physiological and neurological levels” (Dixon, 144)[10]. The meeting of emotion and religion here is not a straightforward affair and it would be inaccurate to describe even all psychologists (not to mention evolutionary or political thinkers) as having rejected theological commitments altogether. Instead, the Brownian category could be picked up and adopted while its physicalist dimensions might be left aside for cognitive alternatives (Dixon, 181-182). The crucial point here is that by the time emotions emerge in a historical sense, they are not picked up as whole packages but rather in pieces; there is no single sentiment or emotion for Freud to attach himself to, nor is there a clear notion that emotions are necessarily at odds with reason. Rather, it is left up to the historian to examine closely the local and textual manifestations of emotion in Freud’s text. Sociologist Harvie Ferguson has pointed to precisely this aspect of Freud’s work, noting that the presentism responsible for figuring Freud as either a “mechanistic [biologist]” or a “morphological-descriptive” clinician is merely a product of retrospect (Ferguson, 15). 
Freud, much like the emotions and dreams he wrote of, was far more complicated than that. It is in this spirit, then, that The Interpretation of Dreams and its discussion of affects needs to be understood and investigated.


Affects in Dreams


Given the historical picture developed, albeit briefly, above, it is almost unavoidable that any discussion of Freud’s section on “Affects in Dreams” should begin by addressing the linguistic heritage of that title. Freud here is speaking of affects and chooses to use the term over emotions or passions. While Freud does casually use the term ‘emotion’, if one is to take Dixon’s claims seriously, the choice to use ‘affect’ both in the title and predominantly throughout the text almost necessitates questioning its use. While it is true, as was outlined above, that ‘emotion’ was picked up more rapidly by physiologists and evolutionary thinkers, Dixon notes that “some…[Christian thinkers] were still speaking in the language of ‘will’, ‘passions’ and ‘affections’ in the 1870s” (Dixon, 23). The implication to draw is not that Freud was a Christian but rather that Freud inherited a proto-emotional category that was rooted in the willful passions of the soul and then adopted it. This observation is crucial on two levels. First, Freud here becomes less a strict neurological rationalist, as both his linguistic tendencies and his distance from the physicalist Scottish category of emotions attest. Secondly, it is important to recognize that for Freud the category of emotions was similar to that of William Lyall or James McCosh insofar as they all saw a particular part of the emerging category as desirable while avoiding those parts that were at odds with their disposition[11]. In the case of Lyall and McCosh, Dixon demonstrates the way in which the two Christian psychologists took up the Brownian category of emotions while leaving behind its physicalist tendencies. Freud’s choice of term, then, provides evidence of a non-Christian thinker avoiding some of the proto-epiphenomenal aspects of the Brownian category and opting for a cognitive (and importantly functional) view of emotions. 
Freud, then, cannot be read as a straightforward ally or enemy of sentiment, as Solomon’s text tends to depict him. Freud’s sectional title attests to the way in which the entire history of emotions can be read as a “story of gradual, complex and incomplete secularization” of a psychological category (Dixon, 21).


Given the linguistic heritage outlined above, it is important also to recognize the way in which affects and willfulness interact in Freud. One important departure from early Christian notions of passions and affects in Freud is the involuntary character that affects take on. In strict opposition to a willful notion of passions or affections, Freud’s section is fraught with examples of the involuntary and perplexing appearance of affects. In a brief note on the disconnect between events and affects in dreams, Freud exemplifies this notion, highlighting that “in our dreams the imagined ideas of the content are not attended by the effect upon our emotions that we would expect they would inevitably have in our waking thought” (Freud, 299). There are at least two points to make here. One is about the way in which Freud casually deploys the term ‘emotion’ and the other about the involuntary nature of emotions. It was noted above that ‘affects’ are not used exclusively (although certainly predominantly) and the appearance of ‘emotions’ here in his section on affects serves to strengthen, rather than weaken, the argument presented about his linguistic heritage. Again a piecemeal image of emotions is being taken up by Freud as he uses the terms almost interchangeably. As regards the lack of a willful affective process, Freud here is pointing his reader toward the idea that affects lack the rational cognitive aspect of waking thought. In doing so, Freud is at once departing from the early, bodily Christian doctrine and from the strict physicalism of the Brownian emotional legacy. The crucial point, once again, is that Freud is constructing an image of affect and emotion that does not belong entirely to the body, to scientific discourse or to an anti-rationalist sentiment; it is an emotion all of its own.


To suggest that emotions, under Freud, are involuntary and perplexing is not to suggest that they lack importance. Indeed, one of the central aspects of Freud’s work on affect that sets him apart from his contemporaries is his depiction of affects in dreams as fundamental keys to understanding aspects of waking life and behavior. Thomas Dixon described the emotions in the work of William James (1842-1910) as “the culmination of a complex process of secularization and innovation in psychological discourse” (Dixon, 24). One of the aims of this description is to reorient the history of emotions away from a presentism that too often structures historical work. Rather than conceiving of James’ 1884 article ‘What is an emotion?’ as the beginning of an account of psychological emotions, Dixon wants to remind his readers that the epiphenomenal and scientific character of James’ work is predated and importantly structured by writers who had all “described emotions as aggregates or effects” (Dixon, 204). Importantly, then, this paper wants to reorient a view of Freud that is exemplified by Solomon’s work such that Freud is no longer seen as retrospectively belonging to one camp or another, but as being another kind of culmination of a long, complex historical story. In particular, Freud insists that there remains, as there was in early Christian and medieval scholarship, a subjective importance to affective experience. Two examples will adequately support the argument. In the first instance, Freud is always speaking of the reality of affects in dreams; “If I dream I am frightened of robbers, the robbers are certainly imaginary, but the fear is real” (Freud, 298). Freud is pointing to the fact that, in opposition to James, affect is not epiphenomenal. Both James and Brown, whose “view of emotions was at the root of James’ own”, had imagined the “cause [of affects to be] sensations” (Dixon, 204). 
Sensations may not match up with, nor do they need to precede, affects for Freud; affects are themselves primary and real. Secondly, that affects take on a primacy and reality in Freud’s hands is absolutely crucial to his methodological approach to the mind:


Let us consider a psychical complex which has been subject to the influence of the censorship set up by resistance; the affects are that part of the complex which are most proof against the censorship, indeed, they are the only part that can give us a pointer towards filling the gaps. This is revealed in the psychoneuroses even more clearly than in dreams. In psychoneuroses the affect is always right…

(Freud, 299, emphasis added)


Psychoanalysis, Freud tells his readers, is the practice by which the neurotic patient can be “put…on the right track” by taking the affect, rather than its potentially paradoxical referent in dreams, to be “a starting-point” (Freud, 300). Dream interpretation relies on the primacy, reality and subjective importance of affects.


The Freud encountered here is in several ways distinct from the initial image presented by Solomon. While Solomon is correct to imply that Freud has a high esteem for sentiment, it is a particular brand of sentiment that does not map easily onto the anti-rationalist picture that he paints. Instead, the sentiment that Freud endorses shares a linguistic heritage with ancient passions and affections (perhaps closer themselves to what Solomon is trying to defend), departs importantly from that heritage in its rejection of willful affects, and is non-epiphenomenal in character. Additionally, this very outline of affects in Freud attests to the fact that, contra Solomon, Freud would be a very peculiar ally and an outright denier of contemporary dichotomies regarding sentiment and rationality. However, to Solomon’s credit, Freud does fit in his narrative in one particular way that is never addressed in his text. Freud is important in linking sentiment and rationality and in being a living manifestation of Solomon’s thesis about the importance of sentiment in any rationalist agenda. Just as historians of psychology have neglected “the mind and its faculties… [by approaching]…the past looking for thinkers and thoughts that closely resemble present-day academic psychologists and their theories”, so too has Solomon neglected an important aspect of Freud’s notion of the mind and its faculties when recruiting the psychologist into a relatively modern intellectual dichotomy (Dixon, 24)[12].


Dreams and Emotions


This paper began with the assertion that encounters between dreams and emotions are rarely charted. It is appropriate now, with the analysis above in mind, to treat some of those exceptions briefly. In a 2001 paper by John Deigh entitled “Emotions: The Legacy of James and Freud”, an attempt is made to outline the various effects both Freud and James’ thought had on psychological theory. While Deigh is interested especially in the way Freud and James departed from Cartesian conceptions of the mind, there are several claims that are important for this paper. Deigh is right to draw a line between Descartes’ conception of the mind as a “field of thought and feeling, entirely conscious, and transparent to itself” and the way in which Freud conceived of the mind (Deigh, 1247). It is the way in which Freud is deployed as an explanatory device, however, that deserves some attention. Deigh imagines Freud and James as gifting to psychology, “especially in the study of emotions”, a theory of mind that “freed philosophers and psychologists from the constraints of the old, classical conception that was born of the Cartesian revolution” (Deigh, 1247). This claim is at odds with at least one aspect of this paper, that is, the historical place of Descartes in the history of emotions. In the first place, Descartes is never mentioned directly in Freud’s review of the literature on dreams. This seems appropriate given the history that this paper, in league with Dixon, has set out. As Dixon notes, during the Age of Reason, a transitional period in the history of emotions, the link between older Christian discussions of the soul, through Augustine and Aquinas, was bridged by thinkers like Jonathan Edwards and his Treatise Concerning Religious Affections (1746) (Dixon, 75). Crucially, Edwards did take a great deal from the Cartesian doctrine, especially in terms of perception over movement and a literal reading of the body-soul distinction, but this was only “half the story”:


Revivalist thinkers were deeply imbued with traditional Christian thought, especially that of St Paul and St Augustine. It is from these thinkers that Edwards inherited the strong distinction between nature and grace, between carnal natural man and the spiritual saved man – a distinction that was integral to classical Christian psychology and which must clearly be differentiated from Cartesian soul-body dualism.

(Dixon, 79)

In terms of the story told above, the Christian influence on Scottish intellectuals like Alexander Bain and Thomas Brown was crucial. Deigh, much like Solomon, tends to read the story of Freud backwards. Rather than Freud having developed a doctrine that freed psychology from Cartesian commitments, history and Freud’s own texts suggest that Freud mixed a great deal of Christian language and subjective importance, Cartesian cognitivism and Scottish secular discourse to produce a hybrid, complex notion of emotions particularly suited to a clinical practice of dream interpretation.


In a special issue of Science in Context from 2006 on dreams, John Forrester provides an elegant illustration of the way in which Freud’s work was picked up and used by various writers in the interpretation of their own dreams. Central to Forrester’s paper is the claim that, in their analyses of their own dreams as well as in those analyses’ relation to Freud, each author “forgot” Freud to some degree. This is a fitting way to read the reception of dream interpretation not just in Britain but in current discussions of dreams and emotions. In the case of Solomon, the story is very “faithful and acknowledging” to Freud but, as the historical narrative above shows, also tends to forget him in particular ways in order to “[establish] self-knowledge” (Forrester, 83). For Solomon, the image of Freud remembered is that of a staunch supporter of sentimentality and a character emblematic of a dichotomous relationship between rationality and sentiment. That which is forgotten is the historical legacy that Freud brought with him in his linguistic, methodological and conceptual apparatus. Self-knowledge here is less about the implication of the self in an analytic process or of analyzing one’s own dreams, and more about carving out a space in which one can defend sentimentality. While Solomon’s thesis about Western thought is left somewhat unmarred by the remarks made here, the place of Freud in his story is seriously troubled.


This is perhaps the case with all historical work. This paper is itself about producing a kind of self-knowledge, through Freud, by remembering and forgetting particular parts of his doctrine. Indeed, the section under examination here is but a very small part of Freud’s book (not to mention his entire published works), used here to retain a particular vision of Freud that can serve as a foundation for a self that is hostile to grand narratives and to revisionist histories. Perhaps this is not at all a bad thing. Solomon’s target, after all, is not the history of psychology but a history of philosophy, just as this paper is interested in the history of psychological categories and not that of philosophy. In both cases there are technologies of the self at work, and the histories of dreams and of emotions, insofar as they are deeply personal experiences, are equally about producing and defining selves that fit with, react against or attempt to dismantle existing structures of power. One potentially profitable outcome of this paper is the inclination toward asking how emotions figure in still other contexts divorced from Freud. How, for example, did emotions figure in the Aristotelian or Augustinian understanding of dreams? How did medieval scholars figure emotions like shame, worry, embarrassment or anxiety in a period that held dreams to be born of demonic or angelic inspiration? These are the kinds of questions that this paper hopes to open up for discussion. One way of reading these kinds of histories is to look back and apply a certain amount of presentism to, for example, medieval scholars. Another, prompted by this paper, is to explore the local context and textual representation of those scholars to get a better feeling for the way in which passions, affects, and emotions functioned in particular settings. 
The conclusion reached here is a complicated one: it views Freud’s dreamt emotions as carrying a very particular Christian linguistic heritage through to a view of affects that were distinctly involuntary and, crucially, of subjective importance. Freud here can be read neither as a strict ally of sentiment nor of rationalism, but rather as having produced a particular technology of the self, and of himself, which brushed up against a whole history of dreaming, emotions, and theories of mind.

Aristotle. (1996). Aristotle on Sleep and Dreams: A Text and Translation with Introduction, Notes, and Glossary (D. Gallop, Ed.). Warminster, England: Aris & Phillips


Deigh, J. (2001). “Emotions: The Legacy of James and Freud”. The International Journal of Psychoanalysis, Vol. 82, pp. 1247-1256


Dixon, T. (2003). From Passions to Emotions: The Creation of a Secular Psychological Category. Cambridge University Press


Ferguson, H. (1996). The Lure of Dreams: Sigmund Freud and the Construction of Modernity. Routledge


Forrester, J. (2006). “Remembering and Forgetting Freud in Early Twentieth-Century Dreams”. Science in Context, Vol. 19(1), pp. 65-85


Freud, S. (1999). The Interpretation of Dreams (J. Crick, Trans.). Oxford University Press. (Original work published 1899)


Kroker, K. (2007). The Sleep of Others and the Transformations of Sleep Research. Toronto: University of Toronto Press


Solomon, R. (2004). In Defense of Sentimentality. Oxford University Press

[1] See Aristotle

[2] See Dixon (2003)

[3] See Dixon. In particular, his introductory remarks about the theological dimension of the history of emotions are extremely useful.

[4] See Kroker (2007) p.122

[5] See Freud in Crick (1999) Chapter 6.

[6] See also Solomon, R. (1993) The Passions: Emotions and the Meaning of Life. Hackett Pub. Co. Inc.

[7] See Solomon (2004), p. 169, where the author examines the virtue-theory of love and demonstrates how Western thinkers have produced false objections to

[8] See Dixon, p. 2, where he also describes the recent wave of scholars who participate in a similar endeavor, of which Solomon was (on Dixon’s account) the first.

[9] See Dixon for Aquinas’ distinction between the will and cognition deriving from the notion of an unmoved, cognitive prime mover pp. 35-39.

[10] See especially the case of Alexander Bain in Dixon for psychological physicalism, Ch. 5

[11] See Dixon, p. 23 and pp. 181-182, for Lyall and McCosh

[12] Importantly, this criticism should not be applied to the broad philosophical argument that Solomon makes, but rather to his choice to include Freud in particularly troubling ways.

I was recently mulling over a relatively dated edition of the “Handbook of Science and Technology Studies” (1995) and found a piece by Gary Bowden that has implications for both of our readings this week. This response will be a little lengthy (perhaps to the detriment of my grade on it) and I apologize beforehand, as I hate to be the one to add more reading to our already busy schedules. I promise my responses in the future will be shorter, but I am very intrigued by questions of appropriate STS methodology, so I could not resist.

Gary Bowden’s piece, entitled “Coming of Age in STS” (Jasanoff, pp. 64-79), explores what the author terms “methods of explanation” and their relation to particular visions of the Science and Technology Studies (STS) field. Bowden is principally concerned with the way in which the collective culture (a term I do not think he would himself use) of STS produces a varying, and sometimes problematic, self-image through the nature of the equally varying methods of inquiry it employs.

Bowden identifies three methods: topic focused, analytic issue focused, and combined focused. A topic-focused method takes a topic, perhaps ‘lay expertise in science’, and places few or no boundaries on its method of explanation. Bowden argues that this method embodies a “multidisciplinary” vision of STS, as “one must study the topic from the viewpoint of several different disciplines to gain a full understanding of the phenomenon” (Jasanoff, p. 68). The second method, analytic issue focused, takes its focus to be a particular issue, like ‘reflexivity’, which is “exemplified in, but neither emergent from nor limited to, science and technology” (Jasanoff, p. 69). Bowden notes that this promotes a “transdisciplinary” vision of the field, as there is no particular topic, but rather an issue that is common to many disciplines. Lastly, Bowden points toward a combined method, one in which the “characteristics of the subject matter require a method that is unique to that topic”; it is “interdisciplinary” (Jasanoff, p. 68). A combined focus requires that the researcher choose the tools appropriate to the subject matter. Where a multidisciplinary approach would incorporate a host of different disciplines linked by the topic, and a transdisciplinary approach would incorporate many disciplines linked by a common issue, an interdisciplinary approach chooses those disciplines that the researcher deems appropriate for their work.

The remainder of Bowden’s piece deals with the implications each method, and each subsequent vision, has for STS. What is important here, I think, is that the character of STS is in one way or another shaped by the collective scholarly work of all its members, and it is to this that Bowden rightly points us. Shall we consider ourselves a multidisciplinary field, in which all work borders every discipline available to us? Perhaps transdisciplinary, seeking instead resolutions to broad analytic issues that plague many fields? Or is the field interdisciplinary, relying on the character of the topic to stipulate which disciplines will provide the most appropriate explanation? These are questions I leave to you.

My opinion is that STS should be, in Bowden’s categories, interdisciplinary. We should ask ourselves what we are trying to find out, and which disciplines can provide appropriate tools. In relation to Emily Martin’s essay “Anthropology and the Cultural Study of Science”, the interdisciplinary approach would seek to include an anthropological framework when its tools can be best used. Martin is, in my mind, right to discredit the accusation put forward by Latour and Woolgar that anthropology is lacking in some fundamental way. The notion of culture as shared pasts, and concerns over shared futures, can indeed explode some of the ahistorical aspects of an ANT analysis, for example. Martin is very wise to point out not only some of the benefits of a cultural anthropology of science but also the ways in which that anthropology can be,

greatly aided by the work of numerous science studies scholars who have examined important aspects of the ways science is embedded in society…how science can be seen as culture and contains many different “cultures”…how scientific knowledge is as socially constituted as other forms of knowledge production (Martin, p. 29).

Martin seems, to me, interdisciplinary in her treatment of STS. She points out inadequacies of both anthropological and non-anthropological methods, directing us to a method of explanation shaped by our questions. Questions about the subject matter are indeed what guide Martin’s exploration of the appropriateness of anthropological investigation:

Would cultural anthropology have anything new to add to science studies? (Martin, p. 25)

What if network building and resource accumulation are not the only way knowledge is established? (Martin, p. 28)

What if important, forceful processes flow into science as well as out of it? (Martin, p. 28)

If we try to put science in a larger context, will we run up against Latour’s (1987) claim that you cannot use “society” to explain “nature”? (Martin, p. 27)

And the list goes on….

The point is that Martin is chiefly concerned with questioning whether current methodologies, as well as potential ones (primarily cultural anthropology), are appropriate to our topic; she is the very definition of interdisciplinarity for Bowden. And with that in mind we can turn to Sharon Traweek’s prologue to “Beamtimes and Lifetimes” (1988). One does not need to look far to see what questions Traweek is interested in; the first page of her prologue plainly states what she is and is not interested in:

This book is not about how physicists have shaped our world, or why our society has given them power and prestige. Nor is it about the current state of knowledge and inquiry in high energy physics. Instead it is an account of how high energy physicists see their own world; how they have forged a research community for themselves, how they turn novices into physicists, and how their community works to produce knowledge. (Traweek, p. 1)

Given that the focus of my response is on the relative use of anthropology in science studies, following along in Emily Martin’s direction, the question we should ask ourselves about Traweek’s prologue is, ‘Is she justified in an anthropological treatment of her subject?’ She is not interested in “how physicists have shaped our world” or “why our society has given them power and prestige”, questions perhaps best handled by a sociologist, political scientist or some other multidisciplinary blend. She is not interested in “the current state of knowledge and inquiry in high energy physics”, no doubt a question for the physicist. Rather, she directly states that her interest is in those areas that anthropology has the most experience dealing with. Community development, communal processes of knowledge production, initiation rituals for novices – these all seem like questions that can be informed by anthropology. But is it anthropology alone that can address these questions?

That question will only be answered once we have been exposed to more of Traweek’s book. Will she, as Martin exemplifies, promote an interdisciplinary vision of science studies, working to include those analytic tools appropriate to her work? Will she border many disciplines and advance a multidisciplinary role for science studies? Perhaps all these questions are subject to your own treatment of Bowden’s argument: which vision for science studies is appropriate?


Jasanoff, S., Markle, G. E., Petersen, J. C., & Pinch, T. (Eds.). (1995). Handbook of Science and Technology Studies (pp. 64-79). Thousand Oaks, CA: Sage Publications

Martin, E. (1998). Anthropology and the Cultural Study of Science. Science, Technology, & Human Values, 23(1), pp. 24-44.

Traweek, S. (1988).  Beamtimes and Lifetimes: The World of High Energy Physicists. Cambridge: Harvard University Press.

The second half of the Tao Te Ching continues Lao Tzu’s discussion of his central themes: understanding of the Tao, ways of living the Tao, and applications to social governance (or lack thereof). A summary of the Tao Te Ching resists coherence, as a number of the chapters are ambiguous in nature, subject to a great deal of interpretation, and not presented in a distinct order. In the interest of space it will therefore be helpful to focus on a single theme throughout this summary. Focusing on the Tao Te Ching’s application to social governance will allow summation of a great number of the chapters between 32 and 81 while lending a degree of structure to this large section. [1]


Perhaps one of the most provoking themes throughout the section is the application of the Taoist system of thought to social governance. This theme provides a pragmatic link between Lao Tzu’s metaphysics and the ethical actions taken therein. This is reminiscent of the overall structure of Plato’s Republic insofar as both set out a metaphysical understanding of the world, and both focus on what that metaphysics means for ethical governance. In chapter 32 Lao Tzu writes of the Tao, “Though its simplicity seems insignificant, none in the world can master it. If kings and barons would hold on to it, all things would submit to them spontaneously” (p. 156, Chan). However, where Plato is careful to enumerate rules that ensure successful governance, Lao Tzu entreats readers to ponder a system of rule without hard and fast rules. Chapter 60 likens ruling to cooking a small fish, wherein too much handling will spoil the dish. Lao Tzu notes that “the more taboos and prohibitions there are in the world, the poorer the people will be” (p. 166, ch. 57, Chan), believing instead that by “[taking] no action…the people of themselves will be transformed” (p. 167, Chan). Similarly, where Plato envisioned a hierarchical system with philosopher kings ruling from atop the imaginary Kallipolis, Lao Tzu’s kings are to place themselves beneath the people. Chapter 66 illustrates Lao Tzu’s ideal of rulers who, “in order to be the superior of the people…must…place himself below them. And in order to be ahead of people…must…follow them” (p. 171, Chan). This notion of a common social caste between ruler and ruled is the cornerstone of the application of wu-wei to the governing system, harkening to a laissez-faire ideology. Indeed, in his translation Wing-Tsit Chan carefully marks a line in chapter 48 in parentheses, noting the similarities between the “…order by having no activity” and laissez-faire ideals (p. 162, Chan).
Another strongly related discourse found in this section of the Tao Te Ching is Lao Tzu’s proposed traits of leaders. It is by “being greatly tranquil”, says Lao Tzu, that “one is qualified to be the ruler of the world” (p. 162, ch. 45, Chan). Lao Tzu’s ideal rulers are humble; they “never [strive] for the great, and thereby the great is achieved” through wu-wei (p. 157, ch. 34, Chan). They do not “dare to be ahead of the world”, in Lao Tzu’s words (p. 171, ch. 67, Chan). These traits all point toward the overarching theme of the Yin, or feminine, qualities Lao Tzu had discussed in more detail in the first half of his major work. Where leaders for Plato are rigid like the old branch, Taoist leaders are flexible like the young plant. It is by becoming Yang, by forcing action upon nature, that “things reach their prime” and “begin to grow old” like the rigid branch, liable to snap when it is forced against the Tao (p. 166, ch. 55, Chan).

It would seem a foolish thing to disagree with a single section of a major work, and as such I will not try to do so. There are, however, in the opinion of this scholar, a number of problematic proposals in the Tao Te Ching. One of the most intriguing and shocking notions presented in this section is that of governance by wu-wei. If we are to take Lao Tzu seriously as he has been summarized above, we find ourselves in a peculiar position; we are ready to accept a state wherein people always govern themselves satisfactorily (provided all of Lao Tzu’s conditions are met). This is peculiar precisely because all our knowledge of human history is rife with incidents of a seemingly harmful or destructive nature. In response to this, we might expect Lao Tzu to argue that there is no historical example to be found in which the Tao was aptly followed and superiorly virtuous leaders (by Lao Tzu’s definition) were in power. In this case our objection would become an argumentum ad ignorantiam: the mere absence of evidence that the system can practically work does not show that it cannot. However, we can extend this argument to include a contemporary worldview. Human beings do not live, in general, according to the Tao. With this in mind, the question of ideal implementation (i.e. everyone is a blank slate waiting for a humble ruler to lead them without leading) need not be asked, given that we must, in reality, get past the problem of convincing every person to live without the conventions, values, and possessions they already have. This task seems much more daunting than engaging in a thought experiment to evaluate the efficacy of Lao Tzu’s system. It is in light of this practicality that I locate my objection. If a system of thought, and a system of governance, is to be implemented, it must be implemented with all the ramifications that supplanting a current governing paradigm would entail. To this query Lao Tzu has no apparent answer.
Past problems of implementation, it seems problematic that while social conventions are detested by Taoists, it is simply another set of values that lies at the heart of the Tao Te Ching. These values and conventions are precisely the ones that Lao Tzu has offered us: non-action, humility, moderation, etc. How can Lao Tzu handle the charge that the Tao Te Ching is simply a text that expresses values and conventions in the same manner as formative Christian texts (for example), which served as a foundation of a great number of Western values and conventions? Is it simply because the Tao Te Ching promotes simplicity and accordance with its own definition of nature that it is to be prized? It is in the rejection of one set of values for what seems to be another that this objection can be localized.

[1] While not every chapter is addressed individually (in an effort to cover the material while avoiding redundancy), this summary encompasses a large portion of the themes presented in chapters 32 through 81 and connects the section’s major discourse (that of governance) to the themes presented in the first 31 chapters for consistency.


Section D of the Maha-satipatthana Sutta is divided into five categories. The first category concerns the five hindrances: sensual desire, ill-will, sloth and drowsiness, restlessness and anxiety, and uncertainty (Maha-satipatthana Sutta, DN22, D, 1). The monk is directed to recognize that there is, for instance, sensual desire within him or, conversely, that there is not. Once desire has arisen and is recognized, the monk should then recognize the abandonment of that desire. An understanding of that abandonment is said to prevent any future sensual desire from arising. In this way is the monk to understand the origination of sensual desires and the passing away of sensual desires (Maha-satipatthana Sutta, DN22, D, 1). The second category, concerning the aggregates, is focused on the same origination-and-disappearance theme in relation to form, feeling, perception, fabrications and consciousness (Maha-satipatthana Sutta, DN22, D, 2). These aggregates seem to be the closest we get to a total picture of the human experience. Next the sixfold internal & external sense media are considered; it seems best to describe the sense media as the faculties that facilitate ‘clinging’ to the aggregates. The eye, ear, nose, tongue, body, and intellect are the sense media included, each, once again, subject to the same process of recognizing origination and disappearance (Maha-satipatthana Sutta, DN22, D, 3). The fourth category, awakening, is then considered. Within the category of awakening are found mindfulness, analysis of qualities, persistence, rapture, serenity, concentration, and equanimity (Maha-satipatthana Sutta, DN22, D, 4). Finally, mental qualities with reference to the four noble truths are addressed. The first noble truth is that of stress, depicted as including any range of aggregates, from birth and aging to losing loved ones (Maha-satipatthana Sutta, DN22, D, 5a). It seems that the term ‘stress’, as used, connotes any form of suffering, physical or mental.
The second noble truth concerns the origin of stress, and its origin is said to be in “Whatever is endearing & alluring” (Maha-satipatthana Sutta, DN22, D, 5b). Things that are endearing and alluring are said to be those sense media that cling to aggregates. This definition is illustrated, for instance, by the progressive line of thought: “the ear…sounds…ear-consciousness…ear-contact…feelings born of ear-contact…perception of sounds…intention for sounds…craving for sounds…thought directed at sounds…evaluation of sounds” (Maha-satipatthana Sutta, DN22, D, 5b). It is the finalization of this progression in evaluation that is said to be “endearing and alluring in terms of the world” (Maha-satipatthana Sutta, DN22, D, 5b) and where the origination of stress is to be found. The noble truth of the cessation of stress is said to be the cessation of the aforementioned progression, the abandonment of “Whatever is endearing & alluring in terms of the world” (Maha-satipatthana Sutta, DN22, D, 5c). Last is the noble truth of the “path of practice leading to the cessation of stress” (Maha-satipatthana Sutta, DN22, D, 5d). The path is said to be the “noble eightfold path”, including “right view, right resolve, right speech, right action, right livelihood, right effort, right mindfulness, right concentration” (Maha-satipatthana Sutta, DN22, D, 5d). Each of the eight parts of the path is described in practical terms, for example, “abstaining from lying” in the case of right speech (Maha-satipatthana Sutta, DN22, D, 5d). The conclusion, or section E, very briefly contends that if the four frames of reference, mental qualities being only one, are developed correctly over any amount of time, the result will be gnosis. If, however, there is any clinging remaining, there will be non-return, a term that does not appear to be defined (Maha-satipatthana Sutta, DN22, E).


While I do not presume to have access to a privileged interpretation of the material, I do have some reservations about the claims as I have summarized them. An overarching criticism might be one of contradiction. Having the desire for mindfulness, regardless of the declaration that it is “classed not as a cause of suffering, but as a part of the path to its ending” (Bhikkhu, Introduction), is ultimately a desire and as such should fall under the umbrella of ‘causes of suffering’. The first category presents a problem in the form of the objective treatment of a hindrance. For instance, if ill-will toward someone leads to an understanding of ill-will as a displeasing thing, could it still be considered a hindrance? The second and third categories seem to present a confused ontological picture. A question might be raised about what entity could be clinging and sensing in relation to aggregates and sense media. If the self is left out of experience, as its absence from the aggregates suggests, what is there to make use of the proposed sense media? The fourth category presents a confused image of the relationship between ‘how things are’ and ‘how we should act’. A point might be raised about the potential for solipsism (were it not for the insistence on compassionate living) if the monk is to abandon the reality of even those mental qualities that would assist in the ultimate goal. Put more simply, if one is to live compassionately, but also to abandon the reality of mindfulness, would not the ability to live compassionately be threatened under the scheme outlined? In the last category, the text remains ambiguous with regard to evaluation. One might wonder whether evaluation is the final part of a process needing to be relinquished or abandoned, or whether any part of this process is subject to the same treatment and to the same end.
Similar to the criticism of the category 1 claims, the relinquishment of, or attachment to, any of these things would seem to be at least potentially helpful under the right circumstances. A final inquiry worth pursuing concerns the notion that things that are endearing and alluring in terms of the world should be abandoned. It would seem to follow that there is an appeal to some objective reality, rather than to the fictions of the world, a reality that brings into question the ontological picture the Buddhists construct.




Bhikkhu, T. (2000). Mahasatipatthana Sutta: The Great Frames of Reference (translated from the Pali). PTS: D ii 289


The production of scientific knowledge, its propagation outside the walls of the scientific laboratory, and the relationships it forms with human actors represent processes whose nature, function and effects are a source of tremendous academic contention. More recent trends in scholarship have led to a number of frameworks that favor an obliteration of the well-known subject-object distinction. Long-standing debates over this issue in philosophy have extended themselves into the realm of science studies; within this new realm, evidence has come to be marshaled in favor of either the subject or the object in analytic discourse[1]. This paper will attempt to position three more recently developed frameworks within this well-known debate. Jenny Reardon, in her book Race to the Finish (2005), has put forward a theory she calls “co-production” to understand the way in which knowledge and power get constituted simultaneously. Heather Paxson, in her essay “Post-Pasteurian Cultures: The Microbiopolitics of Raw-Milk Cheese” (2008), has explored the way in which knowledge about the human body has come to structure not only the received understanding of the body, but also what Paxson calls “biosocialities”. A final theorist whose work will inform this paper is Joseph Dumit, whose arguments from his book Picturing Personhood (2004) culminate in a prescriptive account of the way in which actors can engage with the production and reception of knowledge through “objective-self fashioning”. This paper will attempt to show not only that these three theorists are reorienting science studies within the subject-object debate, but that by understanding co-production, biosocialities, and objective-self fashioning, it is possible to extend both of the descriptive accounts, as well as Dumit’s prescriptive account, beyond the biological realm.
The paper will be structured around three discussions, one for each of the theories presented above; their incremental arrangement will serve to show both the strength of each theory and the capacity to move beyond the biological knowledge from which each theorist has begun.


Co-Production and the Biosociality


In her book, Race to the Finish, Jenny Reardon is interested in addressing the curious opposition that the Human Genome Diversity Project (HGDP) encountered in its formative days. Reardon’s argument is that, rather than frameworks that would cast “science and power as already-formed entities that oppose each other”, a “co-productive” framework is more appropriate (Reardon, 2005: 3, 6). The nature of “co-production” and its relationship to both the subject-object debate and the work of Paxson and Dumit are of interest to this paper, and as such it is Reardon’s understanding and employment of the theory that this section will address.


Co-production, for Reardon, is a framework which posits that knowledge and power are formed “simultaneously”; that when objects of scientific interest are constructed they require the simultaneous construction of “scientific ideas and practices and other social practices” (Reardon, 2005: 6). Reardon is arguing along familiar lines; the construction of an “object” comes hand in hand with the construction of a subject that embodies scientific and social ideas and practices. It would be difficult, on this account of co-production, to fit it neatly into a subject- or object-oriented argument. This is the case precisely because, as Reardon’s diversity example demonstrates, to argue from the object or subject position is to presume that two coherent matters-of-fact exist; that “human genetic diversity” and “ethical research” can be marshaled in support of a particular scientific endeavor. It is this presumption, then, that Reardon points to as the source of dissent over the HGDP. A crucial example of the way in which the presumption of objects and subjects of study is counter-productive is found in chapter 4 of her book. Reardon notes that a “lack of involvement of anthropologists” in the project was a source of scrutiny (Reardon, 2005: 74). The relevance of the co-production framework is to be found precisely in the project’s ignorance of it. The enlisting of anthropological expertise fostered questions about the “[constitution] of this expertise” (Reardon, 2005: 96). How this expertise related to “authority in society”, and concerns over the role that this particular expertise had played in historically “[colonialist]” and “racist” research, were bound up tightly with the supposedly unproblematic addition of an anthropologist. The particular knowledge being imported into the project was seen, by project organizers, to be divorced entirely from the questions of social power that Reardon reminds her readers are always present; this is the meaning of co-production.
A simple object or subject account of science, then, is disadvantaged by its inability to recognize the simultaneous construction of knowledge and power, of objects and subjects; it is in this way that co-production is particularly useful in understanding the failure of the HGDP.


Heather Paxson provides us with another case in which co-production is occurring and uses it as a launch pad for an examination of the social groups that build up around power-knowledge relations[2]. Paxson is investigating the way in which the paradigm of Pasteurian hygiene is being supplanted by a post-Pasteurian “biosociality” through the production and consumption of raw-milk cheeses. The place of co-production is overt in the story that Paxson tells. Co-production can be seen in the nature of the Pasteurian paradigm that “people in the United States live in” (Paxson, 2008: 15). If co-production argues that knowledge and power are simultaneously constructed, this argument seems justified in Paxson’s case when she states that “Pasteurian regulatory practices work not only to produce safe food” (objects or knowledge): “they also work to cultivate germophobic subjects” (Paxson, 2008: 28, emphasis added). Important to this paper, in this case, is the way in which subject-object debates have no place where the co-productive framework works both to redefine the terms of the debate and to show that knowledge about germs is inextricably linked to the power to regulate the body[3]. It is in this way that co-production can be useful in understanding power-knowledge relations beyond the HGDP case.


Biosociality and Objective-Self Fashioning


Biosociality seems, to common-sense understandings of subject-object distinctions, contradictory in its message: ‘that which is biologically factual has a social component’. It is precisely in the recognition that the term is anything but contradictory that Paxson’s definition, as well as her place in the subject-object debate, can be found. The term is not contradictory precisely because Paxson recognizes that co-production is at work; that knowledge and power are simultaneously constructed, that discussion of an object or subject is constrained without looking at that simultaneous construction, and that human actors can impact this construction. In the previous section the links between co-production and Paxson’s work were located in her understanding of the Pasteurian tradition with which a majority of US citizens live. Paxson’s story is hardly finished here, however. Co-production can be seen to be at work in the same way with regard to the post-Pasteurian resistance that has come to combat the power relations inherent in the production of knowledge about microbial objects. Paxson notes that new forms of knowledge about cheese production and about microbial actors are springing up; the microbes become “a microbiopolitical object” of a distinctly new kind (Paxson, 2008: 39). No longer do the microbiopolitical objects induce risk and fear, promoting a reliance on an authority for mediating between the human and the threat. Rather, “alternative ways of thinking about what Pasteurians would simply dismiss as irrational risk behavior” produce new power relations, relations that the human actor, previously passively influenced by Pasteurian power relations, now comes to dominate and direct (Paxson, 2008: 36). This is precisely what a new biosociality entails: “a self-fashioning organized around a collective sense of biologized identity” (Paxson, 2008: 37).
No longer does the knowledge produced merely involve power relations to which human actors are subjected. Recognizing that “microorganisms are our ancestors and our allies”, that “eating raw-milk cheese…can…be perfectly safe, even healthy”, puts the power to define and to regulate the human squarely in the human’s hands. This is no slight distinction: the capacity to subvert the “modern risk” that “the FDA” puts forward as “the most relevant framing of certain food choices” is nothing less than an emancipation, a revolution in the battle to define ourselves for ourselves. If co-production teaches us that object-subject distinctions can only be made on the assumption that they exist in an unproblematic way, the appearance of new biosocialities in Paxson’s story informs us further that human actors are an integral part of the very construction of objects and subjects. In their descriptive accounts, Reardon and Paxson show a definitive weakness in the character of any object-subject debate; the object and subject are continually and mutually redefined along an axis shaped by the interactions their human actors have with them. To better understand the place of the human actor, it will be beneficial to turn to the biosociality that might be seen coming alive in the pages of Joseph Dumit’s book.


Joseph Dumit’s book, Picturing Personhood, is a formidable example of the dual appearance of biosocialities. Dumit’s account is similar to Paxson’s outline in that there is a biosociality of people for whom the unchallenged reception of power-knowledge is the order of the day, alongside a growing biosociality constructed around individuals who objectively self-fashion. In Dumit’s case the ‘collective sense of a biologized identity’ comes from a received understanding of the objectivity, or autonomy, of PET brain scans. It is in the power of these brain scans, on Dumit’s account, to create a semiotic link between the object of knowledge and the subject of power relations. Dumit demonstrates that, in legal battles of the kind John Hinckley was involved in, the image of a brain scan comes to dominate any supplemental information and “promise[s]…to tell us about the mind” (Dumit, 2004: 110). This is so much the case that the judge presiding over the court, while allowing the brain scans as evidence, tempered the way they could be seen so as to avoid the outright influence that they might have on the jury (Dumit, 2004: 110). The biosociality that can be seen in this case lies particularly in the collective sense that our identities, our insanity, or our normalcy, are to be found within objects of knowledge; objects whose very mechanical form of production purports to lack subjectivity.



Objective-Self Fashioning and the Subject-Object Debate


Tightly bound up in Joseph Dumit’s process of “objective-self fashioning” is a myriad of terms that merit explanation; indeed, an understanding of exactly what objective-self fashioning is, how it is connected to co-production and biosocialities, and how it avoids the subject-object debate altogether, would be incoherent without this explanation. Objective-self fashioning relies upon the existence of a “biomedical identity” and an “objective self”.


The “objective self”, as Dumit understands it, refers to the “acts that concern our brains and our bodies that we derive from received-facts of science and medicine” (Dumit, 2004: 7). This self, then, is wholly different from our common-sense understanding of the self; this self is a bio-scientific blend of received understandings and reactions to them. Closely aligned with the objective self is the nature of the identities that come bound up with it. As Dumit notes, our objective selves “always pull at issues of normality”; it is in this capacity, in the power of received facts to be translated through images and seemingly innocuous, even objective, statements, that identities can be constructed (Dumit, 2004: 8). The biomedical identity, then, is one in which the objective self has come to dominate a characterization of identity. In Dumit’s case, this regards the process by which the “creative and laborious” production of both PET technology and the resulting images works to shape our objective selves and to inform a biomedical identity about what a normal, depressed, active, or psychotropically afflicted brain would look like. The images produced there have the power to defy critical questioning. In this regard, Dumit points to the semiotic work that the images labor at; they play on our identity-bound assumptions in that “because most of us think that the brain of an insane person should somehow be different from a sane person’s, we hope that there is a way to detect this difference” (Dumit, 2004: 127). Similarly, the sense in which our selves are “objective” is mediated through a number of paths, including the autonomy science is granted and the need for singular accounts of scientific fact in popular media, but also in more subtle ways.
Dumit points particularly to the capacity of mechanical objectivity to lend a great deal of strength to the images that make their way around our world; although “brain images are produced by people, they are coproduced by scientific machines. And it is the machines, especially computers, that leave their mark” (Dumit, 2004: 119)[4]. These are the ways in which knowledge, images, identity, selves, and objectivity are tightly bound together in a grand mangle of practice.


From this description, a better understanding of objective-self fashioning emerges; an understanding that implores us to acknowledge, take command of, and direct our selves: to fashion, rather than be fashioned by, a notion of self. This is the prescriptive aspect of Dumit’s account, and one that we can learn from. Dumit writes:


Objective self-fashioning is thus an acknowledgement of local mutations in categories of people highlighting the active and continual process of self-definition and self-participation in that process. Objective self-fashioning is how we take facts about ourselves…that we have read, heard, or otherwise encountered in the world and incorporate them into our lives.

(Dumit, 2004: 164)


This, however, is not the end of the story; objective self-fashioning is not about a subject-oriented account of scientific knowledge, or knowledge in general. It is precisely the recognition of the role that both the object and the subject have in any practice of knowledge production. It points “not just to the social construction of mental illness” or “a simple story of the gradual emergence of the right view of depression, schizophrenia, and PET scanning”; objective self-fashioning is the “hope that responsibility might be multiplied – that accountability might adhere to experts, mediators, and laypersons alike for their participation in objective self-fashioning” (Dumit, 2004: 168). Here, then, is the crucial distinction: Dumit finds a way out of the subject-object debate. It is not a rejection of either the subject or the object of knowledge production, nor a dismissal of the debate. It is a recognition of what Donna Haraway calls “situated knowledges”:


The object of knowledge [must] be pictured as an actor and agent, not as a screen or a ground or a resource, never finally as slave to the master that closes off the dialectic in his unique agency and his authorship of “objective knowledge”


(Haraway, 1988: 592)


Thinking Beyond


With a line of inquiry that starts at the understanding that knowledge and power are co-produced, proceeds through the construction of biosocialities based around this power-knowledge, and concludes with a tool for dealing with and engaging in the construction of biosocialities, we can see at least two ways our three epistemological accounts are brought together. First, it is precisely because knowledge and power are co-produced that biosocialities can develop in the first place; for if the two were divorced, one would never find groups of people, or individuals, communing around a shared understanding of identity. And it is with objective self-fashioning that we find a prescriptive recommendation to seize control of this process; to not leave things to the “master…and his authorship of objective knowledge” (Haraway, 1988: 592). Second, all three accounts serve to engage the same subject-object problem in the same way. It is not by ignorance or preference that the debate is settled, but by situating knowledge between the two. In this way these modern science studies scholars reposition themselves in the age-old debate, and in this way they are united.


One final point worth relating is the extension beyond the biological realm that these approaches might have. It is true that Reardon, Paxson, and Dumit are dealing with exclusively (for lack of a better term) biological practices. However, keeping in mind the way both Dumit and Paxson rely on a co-productive force to reinforce their cases, there is nothing strictly biological in an account of co-production[5]. This is to say that the implications might be stretched further, perhaps to engage any aspect of human activity in which identity and understandings of the self are furnished out of the seemingly objective semiotic devices we encounter. One example that cannot be fully dealt with here is the nature of technologies and their impact on our identities. The recent advent of Radio Frequency Identification (RFID) has led to the development of what could only be described as new biosocialities[6]. The capacity of RFIDs to monitor people’s movements and spending habits, and the overt use of the tags for inclusion in or exclusion from physical spaces, work to redefine where the human can go and how suspicious their activity is, and raise questions over the control of this technology. It is in this way that RFIDs, as a technology, engage questions of identity, and it is in this way that a co-productive force is at work. Already there are groups of individuals not willing to rest content in subjecting their bodies to technological surveillance, and who have begun to fight back. Cheap scanners and even cheaper Arduino microcontrollers are the identity-redefining forces around which new biosocialities are appearing, and with them practices of objective self-fashioning that Joe Dumit would no doubt be pleased to see[7]. It is in this way, then, that these new modes of inquiry, which defy the strictures of distasteful subject-object categories, might come to serve the responsibilities implicit in defining humans along biological and technological lines (and perhaps many more).










Dumit, J. (2004) Picturing Personhood: Brain Scans and Biomedical Identity. New Jersey: Princeton University Press


Haraway, D. (1988) “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective”. Feminist Studies: 14(3), pp. 575-599.


Reardon, J. (2005) Race to the Finish: Identity and Governance in an Age of Genomics. New Jersey: Princeton University Press


Paxson, H. (2008) “Post-Pasteurian Cultures: The Microbiopolitics of Raw-Milk Cheese in the United States”. Cultural Anthropology: 23(1), pp. 15-47.


[1] The reader should be reminded of long-standing debates starting first with positivist thinkers like the famed Vienna Circle. Rudolf Carnap, a principal member of the Vienna Circle, represented the strictest defense of the object in subject-object debates (Der Logische Aufbau der Welt, 1928, Leipzig, represents a phenomenological exploration of science along these lines). Theorists like David Bloor later explored the capacity of the subject to structure the reception of an object of study in his “strong program” (see Bloor, 1983, 91, 97).

[2] Clarification is needed here. This comment should not be taken to mean that Paxson uses the term “co-production” openly; rather, the co-production that Reardon has outlined can be seen to be present in the case Paxson is interested in. Similarly the groups that “build up” will be discussed in more detail in the next section that addresses biosociality directly.

[3] Regulation of the body cannot be fully developed here, but will be discussed more in the next section. The most overt form of regulation, or the exercise of power, can be found in the pharmaceutical industry’s ability to capitalize on the “germophobes” who need antibacterial products, and a whole host of other practices/objects, to mediate the risk they perceive themselves to be living with.

[4] The reader would be informed further on the notion of objectivity, and the way in which Dumit’s conception of the power of machines is bound up in a remarkable dual historicity concerning the use of the term “objectivity”, by Daston, L. and Galison, P. (2007) Objectivity. New York: Zone Books.

[5] This is not meant to imply that Dumit or Paxson express this reliance, but rather that, as the paper has attempted to show, the two are bound up in an acceptance and understanding of co-production.

[6] For a general, and non-scholarly, explanation of the RFID technologies as well as some of its potential applications see, accessed May 27, 2009. The reference list at the end of the page should be observed carefully.

[7] The reader would be informed further on the rise of this “biosociality” by referencing the websites of community members. holds information about just one of the many technological sites for the redefinition of identity. Projects that engage these types of redefinition and specifically the power relations that they upset can be admired at Both websites are active as of Wednesday, May 27, 2009.


I want to address an issue that I found to be relatively prominent in both readings for this week: reflexivity. My first introduction to the concept of reflexive arguments was, curiously enough, in Harry Collins’ outline of his Empirical Program of Relativism (EPOR). The curiosity was to be found in his denial of the importance of reflexive arguments. For scholars like Collins, reflexivity was largely a waste of time, as it put you in a position of perpetual circular reasoning. If you are a constructivist, for example, and you argue that some artifact of science has been “socially constructed” (I will leave this contentious term unqualified for the time being), then you would be irresponsible if you were not to consider the construction of your social constructivism. Another failing of the reflexive position, according to Collins, is that it forces the sociologist to see scientists in a different light than they see the world. After all, it is scientists themselves who, according to a number of science studies “camps”, spend their time defending their theories in a Kuhnian tradition. For a scientist, victory in defending a theory would be hard to find if a majority of their time was spent questioning their theories perpetually. I do not want to imply that Collins, or Bloor (who advocated the use of reflexive scholarship in his strong program), are correct in their assumptions about reflexivity, but I would like to stress my own appreciation for that type of scholarship.


Karen Barad and Donna Haraway, in my opinion, both present largely reflexive arguments. Haraway, in advocating her feminist objectivity, suggests that it is a failing of relativistic and totalizing pictures of scientific practice alike that the subject is given a particular autonomy. In the case of relativists, Haraway points to their “being nowhere while claiming to be everywhere equally” (Haraway, 584). Her concern is that, like the scientists whom relativists often criticize, there is an advocacy of some imaginary perspective from which the conclusions drawn there can be justified. But what perspective is this? What relativist can truly claim objective critique while remaining truly relativist? To me, Haraway is making a reflexive argument. That is to say, she is turning back on relativists and scientific empiricists their own arguments in order to generate a clear picture of her objectivity. In light of the reflexive failures of both camps, Haraway suggests that it is to “partial, locatable, critical knowledges sustaining the possibility of webs of connections” that science studies should look (Haraway, 584).


Similarly, Karen Barad advances a reflexive argument with regard to traditional treatments of realism, Haraway’s included. Barad explores the category of reality in hopes of giving a foundation to an ontology generated through scientific practice. Where does one find reality? Is it in the manipulation of physical entities, as Hacking has suggested? Or perhaps in the reality scientists extol their theories for having? Or is reality just an ideal that we have created? Barad, like Haraway, finds a middle ground by making a reflexive argument. Indeed, Barad’s agential realism “entails a conscious, critical reflexivity” (Barad, 186). Agential realism asks of itself where the place of culture, as well as nature, is, where objects meet subjects, and how reality is constructed out of “shifting and destabilizing boundaries” (Barad, 188).


I would like to suggest that, contrary to the well-formulated opinions of some of the more eminent science studies scholars, reflexivity has the capacity to enrich our understanding of science. To continually ask oneself the questions that one would pose to the object of critique is to open the door to those aspects of objectivity, reality, and science that would otherwise stay hidden. I will be working hard to ask of myself the types of questions that helped Barad and Haraway develop their theories while working on my Science@York project. One parallel I have already drawn is in my treatment of my laboratory’s goals. I had, previous to these readings, spent a good deal of time developing what I felt the motivating factors in the lab might be, without asking about my own motivations in analysis. What has come out of this practice is a capacity to see my own work as being structured heavily by our course work thus far (another testament to the efficacy of reflexive practices is this very change of heart!). The picture I am able to draw now is one that is symmetrical with regard to its treatment of the object and subject of analysis, and this is truly a gift. I would be interested to know if anyone else has taken any methodological turns in their work in this way.

The readings for this week introduce us to our first taste of gender and sexuality analysis (to be fair, Traweek’s book also deals with some of the gendered aspects of her work and of high energy physics). However, I think that they offer us much more than simple accounts of the production of social or natural categories. I feel compelled to model much of my own analysis after some of the more formidable arguments made in contemporary feminist analysis. One of the strongest themes found in quality feminist writing is one we might call a kind of “middle path” of analysis. STS finds itself in a relatively unique position – one of compromise. While most disciplines come down hard and fast on exactly what counts as good or bad work, STS occupies a kind of middle ground insofar as competing views are not to be chastised, nor are they to be ignored. Rather, much of the strength of contemporary STS work is found in its ability to incorporate and synthesize conceptual tools/methods.


In the first of our two readings, by Emily Martin, we find a criticism of the one-sided regime that has put the sperm to work as an aggressor and the egg as its submissive counterpart in the process of conception.


One might, at first glance, feel compelled to write off the overt attacks on the sperm, which has enjoyed, as has its vessel, a level of superiority in the dichotomous relationship between the sexes (admittedly continuing on today). Why would one be moved toward this end? The answer: far too often we can find examples of feminist writing (in science studies literature) that, commendably, seeks to strike down horribly one-sided metaphors in scientific texts while at the same time ignoring counter-evidence or offering equally one-sided replacements for those metaphors. One example would be The Science Question in Feminism by Sandra Harding. While I won’t speak point for point about the entire book, there is an emphasis on the replacement of masculine metaphors with feminine ones in scientific literature. This type of solution seems to bring us right back to the initial problem of hard and fast definitions, of one-sided views of sexuality and gender, and of the possible social implications of those definitions and views. Emily Martin handles the issue with more skill, entreating her readers to look toward a “cybernetic model – with its feedback loops, flexible adaptation to change, coordination of parts within a whole…” (p. 499, Martin). This model, it would appear, is aimed at avoiding similarly dangerous terms and metaphors that would put the egg in the aggressive position (p. 498, Martin). Martin is obviously concerned not only with the problems that current literature has but, unlike Harding, finds overtly feminine metaphors to have the same potential for damaging views about gender and sexuality. It is the middle ground Martin is looking for. The cybernetic model seems to imply that “the egg and the sperm do interact on more mutual terms” (p. 499, Martin). This is, at first glance, quite neutral and presumably more digestible for men and women, scientist and non-scientist alike.
Martin quickly points out, though, that we should be aware of the implications a cybernetic model might have; this is the move that sets her apart. Where Harding is quick to suggest replacement metaphors that are one-sided, Martin is intrigued but cautious about the use of even neutral language.


Similarly, Anne Fausto-Sterling points us toward increasingly varied understandings of the categories that Martin was dealing with. Where scientific, social, and cultural practices seem to seek a definitive line between male and female, Fausto-Sterling engages us with stories that provoke questions of necessity in the telling cases of Dr. Young’s intersex patients. Must a sex be decided? On what grounds? Need there be grounds? What do patients use to evaluate the advantages of one sex or another (or the need to choose between one or the other in the first place)? One of the strongest themes that comes out of Fausto-Sterling’s discussion, be it of Maria Patino in relatively contemporary times, of existing dualisms in human thought, or of the scientific category of intersexual individuals, is one of neutrality. Man/woman, heterosexual/homosexual, reason/nature, civilized/primitive; Fausto-Sterling wonders, as should we all, what interdependence and middle ground there is to be found if we are only to look beyond dualisms and dichotomous relationships, beyond hard and fast definitions.


I think it would be of great benefit if not just feminist writers, but all academics, worked from a neutral vantage point, seeking not a single right or wrong answer, but observing how a network of definitions is formed and interrelated, in the hopes of avoiding the pitfalls of some of our more loaded language.


I want to comment on a common theme found in both Myers’s and Gusterson’s readings for this week: making scientists. There is something very important to be said about the way the laboratory, or practices across a discipline, serve to shape scientists, and it is something that is quite unique to material views of scientific practice. To me, the anthropological turn to the physical movements and environments that scientists work in is crucial to this understanding. We have a unique opportunity to investigate just these types of influences at York, and this week’s readings can help structure those investigations. When I think of the difference between typical explanations of learning or practice (trial and error) and those that Myers and Gusterson demonstrate in science, I start to think of all that we might have missed over the years in those subtle and unspoken recesses where learning actually happens in science.


I find a good way to conceptualize this is through the work of Michel Foucault. Foucault, in Surveiller et punir: Naissance de la prison (Discipline and Punish) and in various other works, outlined the typical notion of power as a type of hard power: power that is generated through punishment. There is an overt relationship in hard-power relations, for Foucault, between actions and their consequences; the same could be said of the more typical explanations of the relations that “make up” our scientists. Sociologies of science seem quick to brand scientists with these relations; scientist X acted in such a way due to pressure from a political body, for instance.


But there seems a more intricate way to sketch out an image of how scientists learn to act and think: through what Foucault would have termed soft, or gentle, power relations (though he would say it, in French, with much more rhetorical pizzazz). In the soft power relations he observed, Foucault attempted to locate where real learning about relations and power comes from: subtle and unspoken places. Imagine a child that grows up in a house with a father who ritualistically goes to work every morning, after a breakfast made by a mother who stays home and tends to the house…every morning. There is no explicit mention to the child that men do the work and women take care of the house, but in this mundane and repetitive act Foucault would locate the seeds of a burgeoning power relationship (between men and women) in the child’s mind.


Anthropological methods take this notion one step further in science and, with works by authors like Myers, Gusterson, Latour, Kohler, Traweek, and Rader (see references below), illustrate the way that what scientists physically do, and the physical objects with which they do things, can have an immense influence on their learning process and in turn their apparently commonplace actions. Soft power relations can be seen in several of the works we have covered. Traweek has outlined some of the moral economy that makes up the high energy physics community; the way in which novices are taught to act, in quite unscientific ways, if they are to fit into the wider discipline of their peers. Landecker explored biologists’ interactions with cells and how adding a temporal aspect to their material culture structures their work. Myers has done a beautiful job of articulating the ways in which an entire moral economy has been constructed around the physical interactions crystallographers have with their models as learning aids, instruments, practices, and ideologies. These approaches that point us to those places that are overlooked in science are, for me, some of the most interesting, and I think it is in locations of soft-power building that science studies will find its most fruitful intellectual challenges.


This type of analysis is especially relevant to the work we get to do in our various labs. I would be interested to see if anyone has any stories they would like to relate about the way in which physical actions, objects, or any type of practice “made up” any of the scientists in their lab. I am only involved in publication analysis in my group, so I don’t get the benefit of looking for these kinds of dynamics…I would love to chat with anyone who has.




I found myself pulled between negative and positive interpretations of either material technologies, or observation in general while engaged in our two readings. I thought that by drawing out some of my mental frustrations, some of you might have some insight to share:


1. Negative conceptualization in scientific observation:


A high-minded philosopher might, when witnessing technologically mediated observations, take the chance to attack the scientist’s rational footing by discrediting their reports of observation. He might suggest that Boyle is merely observing the results of a machine designed in such a way that we may only see nature in one light; that our material technologies embody theories, which shape our observations. Does this present a problem for the claims of scientists?


Similarly, a sociologist might ask of the scientist: ‘is your objectivity not compromised when you come to use your observational technologies as tools of social practice?’ Is it a problem for the objective and rational stance of science to have to compete for resources, and to use potentially theory-laden technologies to assist in that competition? Are the high energy physicists allowing the direction of research to be guided by those who can be convinced that a fruitful detector is in the physicist’s possession?


These material technologies, Boyle’s air pump or various particle detectors, serve to undermine the empirical ground that science stands on and the asocial integrity of scientific progress.


2. Positive conceptualizations in scientific observation:


The pragmatist might respond to the philosopher’s question of empirical certainty with a question of alternatives. If, in the tradition of Descartes, we can say that our senses can be deceived, and if we know, from our practice of science thus far, that the senses have a tremendously limited capacity, is technologically mediated observation not our “best bet”? It seems that technologies are the only way to view that which is unseen. Machine made or not, matters of fact can either be observed with the philosopher’s complaints unheeded, or we can take seriously those complaints, forego observation on philosophical grounds, and retreat into solipsism. What would there be to do?


A more contemporary science studies scholar might challenge the claims of our negative sociologist on linguistic or logical grounds. Is there merely one meaning of objectivity that stands to be undermined? One might argue that one merely sacrifices disinterestedness for mechanical objectivity (the elimination of human sensory failings), and that this sacrifice is necessary in the current scientific climate. If a scientist does not fight for resources with passion, will their research projects not simply go unheard and unimplemented? One might also question the assumptions of the negative sociologist. Why should science be asocial? If science is a human enterprise, and humans are (largely) social creatures, should not science be understood in realistic terms, as an inherently social activity?


Material technologies, like Boyle’s air pump or various particle detectors, present the only means we have to approach many of nature’s phenomena. Progress necessitates the battle for resources and the use of material technologies in that battle.





Shapin, S., Schaffer, S. (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. New Jersey: Princeton University Press


Traweek, S. (1988).  Beamtimes and Lifetimes: The World of High Energy Physicists. Cambridge: Harvard University Press.


In the extraordinary and accomplished life of Captain James Cook, it seems only fitting that his death be surrounded equally by grandeur. Unfortunately, the wonder of Cook’s death comes not from a place of romantic heroism, or accomplishment, but from confusion. The reasons for his death have been the subject of numerous popular and scholarly works, and the debate still stands as a site of contention. This response paper will not presume to set straight the record of Cook’s death, nor will it shed any new light on the matter. Its goal will be to illustrate, from this scholar’s point of view, the most noteworthy events that may have contributed to the death of James Cook. There are at least three major events that contributed to the conditions surrounding Cook’s death: the initial contact with the “Sandwich Islands” in 1778, the return to the Islands in 1779 after an attempt at the Bering Strait, and the brief return after the HMS Resolution’s foremast suffered damage in the same year. Each event holds at least some interest, united by a recurring theme, regarding the death of Captain James Cook, and each will be treated briefly, followed by a personal reflection on the entire episode.


The 1776 voyage, the third of Captain Cook’s major expeditions, had as its ostensible purpose the return of a Tahitian national to his home. The interested parties, principally those that financially backed the voyage, were more heavily invested in discovering a Northwest Passage. Moving north from the Polynesian islands, and passing what he named the “Sandwich Islands”, Cook attempted passage through the Bering Strait but was unsuccessful. This initial passage, by the Hawaiian Islands and northward in 1778, constitutes the first contact that Cook, and indeed any European, had made with that archipelago. The relatively uneventful episode does carry with it one thing of importance for anyone considering the events that were to follow in 1779: the very naming of the islands. It is important to note that this period in European history is marked by its colonial temperament. To describe Cook’s naming of the island group as xenocentric at worst, and culturally ignorant at best, is not to denigrate his character, but rather to transcribe the actions of the day into terms that resonate more clearly with us now. The presumption is that a name did not exist for those islands, or that the inhabitants were not of sufficient class to have their culture recorded in European maps and records. At a time when other cultures and peoples were a relatively new phenomenon and the “Sandwich Islands” were completely new to Europe, it is hardly appropriate to charge Cook or his crew with something carrying such a modern inflection as “racism”. The point to make is one of naivety; colonialism could not predict the problems it would facilitate, nor are they entirely visible without hindsight. Naive is a word that at once inverts the usual binary between European and “simple” peoples, avoids the anachronism inherent in modern understandings of “racism”, and allows for the ubiquity of colonial attitudes that “ignorance” might deny.
Naivety, on this understanding, finds its way into each encounter Cook had with the indigenous population, and factored heavily into his death.


The second time Cook encountered the Hawaiian Islands was in 1779, and this visit is recorded in much more detail. Perhaps the most important thing to point out is the warm welcome, and even deification, that Cook received from the peoples of the island of Hawaii. Gifts and signs of respect were exchanged between the European travelers and the indigenous peoples, and it is at this time that the islanders’ adoration of metal goods was first seen, a form of trade familiar to Cook from previous voyages. Naivety, on the part of Cook and his shipmates, might be said to have manifested in the expectations of the time. The question of why such a warm welcome occurred, for instance, seems not to have been given much thought. Indeed, much of the narrative surrounding James Cook and his interaction with native peoples seems to cast their interest in things like metal implements in a childishly amused way. It was perhaps the mode of colonial Europe to assume its own eminence at the time, and so paying homage, and being amazed by trinkets, may have had an almost expected character to it. In any event, the crew did enjoy, by all accounts, a pleasant stay of roughly a month. The closeness of Cook to the King and his family was indicative of the lack of hostility between the two cultures; most narratives treat the minor thefts by indigenous people as the only deviation from an otherwise happy encounter (it would be interesting to know if there were instances of Europeans acting similarly). Members of the crews found themselves conversing openly and in a friendly manner with numerous locals, from children and holy men to the royal family themselves.


The third “encounter” was brought about by damage suffered to the Resolution’s foremast during a storm while leaving the islands. It would appear, and at this point rightly so, that the Europeans held no hostility toward the native population, as they worked to save some of the indigenous population from the same storm that damaged their ship. Similarly, Cook expected no hostility, given their excellent treatment in the past. However, during the return, repair, and stay at the islands, Cook and his crew would experience a very different reception than they had during the last visit. The local interest in metal goods was, perhaps, the proverbial straw in this story. After the theft of some coal pokers and metal instruments, the Europeans pursued some of the native population off the European ships and, with the aid of one of the indigenous chiefs and priests, worked to recover the stolen items. There is some discrepancy between the hostile actions of some native islanders and the seemingly helpful response of others. The attempt to attack the Europeans with stones, for instance, was put to an end by one of the local chiefs, and the stolen items were returned. Again there is a discernible measure of naivety to be found in the behavior of the Europeans, especially as regards the actions of the locals. There seems to have been little apprehension regarding the character of the previously described violence. Attacks with stones, oars, and guns, by the locals and Europeans alike, seem to exceed by no small degree the kind of violence seen between the cultures previously. This is not to say that, after witnessing the attacks, Cook and the European travelers should have gone on the offensive, but rather that they might have kept the most eminent among them (Cook) on their ships before engaging in what Cook seems to have assumed would be recognized as a diplomatic mission to the King.
When a number of the local population stole a small boat from the Discovery, a theft only apparent the morning after, Cook and a number of officers went ashore. Cook called on the King, hoping to secure a diplomatic end to what he assumed would be a simple matter of discussion between the two of them. This is, unfortunately, where European naivety would have fatal consequences. In calling on the King and bringing him to the beach to, presumably, go aboard one of the European ships, Cook must have assumed that he did not seem a threat to the people of the islands. It was while Cook was on shore that David Samwell, the Discovery’s surgeon, recorded that the native community appeared to be arming themselves in response to the death of a chief at the hands of the Discovery’s crew. The naivety comes, once again, from a xenocentric attitude; how would the Europeans have responded had one of their high-ranking crew been killed, their king (Cook) been escorted off his ship and to land, and violence characterized the most recent interactions between the two cultures? As tensions mounted, Cook and the crew on land with him were attacked; few escaped with their lives, and Cook was not among them.


The naivety and xenocentric attitude, first manifest in the colonial practice of claiming otherwise “owned” land, seen again in the response to Hawaiian behavior, and seen finally in the ignorance of how events were received in the minds of the islanders, seems, in this author’s opinion, to have been the end of Cook. This is not to say that in all encounters with foreign peoples we should be on the offensive, nor even that Cook had good reason to distrust the people of the islands; the point is that, in a month’s stay, with a ubiquitous attitude of xenocentric and colonial superiority, Cook failed to recognize that, literally and figuratively, there was an ocean of difference between his people and those he was visiting. The story of Cook’s death unfortunately gets painted with native savagery at worst, or confusion at best; this seems a result of the inability to recognize both our distance from the culture Cook was immersed in, and the distance between the cultures of the Europeans and the Hawaiians. There is a notorious, and often refuted, thesis about the death of Cook which argues that the original contact with the Hawaiians came at the time, and in the physical form, of a peaceful festival in honor of a particular Polynesian god. The fatal conflict, on this account, coincided with a period after the aforementioned festival, a time of war. The details of this thesis are not particularly important, nor is their accuracy. What is important is that the thesis stands as an explanation of the events that is not unilaterally European. The point is that, just as beliefs about the sophistication, dignity, and heroism of European exploration are often evoked, we must remember that these traits are not European alone, nor are they foreign to the so-called “simple” peoples of the time.
This response paper suggests that, concerning the matter of Captain James Cook’s death, naivety on the part of Cook and his crew was ultimately responsible for the untimely end of a great and accomplished explorer. This explanation accounts for the seemingly ambivalent actions of the native peoples by acknowledging the timidity and tension that must have characterized interactions between two vastly different cultures. Similarly, this type of explanation works to infuse a measure of equality between European and Hawaiian beliefs during the 18th century. Stories about the confusing actions of natives quickly become stories of patriotic defense of a royal family when a symmetrical, perhaps structuralist, treatment of belief and action is employed in the analysis of 18th-century exploratory culture.





This paper is a response to the following reading for STS/3725:
Professor Ruthanna Dyer’s lecture on James Cook for STS/3725 at York University 2009


The following articles/chapters/readings for STS/3725 2009:
– Kaye, I. (1969). Captain James Cook and the Royal Society. Notes and Records of the Royal Society of London, vol. 24, no. 1, pp. 7-18.

– Beaglehole, J. C. (1956). On the Character of Captain James Cook. The Geographical Journal, vol. 122, no. 4, pp. 417-429.

– Cook, J. (1955). The Journals, pp. 49-53 and 337-348. Penguin Classics.

– Kennedy, M. J. (1996). Biotechnology Brought to New Zealand by Captain James Cook aboard the Endeavour, Resolution, Adventure and Discovery. Australasian Biotechnology, vol. 6, no. 3, pp. 156-160.

– Thomas, N. (2003). The Extraordinary Voyages of Captain Cook. Walker and Co., pp. xv-xxxvii.


General Knowledge/casual Wikipedia reading on Polynesian Gods, Marshall Sahlins, and James Cook


Terence Hawkes’s discussion of Giambattista Vico in his book “Structuralism and Semiotics”


In the modern university, traditional oral culture finds itself an object entangled in a mess of bureaucratic commodification and capitalistic trade. William Clark, in his book “Academic Charisma and the Origins of the Research University”, has worked hard to show the slow progression from traditional to modern conceptions of the academic world and the academic persona. Based on a reading of Max Weber’s three forms of authority, Clark explores the bureaucratization and commodification of academia, two forces he describes as “the twin engines of the rationalization and disenchantment of the world” (Clark, 3). The characterization Clark gives of the early academic world is one defined by nepotism: a place where the judicial and ecclesiastical disciplines held the greatest sway, and where the oral dominated the written word. Chapter by chapter, Clark uses curious, perhaps mundane, pieces of material culture from universities to explore the development of the research university. Lecture catalogues, paintings, visitation tables, and library catalogues all become interesting sites of investigation in Clark’s adept hands. Modern universities, on Clark’s understanding, are those defined by the triumph of “the visible and the rational” over the “oral and the traditional” (Clark, 3). Crucial to this characterization, then, is the place of oral culture. In the modern, market-driven university, managed by the cold quantifying ministries of the German state, oral culture seems to have been replaced by a focus on the written word or number. Disputations were replaced by dissertations, the examination lost its oral components, dossiers on professors were collected, and grading systems were implemented; academics and their authority “would be manufactured by publications and written expert or peer review, instead of by old-fashioned academic disputational oral arts, unsubstantiated rumors, and provincial gossip” (Clark, 29).
Academic oral culture, according to Clark, was not done away with entirely however. This paper will explore the sites at which Clark localizes the preservation of traditional oral culture in modern bureaucratic universities, and will then look to the material culture of York University to determine the influence of the same oral heritage. Traditional oral culture in the modern research university will be seen to manifest in the recasting of the academic voice through ministerial, or managerial, tools of registration, as well as in the hierarchical structure of academic authority.


The visitation and visualizing the oral


The practice of the visitation has its roots in the episcopal right of bishops to send visitors to any body under their jurisdiction (Clark, 340-41). Early visitations, on Clark’s account, subsequently found their way into the judicio-ecclesiastic world of the early modern university. It is in the early 16th century that Clark localizes a number of practices centered on organization. To manage a university, and to manage academics, it was necessary to keep paperwork, “the essence of modern ministerial power and knowledge” (Clark, 50). A 1556 decree in Vienna, for instance, enlisted two paid individuals to keep notes on the daily lectures and practices of professors, and even imposed fines on professors that failed to attend lecture (Clark, 50). Management of professors was obtained through these kinds of centralized and rationalized control, and the visitation was no exception. Clark explores the changes in visitation practices from the questionnaires used by a ministry in Wolfenbuttel in 1597, to a visitor’s journal of 1784, and finally to the Burgsdorf table of 1789. The 1597 questionnaire “allowed no distinction of public and private” (Clark, 351). This point is important, as the eventual separation of these two lives defines our current, merit-based system of managerial control. Indeed, even the manner in which the questionnaire was carried out was thoroughly early modern: though the questionnaire “translated academic voices into legible traces”, its “protocols…[showed] rank and tempo, moving from higher to lower in academic precedence” (Clark, 351). Vacchieri’s visitation journal of the 1780s, on the other hand, worked to bolster a bureaucratic mindset rather than a nepotistic or traditional one. The narrative of the journal, on Clark’s reading, follows a kind of romance that “[constituted] and [preserved]…a bureaucratic persona” (Clark, 355).
Nowhere is the personal “I” or “we” found in Vacchieri’s journal as they had been in the early Wolfenbuttel questionnaires; only “the abstract, third person singular….one” is found, “[instantiating] the commission as impersonal [and] disembodied” (Clark, 357). This move toward an impersonal, bureaucratic form of visitation culminates in Clark’s exploration of the 1789 Burgsdorf table in all its modern glory. With the Burgsdorf table, the German ministries had in their hands a tool capable of “[transforming] academic babble instead of suppressing it”, as the questionnaire had. The table instead followed the trend of Vacchieri’s journal, giving “gossip, rumor, and opinion the rational guise of impersonal evaluation” (Clark, 362). Names and numbers, times and places are meticulously recorded; the qualitative becomes the quantitative, and centralized control and management is realized. This is the way in which the academic voice is recast through the ministerial tools of visitation and record, and it is a major site for the preservation of traditional oral culture. This paper now turns to several pieces of material culture from York University to assess the degree to which the oral is represented, suppressed, or translated in the most modern of research institutes.


The course evaluation as a visitation


The course evaluation at York University is a small form delivered under close guard by a professor, to his or her students, at the culmination of a course. The professor instructs students to report honestly and freely, without any reservation or reluctance that might be pressed upon any student that wishes to denounce their masters. Professors leave the room and students are left to cackle wildly about professorial quirks and to turn their opinions into legible and manageable information. After students have completed their form, one student is often asked to deliver the class’s responses to the professor’s department office, ensuring that the professor is not able to compromise the holy sheets of bureaucratic dogma. The entire spectacle of the course evaluation is shrouded by a kind of secrecy that is indicative of the authority embodied in the masters of oral instruction, the professor. The theme of secrecy will be revisited in a later section as the implied roles of professors and students in the oral and written hierarchy are more fully explored.


Course evaluations function as a kind of meta-visitation in the modern university; they aim to collect data about professors and courses in order to manage their place in the academic market. Following the references section in this paper are four evaluations currently used at York University (as of the 2009/2010 year), indexed as Eval 1 through 4. The general arts course evaluation makes its intentions clear; it is intended to “provide feedback for the instructor and the Chair of the department/division” (Eval 1, page 1). The document reassures its handler that comments and scores will only be given to instructors after they have submitted grades for the students, presumably to eliminate bias in grading (Eval 1, page 1). The General Arts Evaluation (Eval 1, page 2), the Lecture/Tutorial evaluation (Eval 2, page 1), the Lecture/Lab evaluation (Eval 3, page 1), and the Division of Natural Sciences Course Evaluation (Eval 4, page 1) all similarly state their purpose, which is the same in all cases. Collectively, the evaluations state that they are intended for use by:


“1) Instructor(s) and Chair of the department/division for feedback on the course curriculum and its presentation; 2) Administrators and Committees for the purposes of Tenure and Promotion and Curriculum Planning; or, 3) Students for aid in course selection.” (Eval 1, Eval 2, Eval 3, Eval 4).


Course evaluations demonstrate the practice of centralized control; the bishop has become the Chair of the department, the administrator, or the committee, and management of now rationalized and bureaucratized jurisdictions is possible. The course evaluation makes management possible, as the Burgsdorf table did, by transforming unruly aspects of oral academic life into a manageable format, namely, quantified or inscribed data. Indeed, the greatest portion of each evaluation concerns turning an opinion about some aspect of one’s professor or course into a score out of five. Evals 2 and 3 offer words in place of numbers, but the message is the same; whatever opinion one holds about one’s academy can be adequately represented as a scalar value, or perhaps more adequately managed in such a form. Messy opinions and qualitative evaluations of the oratory practice of professors all become a part of the academic’s exchange value in the wider collegiate market when they are inscribed on paper and made visible. Particularly interesting is the information requested of the student. In all cases the evaluations focus on evaluating oral practice. Eval 4, the Division of Natural Sciences Course Evaluation, asks for a scalar rating regarding “Textbooks and Handouts” and “Course Content”; this is the only place where students are asked to evaluate any form of written culture (Eval 4, page 1). All of the Faculty of Arts evaluations (1-3) ask predominantly about the lecturer and their oral capacities. One evaluation, part 1 of Eval 1, offers a place for “written comments” regarding general thoughts about the course; students are asked to communicate “other observations or comments” about “the course, instructor/lab leader, teaching assistant or marker/grader” (Eval 1, page 1).
Evaluation of the written word is limited either to being grouped into the general comments section alongside observations about the lecturer, or to a scalar value that distills the subjective into the objective. Student academics, as the modern ‘ministries’ understand them, are practitioners of the written and evaluators of the oral. This practical/evaluative dichotomy has interesting implications for the academy and will be discussed further once the role of the professorial academic is more adequately addressed.


The course syllabus and academic roles


The course syllabus is another interesting site of information for the scholar interested in the place of written and oral practice in modern universities. Much as the above discussion on course evaluations placed the mantle of oral evaluator, and to a lesser degree scribal practitioner, on the student’s shoulders, the course syllabus will show a similar dynamic at work in the training and practice of students and professors, respectively. Included in the references section are four syllabi from four courses (years 1 through 4) within the same faculty (FSE) at York University, labeled “Syl” 1 through 4. This paper is interested in the course requirements and the structure of lectures and seminars as they relate to the prevalence of oral and written practices.


Syl 1, from a first year course entitled “Scientific Change”, presents a fairly simple marking scheme. All grades are to be obtained from “In-class written assignments”, a “Mid-term Exam”, and a “Final Exam” (Syl 1, page 1). There is no mention of any oral component, and the lecture itself is delivered in a straightforward manner; professors speak while students write. This is not to downplay the role of active discussion in the class, which had its place, but to illustrate the official position of departments and professors regarding the work of first year students. Syl 2 offers the second year student their first taste of official oral practice. In the “Grading Scheme” section, the professor allots 15 percent of the final grade, split among “Attendance, Participation, and Pop Quizzes” (Syl 2, page 1). Oral components of the course are officially recognized in a second year course’s grading, and under the “Participation Grade” heading the professor notes there will be one hour, out of three, allotted to a “lecture/seminar” period (Syl 2, page 1). Second year, then, allows the student, in an official capacity, the chance to engage the oral academic culture in both assignments and class structure. The trend continues through third year as oral components become increasingly prevalent. Syl 3 includes in its grading scheme two oral presentations that together make up 10 percent of the student’s final grade (Syl 3, page 1). Also of interest is the nature of the “Science @ York” assignment; interviews conducted by students are a potential component of the assignment and offer the student a chance to vocalize (Syl 3, page 1). The chronological trend toward student oral practice continues in Syl 4, a fourth year seminar which is, by its very structure, oral.
In a seminar more importance is placed on the student’s voice; the professor makes sure to clarify the distinction as he notes, “This is a seminar, not a lecture course, and full participation is mandatory in order to pass this course” (Syl 4, page 1). Syl 4 also has a greater emphasis placed on the grading of oral material. A full 35 percent of the final grade is available through “Participation (including seminar presentations and in class assignments)”; speaking and presenting orally have become important components of the mature undergraduate’s academic experience (Syl 4, page 1).


The first year student adheres to the oral evaluator/scribal practitioner dichotomy developed in the above discussion of course evaluations. Slowly there is an inversion of this role as the student gets closer to the charismatic authority of the professor. The commodification of the professor begins with this inversion; as students become more proficient and move through the hierarchy of academia, they come closer to becoming oral practitioners and scribal evaluators, the role of the fully developed professorial commodity. This characterization is not meant to downplay the publishing duties that weigh so heavily on professional academics, but rather to illustrate the way in which commodification, oral and written translation, and authority are entangled in the academic world. With the limited material available to students, I am in no position to speak on the potential for oral culture to be, as I suspect it might be, thoroughly entrenched in the written practice of publication.


Authority, secrecy, and the oral tradition


Analysis of course evaluations and course syllabi sheds light not only on the way that opinions and judgments of professors take their first step in the commodification process, but also on the way in which the student is perceived as a commodity. The student is an evaluator of oral practice, capable of delivering evaluations of a lecture and the professorial persona. The professor is, conversely, an evaluator of the written word and a practitioner of the spoken word. These are roles that, as evidenced by the course requirements discussed above, are closely linked to the relative maturity of the student. At full maturity, then, authority is tightly bound up with oral practices. The authority that comes with this maturity is easily identifiable and important for this paper, since it demonstrates another area in which traditional oral culture is preserved. The secrecy surrounding the course evaluations, and the great lengths one needs to go to obtain one as a student, show at the very least that with publications, with appointments, and with rationalized opinion comes the authority to access secure documents. To obtain a course evaluation for the arts (Eval 1, Eval 2, Eval 3), a student comes face to face with how little authority their spoken word holds. Four separate requests for the evaluations, to three separate administrative bodies, over a period of two weeks, were all met with suspicion and eventual denial. It was only when a fully matured oral practitioner, a professor, called the administrative desk of the Dean of Arts and made an oral request that the document was handed over for use in this paper. The point is reinforced by how easily Eval 4 was obtained. One oral interaction between a professor in the STS department at York and an administrative employee at York resulted in the immediate release of the document.
To be a speaker, a lecturer, is to have the authority to obtain a document that, in one’s undergraduate years, one is only allowed to use for evaluation under close scrutiny; this is the power and authority whose characterization is so closely bound up with oral practice.


In an increasingly capitalistic environment, university prestige rests on the publications and charisma of its professors. Equally important, however, is the practical matter of class size. Without customers the market flounders, and what is important to customers is not what a professor has published, but what he or she says in lectures, in tutorials, and in private. Indeed, what professors say is depicted, in administrative eyes, as a chief concern of students. Students learn to write and to listen as undergraduates, but are slowly trained to speak and publish as they mature. Oral culture is represented at both the undergraduate level, through the ministerial machinations of the meta-visitation, and the professorial level, as the oral becomes an increasingly important part of academic life. The commodified professor, from their undergraduate origins, is ready for exchange on the wider academic market once they have fully adopted these characteristics; this is the romance of the academic world, the “moral authority” served by the stories we tell of management and control (Clark, 354). In the chaos of selling a university to a student, the oral becomes increasingly important the closer one looks. The same might be true of selling one’s university to the wider market for funding; the scope of this paper is limited, and that stands as a question for another project. It would be interesting to explore the degree to which rumour, hearsay, opinion, or charisma impact professional hiring and publishing practices. It might show, as this paper has attempted to, that Clark’s material methodology can reveal places where the traditional oral culture is preserved.
Oral culture at York University, as is true of Clark’s 18th century German states, remains preserved in the practice of rationalization and bureaucratization; academic duties, personae, authority, and management are all informed by the transformative tension that exists between the empire of the written, and the heritage of the oral.




Clark, William. Academic Charisma and the Origins of the Research University. Chicago: University of Chicago Press, 2006. Print.


Course Evaluations from York University, from the Faculty of Arts and Faculty of Science and Engineering, are available in hard copy following this section. Evaluations are indexed in the top right corner of each first page by an “Eval #” notation and are referenced in the body of this paper according to that indexing.


Course Syllabi from York University of course levels 1000-4000 in the Faculty of Science and Engineering are available in hard copy following this section. Syllabi are indexed in the top right corner of each front page by a “Syl #” notation and are referenced in the body of this paper according to that indexing.


Knowledge Production and Social Structuring

J. Robert Oppenheimer and the place of personal virtue

The phone rings. “Hello?” It is an old childhood friend who is living many miles away now. He has caught me in the middle of my research for a final paper. He inquires as to the nature of the paper, and I respond with a brief story about impersonal trends in the history of science, about the security hearings that were centered on J. Robert Oppenheimer, and about the place of personal virtue in the historiography of science. Currently a fourth year student at the University of British Columbia, my friend is calling to inform me of some recent academic choices he has made. “I’ve decided to go into forestry sciences,” he tells me. I ask after the reasons for his seemingly late-game change from biology proper to the faculty of forestry; “It just makes more sense for me,” he responds. We start to reminisce about the time we both spent at summer camp in the early 1990s, about the reforestation programs we were introduced to, and about how disconnected he felt from “those forests” sitting in biology classes. In an instant I become acutely aware of the import of his change of heart to my final paper. If I were to tell the story of my friend’s scientific career twenty years from now, of how he came to be educated in particular scientific traditions, how might the story go if I did not reflect on those summers spent in northern Ontario among “those forests” that inspired him so? Herein lies the crux of the problem: how does one reconcile this seemingly personal account of a scientist’s development with an understanding of scientific knowledge production that is “widely accounted impersonal” (Shapin, 2008: 1)? One way, as Steven Shapin suggests in The Scientific Life (2008), is to make “people and their virtues matter” when writing the history of science.

Controversy and that which matters

“What does it mean to say that people matter? I mean that we cannot understand how various scientific and technological knowledges are made, and made authoritative, without appreciating the roles of familiarity, trust, and the recognition of personal virtues.” (Shapin, 2008: 1)

Making people matter in the history of science is not just about the rhetorical concerns of a cloistered group of intellectuals. Rather, the concern bears meaning for the way we understand, mediate, and deploy authority in our society. Science enjoys an authority in the twenty-first century that few, if any, bodies of collective knowledge have before. This paper aims at examining a particular controversy in the history of science and implicating the role of identity and personal virtue in that controversy. The controversy surrounds the appointment of J. Robert Oppenheimer as the scientific director of the Manhattan Project, a case that demonstrates the role of identity and personal virtue in scientific practice. This paper also suggests that such understandings of science prompt a rethinking of the historiography of science. In particular, this paper will be interested in accomplishing two tasks. First, it will examine some of the reasons for Oppenheimer’s appointment in light of his seemingly unorthodox character, highlighting the role of identity both before and during the Manhattan Project in order to place personal virtues at the forefront of the project. Second, it will ask questions regarding the implication of familiarity, trust, and personal virtue in the reception of scientific truth. If scientific knowledge is understood to emerge exclusively from unproblematic, objective, and impersonal practices, then the choices about its funding, direction, and use have the potential to be made on equally exclusive and problematic grounds. The controversy surrounding the life and work of J. Robert Oppenheimer suggests that the personal virtues of scientists matter in the making of science. Given that the reception of impersonal knowledge can favor particular groups, the place of personal virtue becomes an integral site from which to view, and perhaps to prevent, the naturalization of unequal social structures.

Los Alamos, Scientific Direction, and the Atomic Bomb

The year is 1942 and the Second World War rages on. Japan has recently unleashed a devastating attack on the United States (US) in Hawaii and the Philippines, prompting the US to formally enter the war (Brier et al, 2000: 493). General Leslie Groves would soon serve as the military figurehead of the American bomb project. The Manhattan Engineer District, or Manhattan Project as it is usually referred to, grew out of the research dedicated to uranium fission that was going on through research institutes and official committees in the US (Bird et al, 2005: 186)[1]. Throughout 1941 J. Robert Oppenheimer attended numerous events at which he could discuss atomic weaponry and in 1942 became part of the S-1 Committee (Hunner, 2009: 75). The discussions at Berkeley, and around the country, that Oppenheimer and the talented experimental physicist John Manley organized would cement Oppenheimer as a “leader of the theoretical work conducted around the country” (Hunner, 2009: 75). However, while his theory was sound and his intellect considerable, Oppenheimer did not seem an altogether apt candidate for the position of scientific director at what would become the Los Alamos National Laboratory. Indeed, from an early age Oppenheimer showed a distaste for and certain clumsiness with experimental work; “his weakness is on the experimental side…he is not at home in the manipulations of the laboratory” (Rhodes, 1986: 123). Hans Bethe suggested a lack of experience in directing large groups of people; his close friend and colleague Ernest Lawrence thought it “astonishing” that he would be selected for an administrative and experimental position; yet another friend, I. I. Rabi, was similarly surprised, noting that “he didn’t know anything about equipment” (Bird et al, 2005: 186).

Given the lack of experimental talent, the lack of administrative experience, and Oppenheimer’s left-wing background, it is curious that Groves, among many others, would come to trust in Oppenheimer “almost without exception”; believing “that without him [the atomic bomb] wouldn’t have happened…that it couldn’t have happened” (Freeman Dyson in The Day After Trinity, 1980, emphasis added). This image of Oppenheimer, as a talented theoretician from whom charismatic authority issued, is a motif that stretches across his entire life[2]. Important for this paper are the ways that his friends, colleagues, and students viewed his luminosity, which extended well beyond the realm of the objectively scientific and into a world of “familiarity, trust, and … personal virtues” (Shapin, 2008: 1).

Of particular relevance to this project is Oppenheimer’s early life, spent in relative wealth with family and friends. Having spent many summers throughout the 1930s horseback riding through New Mexico, it should come as no surprise that the area would leave a lasting impression on Robert’s mind. In his customarily eccentric manner Robert, as described by his good friend and writer Francis Fergusson, would “just get on his horse, put a chocolate bar in his pocket and be gone for a day or two”; “he eventually explored a large part of [the New Mexico] mountains…probably knew more about them than anyone else” (Francis Fergusson in The Day After Trinity, 1980). One point to be made of Oppenheimer’s experiences in New Mexico is an obvious one; he eventually selected, from his wealth of personal experience, the site for the Manhattan Project[3]. Robert once lamented that he could not combine “the two loves of his life – physics and New Mexico”; he would now have the chance (Hunner, 2009: 83).

New Mexico, perhaps more than just illustrating the place of the personal in one of the largest scientific endeavors ever, also points to one of the reasons Groves was initially attracted to Robert as a potential scientific director. Indeed, Groves’ “concerns for security” were addressed implicitly with Oppenheimer’s suggestion that the lab should be remotely located (Bird et al, 2005: 185). Consequently the ranch and the New Mexico area served as both inspiration for, and acceptance of, Robert in the atomic project. The quiet summers spent with “all the different guests, most of them physicists [who] brought some ideas, new ideas, with them” warrant particular attention (Frank Oppenheimer in The Day After Trinity, 1980). The ranch and its guests highlight the role of familiarity and trust that were so important in Oppenheimer’s appointment and in the way he was received by Nobel laureates and graduate students alike. J. Robert Oppenheimer was educated at Harvard, the University of Cambridge, and the University of Göttingen over the course of his studies. The individuals he associated with in classes and in private would go on to become eminent physicists in their own right. Among the intellectuals he studied with in Germany were Werner Heisenberg, Wolfgang Pauli, Paul Dirac, Enrico Fermi, and Edward Teller (Bird et al, 2005: 57). This is to say nothing of their brilliant professor and chairman of the physics department, Max Born. The time Oppenheimer spent in Germany, the associations he made, the students he directed, and the friends he would keep, all served to foster an air of familiarity among many of the physicists he would eventually ask to become part of the Manhattan Project. It must be remembered that scientists were being asked to drop their current projects, pack their families up, and move across the country to an undisclosed location, for work that they could not be informed about until they had officially joined the group (Hunner, 2009: 84).
To accomplish the great amount of persuasion necessary to bring the desired scientists together would require a great deal of charisma on the part of a well-connected, scientifically respected individual; one could hardly ask for a better person than Oppenheimer. After all, Oppenheimer had studied in Germany, the Mecca of theoretical physics; he had spent time in Cambridge under the renowned experimental program practiced there; he had come from a well-respected American school – in short, he was a scientist of the world (Bird et al, 2005: 56-59). Robert’s selections were among some of the most brilliant scientific minds in the world, some holding Nobel prizes where Oppenheimer did not. It speaks volumes about Oppenheimer’s received identity that so many individuals would put so much trust in him. The war itself was motivation for some, but as Jon Hunner points out, Oppenheimer’s “passionate” methods of “persuasion” were absolutely crucial in highlighting the “powerful incentives” of wartime science (Hunner, 2009: 84-85). Hunner uses the phrase “the brotherhood of scientists” to refer to the way in which Oppenheimer positioned himself as an actor through which access to this exciting, necessary, “pioneering”, collective, and even “adventure”-filled work could be gained (Hunner, 2009: 85). In this way, Oppenheimer could be read as what Michel Callon and Bruno Latour called an “obligatory passage point” in a vast network of scientists that were spread all over the globe[4]. In the environment of World War II America, the military connotations of Callon and Latour’s theoretical framework prove crucial. Not only does actor-network theory capture the centralized actions of Oppenheimer in the process of alignment, but it refers explicitly to the competitive nature of the scientific program during this period.
As various countries, their militaries, industries, academies, and scientists vied for the rapid mobilization of allied actors, various methods of stabilization, like those promised by Oppenheimer of “idyllic settings” and well funded science, of intellectual rigor and intrigue, and of global necessity, worked on various levels to enroll individual scientists (Hunner, 2009: 84-85).

It would be a mistake, however, to think that Robert Oppenheimer’s charisma sprang up overnight. It was at a very young age that Robert began to distinguish himself academically and socially from his peers. When Robert was merely twelve he began corresponding with geologists about the rock formations in Central Park, New York. Using the family typewriter, Robert garnered such attention from members of the New York Mineralogical Club as to be invited, under the assumption that the quality of his work reflected a significantly older scholar, to give a lecture before the group (Bird et al, 2005: 14). This type of virtuosity with intellectual endeavors would persist. In high school, for example, Oppenheimer took not only “English literature, math, and physics”, but also “Greek, Latin, French, and German”; he would graduate with straight A’s (Bird et al, 2005: 23). Biographies, documentaries, and social science texts are full of accounts of the genius and awe-inspiring nature of his persona during Robert’s university years[5]. At Harvard one classmate described him as “a Goth coming into Rome…’He intellectually looted the place’” (Rhodes, 1986: 121). Oppenheimer was recognized in university much as he was in high school, prompting his friend Paul Horgan to note “Leonardos and Oppenheimers are scarce…but their wonderful love and projection of understanding…offer us at least an ideal to consider and measure by” (Bird et al, 2005: 35). Nor was the sentiment lost on those great luminaries of German physics; Max Born himself noted that “he was a man of great talent” who would often, with unconscious rudeness, interrupt students and professors alike to demonstrate more elegant solutions to their problems. Paul Dirac noted Oppenheimer’s exceptional “intellectual versatility”, remarking on his capacity to wrestle with complicated physics and literature in the same breath (Bird et al, 2005: 62). From the age of twelve through to his appointment at the Manhattan Project, J.
Robert Oppenheimer was regarded widely as an intellectual savant. His scientific work aside, Robert was a formidable linguistic, artistic, and aesthetic mind; this was an individual who learned Sanskrit in order to read the Bhagavad-Gita in the original (Bird et al, 2005: 99). His long-time friend Haakon Chevalier came to know of Oppenheimer by reputation before they had ever met; importantly, “it was not for his work in physics”. Instead, Chevalier heard of Oppenheimer’s ruthless reading of Das Kapital and his surprising consumption of the complete works of Lenin; a feat unmatched even by many Communist Party members (Bird et al, 2005: 117). There is a role played in this story of Oppenheimer’s life by what might be described as elegance[6]. While too vast a topic to cover here, the point to be made is that the growing charismatic persona of Robert Oppenheimer, in which descriptions of personal and mathematical elegance can simultaneously adhere, has important ramifications for objective and impersonal accounts of science.

Indeed, J. Robert Oppenheimer’s appointment at Los Alamos should give reason for pause. In light of the personal virtues, and the themes of familiarity and trust, outlined above, one might indeed ask exactly what it means to discuss the practice of science as drawing its authority and strength from supposed impersonality and objectivity[7]. How does one understand the appointment of Oppenheimer if scientific practice is said to be objective? Surely there were more experimentally gifted physicists at the time – Ernest Lawrence seemed a good choice on these grounds. Was there not someone with more political or administrative experience? Perhaps Edward Teller could fit this bill. But it was the quirky, thin, awkward, socially intriguing, and intellectually gifted Oppenheimer on whom Groves’ decision turned. After only one meeting Groves had found his man; he was interested in Robert’s recognition of the practical problems of the project, he knew of his theoretical prowess, but as Kai Bird notes, “more than anything else, he just liked the man” (Bird et al, 2005: 185). In the rationalized and bureaucratic environment of the modern research university it is tempting to dismiss all signs of the early modern period that poured the foundation for what seems set in stone today[8]. As William Clark reminds his readers, the “modern cult of academic personality” is not furnished by a collective or collegial charisma, but “by directing an institute or having a center through which to realize academic projects” (Clark, 2006: 19). In this way, J. Robert Oppenheimer finds himself implicated deeply within the modern cult of academic personality, his charisma flowing in and through his practices.

Who cares about all this academic claptrap?

The fundamental problem with a discourse that outlines the personal virtues, the themes of familiarity and trust, and the subjectivity of knowledge production is that the effort seems lost on those who are not interested in the historiography of science. However, the discourse can extend further than the walls of the academy to inform the power relations that govern broader social trends. The intimate links between the production of knowledge and the exertion of power have been addressed by many scholars; here it is enough to suggest, in conversation with those scholars, that “knowledge is something that makes us its subjects, because we make sense of ourselves by referring back to various bodies of knowledge” (Danaher, 2000: xiv)[9]. The governance of people, bodies, places, and ideas under this framework carries with it a particular charge for the science studies scholar. To deconstruct the objective, impersonal nature of science is to become attentive to the way knowledge is received and used. Donna Haraway has used the term “situated knowledge” to describe the ways in which the subject-object divide is inherently problematic (Haraway, 1988). For Haraway, subjects of knowledge are always situated; there can be no object free from its subject, the two are mutually constitutive. The dominant vision of science as objective and impersonal suggests that the “god trick of seeing everything from nowhere” is possible; that knowledge can emerge untainted by subjectivity on its path from nature to the world of human beings (Haraway, 1988: 581). The scientific method, the rational, and objectivity are strategies that result in “mutual and usually unequal structuring” for bodies afflicted by power-knowledge (Haraway, 1988: 595).

A case in point…

One case that demonstrates the effect of unequal structuring is that of raw-milk cheese production. Heather Paxson’s 2008 publication “Post-Pasteurian Cultures: The Microbiopolitics of Raw-Milk Cheese in the United States” discusses the implications of the “hyperhygienic dream of Pasteurians” and of anti-microbial sentiment in the biopolitics of food production and consumption (Paxson, 2008: 1). In Paxson’s example, power over the biological body has been obtained through the production of knowledge about the body as a site for invasion by microbial life. The impersonal and objective suggestion here is that nature has afforded human beings the process of Pasteurization to fight off what are strictly dangerous microbial actors (Paxson, 2008: 3). Recently, as Paxson demonstrates, resistance to this notion of bad microbes has cropped up around artisan raw-milk cheese production. These guerrilla cheese producers view microbes as allies in the development of flavorful cheeses that have the capacity to enrich both the palates and the health of their consumers. Indeed, Paxson demonstrates the way in which microbes are implicated in a dialog that is an “expression of a people’s connection to a piece of land” (Paxson, 2008: 26). The flavors that particular, and healthy, microbial actors produce are unique to raw milk cheeses, their advocates shunning the “dead”, flavorless cheeses sold in stores (Paxson, 2008: 38). More than flavor, Paxson’s “post-Pasteurians” position themselves in relation to the entire biopolitical spectrum of food production[10]. Particular knowledge about microbes has lent justification to the widespread practice of pasteurization; so much so that small business owners are subjected to the same financial requirements as massive, multinational dairy producers (Paxson, 2008: 28-35). Owners of small farms have an uphill battle to fight precisely because of the context in which pasteurization was developed (Paxson, 2008: 17). The point will not be belabored here.
It is enough to suggest that by positioning microbes within an objective discourse of scientific practice, an unequal structure has been effected, both for the legality of the body’s biological contents and for the economic demands put on small business owners. To describe science as impersonal in this account would be to fly a flag of objectivity in the face of small business owners who, to be sure, take it very personally. Instead, revealing the situated nature of knowledge about microbial life furnishes academics, politicians, lawyers, and scientists with the ability to recognize and intervene in developing structures that have adverse effects on various interest groups.

Toward a responsible history of science

It is on the unequal structuring of power relations that this paper concludes. If STS as a field is admitted to have some authority in the reception and understanding of science, a responsible position that admits the situated nature of the knower is absolutely crucial. A socially responsible scientific enterprise cannot be obtained by maintaining objective and impersonal accounts of that enterprise. Numerous scholars have addressed cases in which various groups have been adversely affected by unequal structuring, and this paper has examined one of them. Here it is sufficient to suggest that adherence to impersonal and objective accounts of knowledge production can produce unequal structures that subjugate various interest groups. As the case of J. Robert Oppenheimer’s appointment at Los Alamos suggests, personal virtue is an integral part of the scientific enterprise. Given the capacity for the ‘God trick’ to disadvantage particular groups, personal virtue cannot responsibly be divorced from those structures that condition social, scientific, and cultural understandings of reality.

[1] See Hunner, J (2009) p 73. See Rhodes, R (1986) in references section for a detailed history of the Briggs Advisory Committee and the S-1 Uranium Committee.

[2] See Clark, W (2006) for an examination of the role of “charismatic authority” in academia. Of particular relevance to this paper are the ways in which that authority is bound up in what could be described as the nepotism, personal authority, and familiarity of the early modern research university.

[3] This should not be read as glossing over the many discussions that went on about the site of the future laboratory; merely that J. Robert Oppenheimer was the one who suggested the Los Alamos Ranch School as a potential site. Indeed, it was on a surveying trip at Jemez Springs, after Leslie Groves invited Oppenheimer to make a more appropriate suggestion, that the idea of the Pajarito Plateau and the boys’ school came up. See Bird et al, 2005, pp 205-206.

[4] See Callon, M. (1986) in Law, J. (Ed.), Power, Action and Belief: A New Sociology of Knowledge? London: Routledge, pp. 196-233; and Latour, B. (1988) The Pasteurization of France. Cambridge, MA: Harvard University Press.

[5] See References section.

[6] J. Robert Oppenheimer is often described as being socially, theoretically, linguistically, and mathematically elegant. See The Day After Trinity (1980).

[7] See Daston, L., Galison, P. (2007) for a close examination of the history of objectivity. Of note for this paper are the ways in which the concept of objectivity as subject-denying, and its connection with scientific practice, are relatively new inventions in the history of science. Indeed, this is a suggestion that runs very close to the argument made here about the place of identity.

[8] See Clark, W (2006)

[9] Danaher is speaking on behalf of Michel Foucault here, who outlined the concept of “power-knowledge” (pouvoir-savoir) in numerous texts; see Foucault (1963, 1966, 1975, and 1976-1984). The concept has also found expression in the field of sociology of science; see Reardon, J. (2004) Race to the Finish: Identity and Governance in an Age of Genomics. Princeton University Press, where her use of the “co-production of knowledge and power” is a structuring theme throughout the book. The anthropology of science has also found the concept useful; Joe Dumit, in his book Picturing Personhood: Brain Scans and Biomedical Identity (2003), implicitly engages this Foucauldian theme in his sketch of the phrase “objective self-fashioning” (see pp. 160-168, for example).

[10] Paxson’s paper makes important use of the Foucauldian term “biopolitics”. See Foucault, M. (1978) The History of Sexuality, vol. 1. Hurley, R. (trans) New York: Vintage


Bird, K., Sherwin, M. (2005). American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer. New York: Alfred A. Knopf

Brier, S., Brown, J., Lichtenstein, N., Strasser, S., Rosenzweig, R. Ed. (2002). Who Built America? New York: American Social History Productions, Inc.

Clark, W., (2006) Academic Charisma and the Origins of the Research University. Chicago: University of Chicago Press

Danaher, G., Schirato, T., Webb, J. (2000) Understanding Foucault. Sage Publications

Daston, L., Galison, P., (2007) Objectivity. Cambridge: MIT Press

Haraway, D. (1988) Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspectives. Feminist Studies 14/3: 575-99

Hunner, J. (2009) J. Robert Oppenheimer, The Cold War, and the Atomic West. University of Oklahoma Press

Paxson, H. (2008) “Post-Pasteurian Cultures: The Microbiopolitics of Raw-Milk Cheese in the United States” Cultural Anthropology, Vol. 23, Issue 1, pp. 15–47

Rhodes, R. (1986) The Making of the Atomic Bomb. New York: Simon & Schuster

Shapin, S. (2008) The Scientific Life: A Moral History of a Late Modern Vocation. Chicago: University of Chicago Press

The Day After Trinity. Dir. Jon Else. Pyramid Films (1981) DVD.

Much of the analytic discourse found within Science and Technology Studies (STS) literature can be seen as a reaction to perceived flaws within older sociologies of science. STS visionaries, like David Bloor and scholars at the University of Edinburgh, have developed entire systems of analysis in hopes of departing from the scientific positivism that had previously held sway over sociology. The Strong Program, of David Bloor’s design, is one such system[1]. Those scholars who have in turn reacted to the Strong Program have generated systems that seek to eliminate a new host of perceived problems in the sociological study of science[2]. One of the most important developments that helped shape these, and many other, social theories of science is the turn away from asymmetrical analyses. An asymmetrical explanation is one that attributes different types of explanations to true and false knowledge claims[3]. For contemporary STS work it is clear that failed knowledge claims can, and should, hold just as much curiosity for the analyst as validated ones. It is with this important development in mind, then, that this paper takes its direction; technological failures are to be granted the same consideration as technological successes. This paper will examine a large-scale technological failure. Using the sociological resources of various authors, an effort will be made to demonstrate the normal, and even expected, nature of failures within complex technological systems. The analysis to be presented will attempt to engage the system, rather than the actors involved, in social inquiry, and will do away with some of the more commonplace assumptions associated with system accidents. By normalizing technological system accidents, and by localizing those systems within a social context, alternative explanations for failures, blame, and system reliability can be more readily accessed.


The Day the Internet Stood Still


While January 25th of 2003 fails to stand out as an important day for many, it was host to one of the largest technological accidents the internet has ever seen. Internet monitoring stations, like Akamai’s Network Operations Control Center, would be among the first to witness the immense power of what would come to be called the “SQL Slammer” computer worm. Starting at 12:30 am EST, large display screens jumped to life as a storm of information raged across the internet. One of the first traceable hosts to display signs of infection was a server running Microsoft’s SQL software; the computer was sending millions of copies of infectious code to randomly generated destinations (computers) all over the internet. Just three minutes after the first signs of infection the Slammer worm had already achieved a doubling time of 8.5 seconds. The immediate reaction of many high-level Internet Service Providers (ISPs) and network stations was to send out distress calls to the world. It is a testament to the efficiency of the Slammer worm that, just 15 minutes after the initial infection, many of the same ISPs and network stations calling for help would have their connections to the outside world cut off. Net Access Corporation, a massive Northeastern ISP, reported in a panic that “Nearly half our ports are in delta alarm right now” (as cited in Boutin, 2003, p147), indicating that the traffic was so great that half of their available ports were being clogged up in last-ditch rerouting efforts. Routers that attempted to send the massive volume of packets to their generated destinations were finding fewer and fewer routes to do so. In Portugal, three hundred thousand cable modems became non-responsive. South Korea’s internet access was wiped out almost entirely. On paper these end-user failures, in South Korea for example, do no justice to the reality of the communications loss being experienced.
Within 15 minutes, 27 million people were left without cell phone or internet access, having to rely solely on archaic communications systems that were ill-equipped to handle the load. Everything from corporate email systems to Continental Airlines’ ticket processing networks went dead; by the time anyone had realized the problem it was already too late, no business was getting done and flights were being cancelled. In a 15 minute window the Slammer worm had caused damage that would cost more than $1 billion to mitigate over several weeks, with the only warnings buzzing in after the fact. Indeed, the only redundancies that had any effect were the physical limitations of the networks being attacked, and the fact that the worm was released over a weekend; Slammer had done all it could do in one swift blow[4].
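The scale of these numbers follows directly from the worm’s doubling time. A back-of-the-envelope sketch in Python illustrates why a 15-minute response window was hopeless; the figure of roughly 75,000 vulnerable SQL servers and the simple exponential form are illustrative assumptions here, since real spread saturates more gradually as the worm re-scans already-infected hosts:

```python
# Back-of-the-envelope model of a scanning worm's early spread.
# With a constant doubling time td, the infected population grows as
# n(t) = n0 * 2^(t / td), until it exhausts the pool of vulnerable
# hosts (assumed here, for illustration, to be about 75,000 servers).

def infected_hosts(t_seconds, n0=1, doubling_time=8.5, vulnerable_pool=75_000):
    """Infected-host count t seconds after 'patient 0', capped at the pool."""
    return min(n0 * 2 ** (t_seconds / doubling_time), vulnerable_pool)

# An 8.5-second doubling time overruns the entire assumed pool in
# well under three minutes, long before any human could react.
for t in (0, 60, 180, 900):
    print(t, round(infected_hosts(t)))
```

On this toy model the vulnerable pool is fully overtaken before the three-minute mark, which is consistent with the account above: by the time the first distress calls went out, the worm had already done most of its work.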


To err is human; to analyze is divine…

In such an unprecedented event, with no one to blame, the costs of the accident came to be absorbed by independent parties and individuals. Many media sources reported the negligence of Microsoft in their handling of the already documented exploit in the SQL software. Similarly, the media found an easy target in the counter-culture of hackers and computer experts[5]. However, to look for blame in this accident becomes an exercise in futility. Would one look to the flawed design of the SQL software that was exploited? Perhaps the network designers and engineers who failed to apply the proper redundancies to the networks under attack? Or should the blame be set upon the shoulders of the unknown individual responsible for the attack? No; rather, it is the system and its social contingencies that must be examined if one is to gain any comprehensive understanding of the Slammer accident. Individual local factors and actors are merely the parts that make up system accidents, not the causes of them.


The desire to attribute blame in large system accidents is to some degree natural, but as the number of accusable parties grows, interpretive flexibility takes hold and uncertainty sets in[6]. Charles Perrow, in his book “Normal Accidents”, makes a compelling argument for an alternative method of attributing blame and for conceptualizing system accidents in general. For Perrow, “None of the above” (Perrow, 1999, p7) would be the best answer to the question of blame in system accidents. Perrow notes that in systems of great complexity, like the internet, it is not a single failure that breeds catastrophe; on the contrary, “the interaction of …multiple failures …explains the accident” (Perrow, 1999, p7). Perrow is interested in the normal aspects of system accidents and seeks to conceptualize them in such a way that operator error, one of the most frequently cited causes of system failure, takes a backseat to the very nature of complex systems. To understand the approach Perrow uses and how it can be made relevant to the Slammer worm, a brief discussion of some key terms will be useful.


Accidents, in complex systems, need to be understood as being contingent upon multiple failures. Multiple failures are themselves contingent upon the level to which parts of the system are tightly coupled. Tight coupling describes a dependence of one part of a system upon another (Perrow, 1999, p8). In the case of Slammer, tight coupling could be seen in the degree to which Internet traffic depends upon the paths a router chooses/has available for packets. Tight coupling is often difficult to identify before an accident has occurred as there is a tendency to ignore, or expect, it when the system is “working”. It would be a mistake, however, to dismiss the relevance of coupling in an analysis of system accidents.


The tight coupling of parts alone does not hold the key to system accidents; equally important is the interactive complexity that becomes apparent during analysis. Interactivity describes the tendency for failures, or otherwise loosely coupled parts, to behave or work together in largely unexpected ways (Perrow, 1999, p7-8)[7]. The way in which Microsoft's single failure to account for the exploit in its software interacted with the basic function of routers, packet routing, to produce a normal accident is an example of interactive complexity at work during the Slammer accident. Tight coupling and interactivity make up the foundation for why system accidents take place; when parts of a system rely upon one another to function, and systems are so complex that unexpected interactions can take place, normal accidents will occur. That an accident can be normal does not suggest that systems are built for that function; rather it suggests that, given the tightly coupled design, “an inherent property of the system [is to] occasionally experience [complex] interaction” (Perrow, 1999, p8).


Perrow also makes sure to address the issue of incomprehensibility, as it relates directly to his goal of forgoing operator error as a cause of system failure; what is not understood cannot be expected to be accurately acted upon. As has been mentioned, the media largely attacked the individual failures of hackers, corporations, and to some extent, users. The level of incomprehensibility, together with the relatively unique temporal qualities of computer and communication networks, helps one to understand the question of blame more accurately. Incomprehensibility suggests that “for some critical period of time … interactions are not only unexpected, [they] are incomprehensible” (Perrow, 1999, p9). In the case of Slammer, while the life of the worm was very short, the interactions and coupling that became apparent just 15 minutes after “patient 0” was infected were not understood until the damage was done. For Slammer, this paper will defend not the actions of operators alone, but will challenge all prosaic arguments for blame that fail to take into account the sheer complexity of large technological systems. Incomprehensibility, then, speaks also to the degree to which engineers, like the software engineers at Microsoft, are unable to predict the interactivity in which their products will engage. These ideas will be addressed further at the end of this paper, where their implications can be fully drawn out.


Do artifacts have politics?

In her book “Inventing the Internet” (2000), Janet Abbate explores the history of the internet as a social construction. Abbate's socio-historical discourse on the internet's foundation provides a canvas upon which a systems approach to the “Slammer” computer worm can be painted. Her first chapter, “White Heat and Cold War: The Origins and Meanings of Packet Switching”, is especially telling of the Cold War ideologies with which the fledgling communication network of the internet engaged. These ideologies need to be explored as they relate directly to the discussion of a normalized Slammer accident, as well as to the implications that the accident has for victims and indeed for the internet itself.


The unease that World War II had stirred up, along with the fear that the Cold War was breeding, put technological advancement at the top of the list for national security. For Abbate, the Cold War fear that spread across the US, fostered by suspicions of a science gap between the US and the USSR, called for a communications system that would facilitate war efforts and provide security from nuclear attack. In the early years of the internet, development of a packet switching system was already underway; the United Kingdom and the United States, however, had competing ideas of how packet switching should be defined[8]. Abbate notes that the adoption of one definition at the global level was not dependent on “purely technical criteria, but always because it fit into a broader socio-technical understanding of how data networks could and should be used” (Abbate, 2000, p8). This idea is paramount for a discussion of the Slammer worm, as it points not only to the way in which technologies are socially constructed but, by the same logic, identifies the dynamic nature of technology and technological use. The RAND Corporation had a large hand in developing the version of packet switching that eventually did make it to the global stage[9]. This corporate influence reinforces Abbate's assertion that “the [internet] evolved through an unusual (and sometimes uneasy) alliance between military and civilian interests” (Abbate, 2000, p2). RAND was especially interested in what Abbate calls “survivable communication”, or communication that could withstand and operate through attacks (Abbate, 2000, p10). Paul Baran came to work for RAND during the development of survivable communications systems and focused on system reliability. Reliability is implicit in the need for a survivable system, and Abbate helps to identify this focus when she quotes Baran, who notes that communication was “extremely vulnerable and not able to survive attack. That was the issue.” (as cited in Abbate, 2000, p10, emphasis added). Baran eventually presented the fruit of his research in what he called a “distributed communications” system. The project envisioned a communications system in which users were not connected en masse to local switching offices; there would, on the contrary, be many switching nodes on the network. This constituted a redundancy for the communications system: the ability to confine damage to mere parts and units. If one switching office were disabled by an attack, it would no longer constitute a large system failure; routing could be completed by any number of the remaining nodes on the network. System complexity, however, increased under this paradigm, and the interactive complexity seen in system accidents, like the SQL “Slammer” worm, became a defining feature of the computer networks we have today (Abbate, p12). It is not important to go into great technical detail about the nature, or foundations, of packet switching; the goal is to identify packet switching as a response to a social factor, namely a need for reliable communication systems in the face of a perceived Soviet threat. The ability of this network to use complexity and size as dynamic redundancies must be understood from the frame of reference of that network's creators, like Baran. This was not complexity in any pejorative sense; it was a complexity that assured reliability and national safety, and one that is deeply bound up in the context of its time.


Equally important to the standard of reliability and complexity is the historical presence of unforeseen use in communications technologies. The internet, and especially the computer viruses that make use of it, demonstrate a good deal of emergent use. Users of computers and the internet find themselves free to engage with the technology as they see fit, effectively changing the machine from a static artifact to a social system with which humans can engage and be engaged. Abbate notes that the computer itself was “originally conceived as an isolated calculating device, [through the advent of the internet, the computer] was reborn as a means of communication” (Abbate, p1). From the very outset, then, the internet has been a force that interacts with its physical components in original and unpredicted ways. In the case of Slammer, the internet has worked to redefine otherwise connective technologies as weapons through which an attack on the internet could be leveled. This is a normal aspect of Slammer's effect; emergent use has always been a crucial part of communication technology[10].


Getting inside Slammer and outside the Black Box

To understand how Slammer was able to initiate the accident, some technical discussion will first be necessary[11]. Slammer, as has been mentioned, is a computer worm: a program, or piece of code, that is built to replicate itself across nodes connected to a network. This aspect of Slammer, its ability to replicate, merely defines its type; the true power of Slammer lies in its use of the network it was attacking, and in the non-malicious nature of the attack. Slammer is a surprisingly simple virus that was able to make use of a single software exploit in order to launch an attack on the entire internet. The first aspect of Slammer that deserves attention is its use of a UDP packet for the attack, rather than a TCP packet. UDP is an internet protocol which can operate much faster than the traditional TCP protocol. The TCP packet, which carries a majority of the internet's applications and functions, contains a “header” and a “body” which tell the recipient where the packet is coming from, where it is headed, and what information is contained. In order to make a connection, the client sending information to a server must first send a “SYN”, or synchronization, request. Once the server has received the SYN, it returns a “SYN-ACK”, or synchronization-acknowledgement, to the client. Finally, the client sends an “ACK”, or acknowledgement, notification and the connection is established. The UDP protocol, by contrast, is referred to as “connectionless” because it does not require the SYN-ACK connection process; the UDP packet is a one-way piece of instructive information. In the case of Slammer, the UDP packet contained a very simple instruction set that it was able to send off to computers running the Microsoft SQL software and then forget about. Traditionally, a UDP packet will contain a “source” field in its header; this source address was fraudulent in the case of the Slammer worm, ensuring anonymity for the initial, and subsequent, attacks. The initial packet sent to the Microsoft server would have looked like nothing more than another database query request.
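The contrast between the two protocols can be made concrete with a short sketch using Python's standard socket module. This is only a local illustration of the handshake versus “fire and forget” distinction described above; the port choice and echo behaviour are arbitrary, and nothing here resembles the worm itself.

```python
import socket
import threading

ready = threading.Event()
server_port = None  # filled in once the OS assigns a free port

# TCP requires the SYN / SYN-ACK / ACK handshake before any data moves;
# the client's connect() call does not return until that exchange completes.
def tcp_echo_server():
    global server_port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 0))            # let the OS pick a free port
        server_port = srv.getsockname()[1]
        srv.listen(1)
        ready.set()                           # server is now accepting
        conn, _ = srv.accept()                # handshake completes here
        with conn:
            conn.sendall(conn.recv(1024))     # echo the request back

threading.Thread(target=tcp_echo_server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", server_port))   # SYN -> SYN-ACK -> ACK
    cli.sendall(b"query")
    reply = cli.recv(1024)                    # the server's echoed response

# A UDP datagram, by contrast, is connectionless: sendto() returns at once,
# with no handshake and no guarantee that anyone is even listening.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    sent = udp.sendto(b"query", ("127.0.0.1", server_port))
```

The asymmetry is the point: the TCP client must wait for two round trips before sending a byte, while the UDP sender is limited only by how fast it can push datagrams onto the wire, which is precisely the property Slammer exploited.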


The actual attack is staggeringly simple. First, the packet tells the server that it has a request for an online database, which follows its header information. The Slammer packet contains a string of “01” digits that is, collectively, too large for the computer to handle. Usually a “00” value would be an indicator to the machine that the whole of the desired name has been transmitted; Slammer omits this last “00” in order to create a buffer overflow[12]. Microsoft designed the software so that the desired database name can be no longer than 16 bytes, its end designated by a “00”. As was mentioned, the “00” value is left out, allowing the “01” values to run on past the 16-byte limit. The allotted space for a database request soon fills up, and the extra information falls into the computer's “stack”[13]. The computer's stack is overwritten with Slammer's own instructions, which are disguised as the UDP request; herein lies the software vulnerability, the ability to use a simple UDP request process to feed instructions to any computer running the SQL software. These instructions are, again, simplistic. The computer is told to pick a random target by running the computer's system clock through an algorithm and generating an IP address. This target is put in the header of a new UDP packet, which is filled with Slammer's own code. The SQL software is told to send the new packet off to the destination, again disguised as a harmless database query request, and the process starts again.
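The parsing flaw itself can be re-enacted in miniature. The toy function below is purely illustrative: Python cannot actually overflow memory, so the adjacent stack is simulated by a second byte string, and the names, sizes, and payload are hypothetical stand-ins for the real parser's behaviour.

```python
BUF_SIZE = 16  # the intended maximum length of a database name

def parse_db_name(packet: bytes):
    """Copy bytes until a 0x00 terminator, as the vulnerable parser did.

    A well-formed request terminates within BUF_SIZE bytes. A packet that
    omits the 0x00 keeps the copy loop running past the buffer's end; in
    the real exploit the overflowing bytes landed on the stack, where they
    were ultimately executed as instructions.
    """
    buffer = bytearray()
    overflow = bytearray()        # stands in for the adjacent stack memory
    for byte in packet:
        if byte == 0x00:          # proper terminator: stop copying
            break
        if len(buffer) < BUF_SIZE:
            buffer.append(byte)
        else:
            overflow.append(byte)  # bytes spilling past the buffer
    return bytes(buffer), bytes(overflow)

benign = b"\x01" * 10 + b"\x00"             # terminated inside the buffer
malicious = b"\x01" * 20 + b"INSTRUCTIONS"  # no 0x00: runs past 16 bytes

_, spill_benign = parse_db_name(benign)      # nothing overflows
_, spill = parse_db_name(malicious)          # payload lands in the "stack"
```

Run on the benign packet, the spill is empty; run on the unterminated one, four excess “01” bytes and the trailing payload end up outside the buffer, which is the whole mechanism of the exploit in caricature.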


What must be understood about this process, technically challenging as it is, is the speed with which it takes place. With no SYN-ACK to wait for, and with the infection process limited only by the available bandwidth of the machine sending the packets, Slammer is able to replicate itself from 4,000 to 26,000 times per second. There is no need to wait for responses from any of the thousands of machines being scanned for the appropriate SQL port, as Slammer relies on the UDP protocol. Finally, it should be noted that Slammer contains no malicious payload. Slammer simply creates so much network congestion with its UDP requests that computers are unable to get responses from the routers, servers, and machines all trying to process the vast amount of information being sent around[14].
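A back-of-envelope model shows why such scan rates translate into minutes-scale saturation. The only figure below taken from the text is the low-end scan rate of 4,000 packets per second; the pool of vulnerable hosts and the simple random-scanning epidemic model are illustrative assumptions, not measurements.

```python
# Toy SI epidemic model of random-scanning worm spread.
ADDRESS_SPACE = 2 ** 32   # IPv4 addresses a random scan can land on
VULNERABLE = 75_000       # assumed pool of unpatched SQL servers
SCAN_RATE = 4_000         # packets/second per infected host (low end)

def infected_after(seconds, infected=1.0, step=0.1):
    """Euler-step the model: each scan hits a vulnerable, still-uninfected
    host with probability (VULNERABLE - infected) / ADDRESS_SPACE."""
    t = 0.0
    while t < seconds and infected < VULNERABLE:
        p_hit = (VULNERABLE - infected) / ADDRESS_SPACE
        infected += infected * SCAN_RATE * p_hit * step
        t += step
    return infected

early = infected_after(10)     # roughly doubles in about ten seconds
later = infected_after(600)    # essentially the whole pool in minutes
```

Even at the low-end rate, the model's infected population doubles on the order of every ten seconds and exhausts the assumed vulnerable pool within a few minutes, which is the sense in which no human operator could have intervened in time.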


The Normal Accident

With an outline of the normal approach to system accidents, an understanding of the historical ideologies present in the system, and a technical understanding of the accident, it is now possible to move past some of the more bromidic interpretations of failure, toward more sociologically satisfying conclusions.


As was mentioned, tight coupling and interactive complexity are to be seen as necessary for a normal accident to take place. Throughout the technical story told about Slammer, tight coupling is a prominent feature. The internet, as a physical network, is extremely reliant on coupling, and the authors of Slammer were all too aware of this. The design of the worm demonstrates an understanding of the nature of cause and effect, the very basis of coupling, in communications technology. The Slammer worm capitalizes on the fact that hardware, one half of the technological system, is tightly coupled with code, foreign or otherwise. Code is interpreted and acted upon by a machine according to strict principles; when the SQL server receives a UDP packet, the actions taken and the code present are tightly coupled. From the immediate reaction of the machine, to look for a query in the UDP packet, to the instructed actions of Slammer's design, the entire process of connectionless, one-way code transfer is an extremely coupled system. This point, for malicious computer software, is extremely important; being a tightly coupled system, the internet and computer technology become predictable and ordered. This predictability helps one to understand another tightly coupled aspect of the “Slammer system”: the degree to which individual nodes on the internet are tightly coupled with one another. Slammer exploits not only the SQL software, but the tightly coupled nature of network design. The historical prescriptions of survivability placed on the internet during the Cold War, which once served as a security measure, have come to represent a weakness in the system. Cause and effect is now a global factor. Computers in North America, thanks to the blinding speed of contemporary communication, have become tightly coupled with computers all over the globe; this is where the system has become complex, and where accidents are to be expected.


The amount of coupling present in the modern internet is almost unfathomable, and it breeds such complexity that interactivity is sure to follow. Indeed, many processes in the Slammer accident can be seen as interactively complex due to their unexpected nature. Microsoft did not intentionally construct its SQL software to interact with the entire bandwidth of the internet, and yet this is exactly what happened. A single UDP database request, an otherwise local process of the SQL subsystem, begins to interact with the bandwidth of home computer users, email servers, and airline systems. This unexpected interactivity was incomprehensible not only to system designers, like Microsoft, and major ISPs, but to home users who had no idea they were themselves interacting in unforeseen ways. Home routers and users aided in the interactive process by performing in expected ways instead of closing down ports or disabling the Microsoft SQL Server Desktop Engine (MSDE) client service; incomprehensibility implicated them in the normal accident.


The coupling that is necessary for a system like the internet to run, and the incomprehensible nature of the interactions that are possible, should give the reader perspective on blame in such normal accidents. To blame Microsoft, for instance, is to attack its inability to predict all possible interactions within a system it did not design. In a work by Donald MacKenzie entitled “A Worm in the Bud? Computers, Systems, and the Safety-Case Problem”, specific reference is made to the difficulty of designing flawless software. MacKenzie notes that even engineers predominantly involved with software design describe catastrophic consequences as a possible outcome of their work. One engineer at a conference on “Software Engineering” pointed out the “seemingly unavoidable fallibility of large software” (as cited in Hughes & Hughes, 2000, p166). While the individual is speaking directly about software for transportation industries, the exploration of tight coupling and interactive complexity should by now alert the reader to the rather obvious implications.


Another popular media target for blame is the hacker, or individual, responsible for creating Slammer. However, to make such claims is to ignore the vulnerabilities already present in the system. System reliability once rested upon the ability of individual nodes to act as switching offices; now, the ability of nodes to route information based on packets can be used to destroy reliability and disable communication. The system has a flaw so well ingrained it may never be undone: the system has our trust. Human trust in technology is a powerful force and exhibits a sort of “reverse dynamic nominalism”[15]. For the internet, then, to regard it as safe and reliable is to make it unsafe and unreliable through our own complacency. In MacKenzie's words, “a computer system that errs frequently (and is therefore distrusted) is, under some circumstances, less dangerous than one that almost never errs” (Hughes & Hughes, 2000, p183).


Can we re-scale it? Do we have the prescriptions?

The normalized aspects of the Slammer accident help to redefine exactly what is meant by blame; if systems are far too complex and incomprehensible to be understood, surely no single party could be held accountable for system failure. Similarly, a socio-historical account of the creation of the internet helps to redefine reliability as being contingent upon a particular frame of reference. Surely, if the root of the problem is complexity, then the logical solution should be simplicity, but is this realistic?


In a 1983 paper entitled “Give Me a Laboratory and I will Raise the World”, Bruno Latour examined what qualities the laboratory has that allow it to grant science the autonomy it currently enjoys. Latour states confidently of his method that:


It is not only the key to a sociological understanding of science that is to be found in lab studies, it is also, I believe, the key to a sociological understanding of society itself, since it is in laboratories that most new sources of power are generated. (as cited in K. Knorr and M. Mulkay, eds., 1983, p159-160)


If Latour is right, and indeed many parties disagree on whether he is, then perhaps it is from the “laboratories of technology” that answers about complex systems might be gleaned. Latour attributes the power of laboratories to two major capacities found therein: the ability to control or eliminate variables, and the ability to control scale[16]. To call attention to the controlled environments that exist within laboratories is to call attention to what Latour calls “prescriptions”: those local variables that exist within the laboratory, which must be moved into society first if the scientific product is to follow successfully (K. Knorr and M. Mulkay, eds., 1983, p151-153). Crucial for Latour as well is the capacity of the laboratory to control scale. Speaking of Louis Pasteur's work on anthrax vaccines, Latour notes that “A few people much weaker than epidemics can become stronger if they change the scale of the two actors – making the microbes big, and the epizootic small” (K. Knorr and M. Mulkay, eds., 1983, p163). Pasteur successfully shrank an epidemic to fit within a Petri dish and made his microbes visually accessible to onlookers; for Latour, this was his success.


An important paradox presents itself with these local laboratory capacities in mind. With a radically less vibrant and altogether smaller control environment, is it even possible to reconcile these differences in the mess of the real world? It is in this discrepancy, between the controlled environment and the real world, that this paper would suggest analysts look for answers to these questions. Perhaps Latour was providing some foreshadowing in the still unfinished book of social inquiry when he stated proudly that “We are only just starting to take up the challenge that laboratory practices present for the study of society” (K. Knorr and M. Mulkay, eds., 1983, p169).


Keep it Simple, Stupid!


With respect to the discussion above, the system of internet-mediated communication is vastly complex. The Slammer accident, as a failure of that system, should be considered normal given the incomprehensible nature of the interactions that took place, and of the system itself. The burning question that still remains is ‘What can we do?’ It must be understood that the complexity and coupling present in large-scale technological systems are there by our design and undoubtedly reflect some compromise between system safety and reliability on the one hand, and practicality on the other. If indeed accidents like the Slammer worm are to be seen as truly normal, then it seems there is little that can be done. Technology, much like science, is not a perfect discipline; perhaps it is from this vantage point that decisions about technological implementation can be made. Like any application of human ingenuity, technology suffers from but one inadequacy: its designers. Similarly, it is an equally human shortcoming that leads to hastened accusations of blame. If the question is indeed ‘What can we do?’, perhaps the best answer is this: we can learn from our misplaced accusations and from our devotion to scientific positivism, and strive instead to mitigate damage and costs without further increasing complexity. Were it not for the misgivings of alternative explanations of technological failures, a paper that points out the seemingly obvious problem in complex systems, their complexity, might not have any merit. It is clear that questions about blame and system reliability are more thoroughly answered by understanding the system failure as a social construction and by illustrating its normal aspects.

[1] David Bloor, it should be noted, has collaborated with many colleagues in order to fully formalize his Strong Program. For further understanding of the challenges the Strong Program poses to positivist scientific literature, the reader could consider both Knowledge and Social Imagery (Routledge, 1976), by David Bloor, and Science in Society (Palgrave Macmillan, 2005), by Matthew David. Positivism can be understood, for the STS scholar, as a problematic understanding of knowledge building. For reading on positivist assertions and assumptions, Auguste Comte, Ian Hacking, and Karl Popper can provide a sociologically relevant and well-rounded account of positivist thought.

[2] An obvious example would be the Actor-Network Theory (ANT), proposed by Bruno Latour. This system is, however, in no way representative of all the reactionary approaches to SSK/STS. For further reading on ANT the reader is encouraged to examine Bruno Latour’s Reassembling the Social: An Introduction to Actor-Network-Theory (Oxford University Press, 2005).

[3] See David, M. Science in Society, 2005, Palgrave Macmillan, pp. 61-64 for descriptions of symmetry and types of explanations within various analytic camps.

[4] Figures and statistics in this account come from the popular media outlet, Wired Magazine. It is appropriate that a media source was used as it provides a more common explanation of the accident. Readers should see Boutin, P. (2003). SLAMMED! An inside view of the worm that crashed the Internet in 15 minutes. Wired Magazine. 11(07), 146-149 for the full article, and for more illustrations of popular conceptions of accidents.

[5] For examples of media representations of this accident, and the subsequent blame put on individual actors and failures, any major media outlet could be examined. A specific reference to the blame put on the hacker community can be found at (accessed Thursday, March 13, 2008). Similarly, technical documents focusing on the negligence of Microsoft's engineers are plentiful; one online document at (accessed Thursday, March 13, 2008) gives the reader some idea as to the degree to which individual failures are blamed, but more importantly how seductive an explanation they are.

[6] For definitions and understandings of interpretive flexibility, as conceived of by H.M. Collins, the reader could address the following section of Matthew David’s work: David, M. Science in Society, 2005, Palgrave Macmillan, pp. 61-64.

[7] For an explanation of the differences between parts, units, subsystems and systems, the reader is encouraged to review the brief section in Perrow, C. (1999). Normal Accidents: Living with High-Risk Technologies. New Jersey: Princeton University Press, p65, as they are subtle, but important distinctions.

[8] This paper will not deal with the competing issues, as they are largely irrelevant. It is enough for this piece to point out that the American definition of the packet switching paradigm won out; it is the focus of Abbate's piece, and the focus of this paper. It should be noted that packet switching will simply refer to the paradigm within computer networking in which smaller, discrete pieces of information are transmitted with individual transfer prescriptions and contents.

[9] Ideologies similar to Baran’s decentralized systems were present before, and beyond, his work; the case is merely of sociological importance for illustrating the social contingency with which technological systems are constructed.

[10] For further reading on emergent use, the collective body of work known as Ludology, or Video Game Studies, is especially instructive of the ways in which use is dynamic, not static.

[11] In the interest of space, and of simplicity, the following technical discussion will not contain specific references for information that has been paraphrased. Instead, information will be presented in paraphrased format, with a footnote at the end of the discussion indicating the source material; see Footnote 14. All information comes from university websites and published source material.

[12] A buffer overflow occurs when a process attempts to store information that exceeds the parameters set by the buffer. Overflows can result in many anomalous behaviors and are common in programming. Further understanding of buffer overflows can be found at (accessed March, 12, 2008)

[13] A stack is essentially a list of things to do that are not being worked on at the present time; it operates on a last-in-first-out basis, and is common across all software/computer operation. For more in-depth reading on the nature of stacks the reader is encouraged to see Knuth, D. (1997). The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition. Addison-Wesley. Section 2.2.1: Stacks, Queues, and Deques, pp. 238–243, has the specific information.

[14] (Please see footnote 11.) Information regarding the nature of the actual Slammer attack is available in Boutin, P. (2003). SLAMMED! An inside view of the worm that crashed the Internet in 15 minutes. Wired Magazine. 11(07), 146-149. Information regarding the structure of UDP protocols and the subsequent speed of Slammer's infection, as well as technical workups of the worm's impact, is available through Moore, D., Paxson, V., Savage, S., Shannon, C., Staniford, S., Weaver, N. (2003). Inside the Slammer Worm. Retrieved March, 13, 2008, from More technical discussion on the nature of TCP and UDP protocols can be found at Kristoff, J. The Transmission Control Protocol. Retrieved March, 13, 2008, from and (Retrieved March 13, 2008) respectively.



[15] As discussed in Jones-Imhotep (2008). This doctrine, a play on Ian Hacking's “dynamic nominalism” (see Historical Ontology by Ian Hacking), is meant to imply, with regard to technology, that by regarding something as safe or reliable we make it quite the opposite through our complacency. A full reading of MacKenzie's piece, previously mentioned, and of Ian Hacking's Historical Ontology should give the reader an understanding of the term “reverse dynamic nominalism” as it is applied here, if the lecture is not available.

[16] Without a thorough ethno-methodological inquiry into the places where technologies are being designed it is difficult to determine the potency of the allegory; it is sufficient here, however, to pose some questions based on Latour’s depiction of the laboratory.




Agatha C. Hughes & Thomas P. Hughes, eds., (2000) Systems, Experts, and Computers: The Systems Approach in Management and Engineering, World War II and After. MIT press, pp. 161-190


Boutin, P. (2003). SLAMMED! An inside view of the worm that crashed the Internet in 15 minutes. Wired Magazine. 11(07), 146-149.


Latour, B. (1983). Give me a Laboratory and I will Raise the World. In K. Knorr & M. Mulkay (Eds.), Science Observed: Perspectives on the Social Study of Science. (141-169). London: SAGE Publications.


Moore, D., Paxson, V., Savage, S., Shannon, C., Staniford, S., Weaver, N. (2003). Inside the Slammer Worm. Retrieved March, 13, 2008, from


Perrow, C. (1999). Normal Accidents: Living with High-Risk Technologies. New Jersey: Princeton University Press.


Among the bewildering array of theories, the longstanding debates over the definitions and implications of man-made artifacts, and the degree to which our lives make use of such artifacts, one thing remains abundantly clear: technology has become a powerful but ambiguous entity. To discuss technology, or technological failure, as this paper seeks to do, is essentially to play a game of semantics. It is to discuss equally the meaning of the terms we seek to employ, as well as the implications of their meanings. It is for the sake of clarity that these two interrelated pieces of the technological puzzle must be addressed. Without an understanding of the terms we use to describe a process, we are left with nothing but arbitrary conjecture about a subject we have failed to approach in its entirety. It will be the purpose of this paper, then, to move from common definitions of technology and of failure to a more complete framework with which to determine what it is we mean when we say that a technology has failed. It is important to distinguish the type of technological failure to be discussed; it will not be the goal of this paper to discuss technologies that fail in use, or technologies that are seen to cause accidents. This paper will deal with the type of technological failure that is seen to cause one manifestation of a technological artifact to ‘win out’ over another in common use. Technological failure of this sort, once more inclusive definitions of ‘technology’ and ‘failure’ are employed, is seen to be not the success or failure of one technology over another, but a necessary and inevitable stage in the development of technological artifacts.


To ask the question, ‘What is technology?’ is to open the door of a seemingly simple concept only to find, upon walking through the door, a good deal confusion. To effectively clear the air, it seems logical to look at what technology is not. A common and long held belief is that technology is simply the act of applying science to a physical creation. Seen in this light, the term becomes overly exclusive, it would seem to imply that for a technology to exist the creator of the technology must be a practitioner of science (Lecture, Oct. 11, 2007). Harry Collins and Trevor Pinch are quick to point out that the “…examples of Cumbrian sheepfarmers and AIDS patients…” serve to illustrate the “…vital contributions to technical decisions…” (Collins & Pinch, 1998, p.5) laypersons can make[1]. It is clear then to see why such a limited definition of technology is not worth considering. Another misguided conception of technology is that its definition is limited to the physical artifact that comes to represent it. This is a very deeply engrained idea in modern society, and it is a mistake that is sometimes referred to as “distance lends enchantment” (Collins & Pinch, 1998, p.2). The idea is that the more detached from a technology one is, the simpler the technology seems. It is no wonder then, with the overwhelming amount of technological artifacts that present themselves to our daily experience that people often seem to view technology in such a simple and limited light. It is important to raise this point as it leads us to examine the polar opposite of this enchanted view, that is, a close observation of technology in an effort to bring to light the complex interrelations technology has with the world around it. A beautiful exploration of this concept can be seen in the work of Leo Marx, who espouses a view of technology quite similar to that of Heidegger; that “the essence of technology is by no means anything technological,” (Marx, 1997). 
This simple statement serves to illustrate exactly what we should be implying when we talk about technology: that technology is not simply the physical extension of a system of utility; indeed, it is the system itself. Marx suggests that the creation of the term ‘technology’ was a result of the inadequacies of alternative terminology to describe the “…power—of progress—that far exceeded, in degree, scope, and scale, the relatively limited capacity of the merely useful (or mechanic or practical or industrial) arts to generate social change” (Marx, 1997). The emphasis for Marx is on the vastness of the ‘sociotechnical’ system, in which the physical component “…constitutes a relatively small part of the whole” (Marx, 1997). However, Marx does note that out of this definition came the hazardous attribution of agency to technology, a problem that manifests itself in other aspects of technology; aspects which will be dealt with in later portions of this paper. These are important refinements to include in a working definition of technology, as they will be the basis from which our concept of ‘technological failure’ shall be revised.


Having a working definition of technology, it is now important to examine the use of the term ‘failure’. One of the more common definitions of failure suggests that something, or someone, is unable to function well or meet needs. This is described by Kenneth Lipartito in his article “Picturephone and the Information Age” as most pronounced in “…traditional Whig histories” (Lipartito, 2003, p. 53). Lipartito points out that “Failures may result not from poor design or malfunction but from the resistance of social groups…” (Lipartito, 2003, p. 53), yet the Whig ideal holds strong in common discourse. The problem with the Whig conception of failure is that it comes to support the same hazardous concept of agency, mentioned earlier, when used to describe the supposed ‘actions’ of technology. This mode of thinking tends to ignore the vast web of socially relevant factors we have seen to be involved in technology and disregards the very foundations upon which technology rests. In the prologue to his book “Born Losers: A History of Failure in America”, Scott Sandage (2005) illustrates the shifting nature of the term failure. What was once a term used to imply an institutional failing became the failing of the individual. According to Sandage (2005), social forces, most notably the individual freedom afforded to people throughout the 20th century that was absent in the 19th century, led not only to the freedom of the individual to succeed, but also to fail. This put the responsibility for success on the intentioned actor’s shoulders. This responsibility implied that if one did not succeed it was due to one’s own inadequacies (Lecture, Sep. 27, 2007)[2]. There is a severe flaw in this line of thought when discussing technology. If we suggest a technology can fail, we are also suggesting that a technology is an actor with intention. This is simply not the case; technologies do not engage in the act of failure, they are subjected to our conception of it. 
It is from this conception of failure that one technology is seen to be ‘better’ than another, and it is a conception that we must do away with if we are to understand what ‘technological failure’ is. More striking yet is the inability of this notion of failure to account for social factors that play a part in the relative ‘successes’ of actors. Sandage (2005) relates the case of Chauncey Moore, a person who, despite the overwhelming financial success afforded to him by his individual merits, was ruined by social factors beyond his control; most notably the impact of war on his patrons. This serves to illustrate the seemingly disconnected social factors that can indeed have an influence on the perceived success of actors in a network[3]. A final point of importance is the fundamental nature of technological and scientific development; as Collins and Pinch point out, “Both science and technology are creatures of our art and our craft, and both are as perfectible or imperfectible as our skill allows them to be.” (Collins & Pinch, 1998, p.3). It is unreasonable to conceive of the production of technology without taking into account the process of trial and error[4]; a process that is facilitated and hampered by our inabilities, not those of the technologies being created. After all, the implementation of technology is not done in the controlled environment of a lab; it is in the real world that these tests come to determine the fate of a technology. The inability of technologies to act, the social influence put on them, and the chaotic environment in which technologies are put to the test (the real world) should all serve to illustrate failure as the result of a process that is not only unavoidable but, when seen through the methodological lens of science and engineering, necessary.


With the meaning of ‘technology’ and ‘failure’ firmly laid out, the question of what technological failure constitutes is easily answered. Technological failure becomes the end result of a process of trial and error, in an environment that encourages dramatic and spontaneous change, with sociotechnical systems as the focal point. As Kenneth Lipartito observes, “…success or failure is more a matter of social values and expectations than performance or function.” (Lipartito, 2003, p. 54). The fundamental problem in conceiving of technological failure in the more common manner is the ever-present human urge to define things as we tend to see them and not as they actually are. This point is nicely addressed in Jane Jacobs’ book “The Death and Life of Great American Cities”, in which she criticizes urban planning theories[5]. Jacobs stresses the need for learning to happen from the observation of the real world. Her point about city planning is that “Cities are an immense laboratory of trial and error, failure and success…” and as such should be the sites for “…learning and forming and testing its theories.” (Jacobs, 1961, p.6). Much in the same way that city planners have “…ignored the study of success and failure in real life…” and “…have been incurious about the reasons for unexpected success…” (Jacobs, 1961, p.6), so too has the general public ignored the study of success and failure of technology in real life, and simply taken for granted the success of one technology as the outright failure of another. One must keep in mind what Kenneth Lipartito continually points to in his article “Picturephone and the Information Age”: that “…the same combination of institutions and values that shapes presumably successful technologies can operate on failed ones as well.” (Lipartito, 2003, p. 54). 
It is this view of technological failure that must be stressed, or that must be pushed to hold sway over public consciousness, in an effort to broaden the public knowledge of technology and indeed of the natural world.


To conceive of technological failure in this way is, as was stated, simply a game of semantics. It is not a trivial pursuit, however; it is a game intended to rework meaning, to better understand the real nature of technology and indeed of human social interaction. The ability to draw technology out into the light, and to illustrate the shared nature of what we have come to call failure between social actors, human and non-human alike, serves to help us depart from radical misconceptions in the realm of science and technology. Technology is not a determining factor of society, nor is it strictly determined by society; it is part of a vast web of interactions that is, quite simply put, always present. It is the recognition of this web that will help us reorganize the public image of science and technology and allow those actors to do what they must, without social constraint[6]. With a thorough understanding of the terms involved, technological failure is nothing more than a link in a chain that leads from nature to our manipulation of nature, a link that structures and completes the chain as much as any other does.

[1] It is important to note that Collins & Pinch (1998) do not suggest that being a layperson indicates a lack of expertise; rather, in most cases laypersons are experts in their given fields of work, and it is simply accreditation that is lacking.

[2] Whether or not this is where the application of this concept of failure to technology originates, it is the case that the public can be seen to favor the notion that technologies are responsible for their own failure. (See a Google search or a newspaper headline.)

[3] It is of crucial importance to note that the author does not intend to imply a direct relationship between the biological actors Sandage (2005) discusses and the vast sociotechnical networks (also actors) to which they are compared. The author does, however, intend to suggest that the definition of failure that has emerged from the processes Sandage (2005) discusses can likewise be applied to all actors in any network, for the sake of a symmetrical comparison.

[4] This concept is one of the fundamental building blocks of Jane Jacobs’ argument for the analysis of real-world events in her book “The Death and Life of Great American Cities”, which is adopted as a means to an end, in knowledge building, later in this paper.

[5] While the focus is on city planning, the relation of such a social endeavor to the technological realm should not be a large leap given our revised definition of technology; indeed the move should be seamless to the reader at this point.

[6] This is not to imply that the conceptual practices outlined will necessarily eliminate, or seek to eliminate, the social presence in the networks of actors described. It is meant to imply that the social factors which are a part of science and technology can be understood by the public in a more complete way. This understanding can then be used to promote acceptance of those factors as part of the process, or their removal if they endanger the ability of that process to do what we need it to do.




  • Collins, H., & Pinch, T. (1998). The Golem at Large: What You Should Know about Technology. Cambridge: Cambridge University Press.
  • Jacobs, J. (1961). The Death and Life of Great American Cities. New York: Random House.
  • Sandage, S. A. (2005). “Lives of Quiet Desperation.” Prologue. Born Losers: A History of Failure in America. Cambridge: Harvard University Press, pp. 1-21.
  • Marx, L. (1997). “Technology: The Emergence of a Hazardous Concept.” Social Research, 64, pp. 965-988.
  • Lipartito, K. (2003). “Picturephone and the Information Age.” Technology and Culture, 44, pp. 50-81.
  • Gignac, R. (2007). In-class lecture notes, STS 3600, Edward Jones-Imhotep.


Playing with history

May 5, 2010

The history of science is a particularly intriguing discipline through which numerous, often contentious, retellings of the same stories get brought to life. This short essay will present a case from the history of science that embodies the difficulties inherent in reconstructing events that are temporally, culturally and socially distinct from contemporary perspectives. As a case study, Albert Einstein, and subsequently special relativity theory (SRT), are particularly entangled in a web of varying interpretations within the history of science. This might be the case precisely because of the impact of both the man and the theories that are so intimately connected to his name. However, the purpose of this paper is not to explore the reasons for the importance of SRT as much as it is to question the grounds on which scholars might choose to situate SRT and Albert Einstein in a scientific history of continuity or revolution. Important to questions of revolution and continuity in the case of SRT are, ironically enough, questions of relativity. If SRT represents a revolution or continuity in the history of science, it must be in reference to some other period which is more or less distinct from it, respectively. This paper, then, will explore the metaphysical, epistemological, and ethical nature of so-called ‘classical physics’ as it had manifested in the late 19th century, as well as that of ‘relativity physics’, or the physics of Einstein, during a large portion of its lifespan. Through these descriptions it will be seen that raising questions of revolution or continuity necessarily embodies contradictions in what it means to write history from either perspective. The question of revolution or continuity, in the case of SRT, is incomplete without a frame of reference. From whose perspective is it revolutionary? And in what way is it revolutionary from that perspective? These are the questions to which this paper turns its focus. 
Historical storytelling is, just as historical actors themselves are, situated within a diverse range of perspectives, each suggestive and deceptive in important ways.


Classically contradictory understandings of nature


The science of the 19th century gave birth to an entirely new description of physics. Once an umbrella term that encompassed the great breadth of natural science generally, physics was becoming “the study of mechanics, electricity, and optics, employing a mathematical and experimental methodology” (Harman, 1982, p1). Particularly important for the purposes of this paper is the emphasis that was coming to be placed on the mechanical and the experimental. The physical world picture in the 19th century began to bring under its mechanical purview the energies of light, heat, electricity, and magnetism. Particular theories were developed and represented in a distinctly mechanical way; “matter in motion was the basis of all physical phenomena” (Harman, 1982, p2). Joseph Fourier’s rigorous mathematical treatment of heat added support for the mechanical program; heat was now to be understood as “the motions of the particles of bodies” (Harman, 1982, p3). So too was optics caught up in the mechanical world view with A. J. Fresnel’s wave theory of light. Conceiving of light as a wave necessitated a medium through which that wave could propagate, and conceptions of a mechanical “ether” were the answer; indeed, “by the 1830s the wave theory of light was generally accepted” (Harman, 1982, p3). But the reach of the mechanical world view was not confined to particular theories and their development. The law of the conservation of energy seemed to have a unifying effect on energy-based understandings of mechanics, heat, light, electricity, and magnetism. In developing the law of conservation of energy, Hermann von Helmholtz illustrated the “unifying role of the energy concept as an expression of the mechanical view of nature” (Harman, 1982, p4).


These mechanical views should be understood, however, as contrasting with both Newtonian and electromagnetic views, and the term ‘classical physics’ should not be understood as a single, cohesive program. Where Newton had provided the impetus for the unification of physical theories, with his gravitational theory connecting earthly and cosmic phenomena, his theory also contradicted a number of mechanical understandings. If gravity was a mechanical force that acted on bodies, exactly what did it act through? Newton’s explanation of ‘action at a distance’ seemed to be inconsistent with magnetic phenomena that demonstrated a field through which action took place. Where Newton saw action at a distance, late 19th-century physicists saw action through a field.


Similarly, the understanding of 19th-century mechanical physicists can be separated from electromagnetic world views. While wave theories of light have been seen to be situated within mechanical world views, the equations Maxwell developed to handle electromagnetic phenomena bear a striking resemblance to Newton’s gravitational inverse-square law. While Maxwell himself proposed a mechanical ether, there was also an attraction to the possibility of electromagnetic unification. This attraction is best captured in the work of H. A. Lorentz, who envisioned the electromagnetic field as primary in his theories surrounding electrons. This was the subtle difference in the electromagnetic world view: where once perturbations in the mechanical ether were responsible for electromagnetic phenomena, Lorentz conceived of the “mechanical properties of matter…as being grounded on the properties of the electromagnetic ether” (Harman, 1982, p151). By 1900 this electromagnetic world view would gain ground, and was capable of a unified explanation while “[avoiding] the difficulties faced by mechanical theories of the elastic solid ether” (Harman, 1982, p152).


The metaphysical understanding of the world, then, was twofold. On one hand there was the mechanical world view: inherited from Newtonian mechanics, capable of explaining those ambiguous entities lost on Newton, and presenting the promise of further unification through a growing body of physical theory. There was, however, as has been seen, also an attraction to the possibility of electromagnetic unification. Two inverse representations of nature, both in their form and their relation to one another, and a commitment to some ether through which the propagation of light waves was possible, paint a picture of the metaphysical landscape of 19th-century physics.


How, though, can the classical view of the doing of science be characterized? This is where the importance of the experimental component P. M. Harman described can be seen to have influence. Indeed, classical physics took as its epistemological baseline an empiricism that echoed the praise positivist thinkers had for the observational foundations of real knowledge. Experimentation was the order of the day. Building mechanical models of the ether that showed accordance with observable phenomena was one of Maxwell’s great preferences. Construction of mechanical models at this time “[was] the criterion of the intelligibility of [a] phenomenon” (Harman, 1982, p149). Similarly, the Michelson-Morley experiment to detect ether drift was seen as the answer to vacuist/plenist debates over the ether. Observational accounts, then, had the capacity to be the inspiration for theories, as in magnetic field theories; the validation of theories, as is exemplified by model building; and the refutation of theories, as in the Michelson-Morley experiment. However, this empiricism should be written alongside an account of the perception of those theories. It is important to understand that while 19th-century physicists could be described as metaphysical realists, in so far as their theories appealed to a real world, their epistemology was couched in contrasting views over the realism of theories. Maxwell himself “had drawn attention to the dangers of confusing representation and reality”, implying an epistemological anti-realism (Harman, 1982, p149). The epistemological outlook, then, was about building knowledge from phenomena to which our senses had access; this implied realism to some, but a skeptical anti-realism to others. The objects of study for physicists were only those things to which our senses had access; metaphysics was, in the positivist tradition, not an appropriate part of scientific study.


Special Relativity, three historical Albert Einsteins


History is often written in a very matter-of-fact way. That is to say that actors, practices, and beliefs get retrospectively squashed into particular categories. For a discussion of Einstein’s own scientific commitments, it will be important to understand exactly how those commitments developed and changed over time. Important for this paper is that there is no one Einstein to examine; only a particular person at a particular time. The implications for questions of revolution and continuity change significantly when discussing different time periods in Einstein’s life and different understandings of those time periods. Gerald Holton, using personal communications between Einstein and his friends and colleagues, has explored the ways in which Einstein’s changing metaphysical and epistemological views stretch from his professional infancy in physics through to the culmination of his career. Important for this discussion, then, as a personal account of his beliefs, will be the way in which the reified, atemporal Einstein hardly exists at all. To understand Einstein’s changing epistemological position is to understand the process that culminates in a world view that found a middle ground between the mechanical and the electromagnetic.


The influence of Ernst Mach’s particular brand of positivism is vast in the young Einstein. Holton writes, “from the outset, the instrumentalist, and hence sensationist, views of measurement and of the concepts of space and time are strikingly evident” (Holton, 1988, p242). Holton is pointing to the way in which Einstein’s focus on “events” of simultaneity “overlaps almost entirely Mach’s basic ‘elements’” (Holton, 1988, p242). This is the positivist seen in Einstein, with his focus on real observable events, measurements, and the Machist view that “nothing is real except the perceptions” (Holton, 1988, p245). Einstein’s 1905 paper on relativity suggests, by its exploration of the categories of space and time, that “the fundamental problems of physics cannot be understood until an epistemological analysis is carried out” (Holton, 1988, p242). This view, given the correspondence with Mach, was Einstein’s own; “[he] thought so himself, and confessed as much in letters to Ernst Mach” (Holton, 1988, p244). We might firmly associate Einstein with the empiricist and positivist tradition at this point; for Einstein, before, during, and shortly after the publication of his relativity paper, the real world is precisely that which is accessible to the senses. Also of note is his epistemological position, which seems to consist in part of exploring the capacity to know about the world under investigation; a kind of epistemological skepticism, much in the tradition of Mach.


A significant divergence from Mach comes in a 1918 letter to Einstein’s good friend Michele Besso, in which Einstein chastises Besso for misunderstanding the primacy of speculation over empiricism by calling attention to the “facts” upon which theory is built (Holton, 1988, p247). Holton’s point is that in this letter Einstein’s conception of fact, the constancy of light velocity for instance, is a radical departure from the Machian conception of fact. For Mach, considering these concepts as facts would be “evidence of ‘dogmatism’” and not empiricism (Holton, 1988, p247). Already a significant change in Einstein’s epistemology has occurred: where once he was aligned with positivist thinkers in his conception of, and self-avowed connection to, a metaphysical anti-realism, Einstein now begins to adopt a more familiar role, elevating the lower Machian notion of conceptual objects to the level of fact. This is a kind of relaxed empirical position in which imaginative leaps are beginning to have primacy in Einstein’s epistemological hierarchy.


Long after Mach had denounced relativity theory, and once positivist thinkers had taken up relativity theory as a flagship example of the triumph of experience in scientific knowledge, Einstein appears to have made a final significant change in his position. In writing to the famous positivist Moritz Schlick in 1930, Einstein informs him that his “whole orientation [is] so to speak too positivistic…Physics is the attempt at the conceptual construction of a model of the real world”, referring to himself at the end of the letter as “the ‘metaphysicist’ Einstein” (Einstein in Holton, 1988, p261). This is a strikingly different Einstein from the one originally encountered, whose metaphysics is now most aptly described as a kind of rational realism. This is the Einstein that is found in particular brands of histories; the Einstein whose markers for good theoretical work are first inner perfection and then external validation. Indeed, Einstein himself once responded that if experimental confirmation had not arrived regarding the equivalence of inertial and gravitational mass, “I would have been sorry for the dear Lord-the theory is correct” (Einstein in Holton, 1988, p255). Where once Einstein’s epistemological position held observation above all and led to a metaphysical anti-realism, there now remains the “metaphysicist” Einstein, who favors the “intuitive leap, one that is only guided by experience” in constructing his metaphysical and epistemological realism (Holton, 1988, p263, emphasis added).



Answering a question


Does the special theory of relativity constitute a revolution against, or continuity with, nineteenth century classical physics? The question seems lacking, for the answer necessarily depends on the frame of reference from which it is answered.


Take, for example, Einstein’s own frame of reference before the formalization of SRT. He is by his own admission a Machian; his position is subsequently that of a metaphysical anti-realist, or at the very least skeptical of any real world. Similarly he is, in the Machian tradition, skeptical of those conceptual creations that, by way of dogmatism, come to avoid criticism and reworking; Einstein the epistemological anti-realist. In terms of the classical and Einsteinian metaphysical positions, one might be compelled to come down on the side of revolution. Certainly the classical physicists were metaphysical realists in so far as their empiricism was held to gain access to the real world. However, in terms of the classical and Einsteinian epistemological positions, one is confronted with two contradictory cases. On one hand, continuity can be seen in the anti-realism espoused by both Einstein and contemporaries like Lorentz, who warned about the reality of models and of Mach’s detested conceptual creations. On the other, one might read revolution into the story in so far as Einstein’s anti-realism at this point, as an analogue of Mach’s, departs vastly from epistemological realists of the same period.


What, then, to make of the metaphysically and epistemologically mature Einstein? If one were to consider the metaphysical changes, certainly revolution might be inferred from the departure from mechanical and electromagnetic world views. Again, the case is not so simple; one might wonder about the continuity to be found in the metaphysical realism that both Einstein and the classical physicists engaged in.


Perhaps one might inquire as to the level of revolution or continuity found in the ethical considerations of scientists. If the question were asked from the mature Einstein’s perspective, the answer might come back firmly: continuity. Indeed, Einstein himself described SRT as a “natural development of a line which can be pursued through centuries” (Einstein in Holton, 1988). The same question asked from Ernst Mach’s frame of reference would no doubt yield a different answer. For Mach the goal of the scientist was “economic and descriptive”, but for Einstein it was “speculative-constructive and intuitive”, each taking fundamentally different objects as its object of study. For Mach, the elements of observable reality were the focus, but for Einstein it was nothing short of the true and real nature of the universe. Revolution would be the only answer one might expect from Mach, as he certainly would not locate the place of science in the analysis of, and deduction from, potentially misleading concepts.


In any case, it might be prudent to take a step back from philosophically grounded categories and ask instead after the overall form of scientific development. Did previous understandings of the physical world influence the work Einstein presented in 1905? Yes; the development of SRT has been seen to be entangled in debates between mechanical and electromagnetic world views and it is precisely the asymmetry of Maxwell’s equations that prompts Einstein to develop SRT. Did Einstein’s 1905 paper come to influence subsequent theories? Again, yes. Certainly general relativity, at the very least, can be seen to have developed out of concerns that SRT raised. Is this not the very definition of continuity? Does the succession of theories that develop and grow by making the journey out from under the shadow of previous theories and into the light of day not meet the mark of continuity?


Where, then, is the value of a question about revolution or continuity in the history of science? The quick answer might be that of the nihilist: such questions are of no use and any discourse is bound to end in contradiction. Surely if further investigation into third-party receptions of Einstein’s work were taken into account, the contradictions would multiply, necessitating a much larger paper to fully explore them. So what is it that can be gained from a view in which the multiplication of perspectives seems only to yield problematic answers? Perhaps the answers that come from differing perspectives might serve to inform and enrich one another. One might find new and nuanced histories, or perhaps give life to otherwise dead or potentially non-existent historical actors; this would surely be preferable to not telling those stories at all. The question, then, answers itself by remaining open-ended. In some ways there is revolution to be found, and in some ways continuity. It is only by paying attention to the subtleties of varying histories that one can arrive at any complete understanding of an object of study; at the very least we may have commented upon a revolution in analysis.




Harman, P. M., (1982). Energy, Force and Matter: The Conceptual Development of Nineteenth-Century Physics. Cambridge: Cambridge University Press


Holton, G. (1973; rev. ed., 1988) Thematic Origins of Scientific Thought: Kepler to Einstein. Cambridge: Harvard University Press


*STS/3775 Lecture notes provide unofficial conceptual foundations for this paper. While no direct quotations are used, it would be dishonest to describe my own thoughts as a revolution rather than a continuity that begins with the class instruction and discussion in 3775.



Saccharomyces cerevisiae, a solid drinking buddy on St. Patty’s day.

Being St. Patrick’s Day, I thought it appropriate to reflect some on one of my favorite beverages. The history of beer is a long and complicated one that can’t be done justice here. However, one interesting interaction beer and the ocean have, other than many yeasts populating oceanic sites of fungal growth, is that between Captain James Cook and the biotechnology he brought with him to New Zealand. Cook visited the islands a number of times throughout his career, but it is his second visit in 1773 for which he is best remembered for his biotechnological contributions to New Zealand. It was “on Thursday 1 April, 1773 on the south-east shore of Dusky Sound, near his camping point in Pickersgill Harbour”, that “the first New Zealand beer was brewed” (Kennedy). Regardless of what pub-goers the world over will tell you on St. Patrick’s Day, beer isn’t just good for getting inebriated, nor was it for Cook. Cook, like so many maritime explorers of his day, had to fight a constant battle against the constantly fluctuating morale of his crew due to unfavorable living conditions on ships. One important factor was disease, something, even in its simpler manifestations, not so easily overcome in the 18th century. Scurvy was one such affliction. At the time, the best preventative practice for avoiding scurvy was to bring loads of fresh fruit and vegetables with you on a voyage. However, given that fruits rot and voyages could be particularly long, the Admiralty was interested in other solutions, as “it was normal to lose a quarter or more of the crew to scurvy” (Kennedy). Max Kennedy, an author writing in Australasian Biotechnology, described Cook and his crew’s role as that of “guinea pigs in a grand experiment against scurvy” (Kennedy). Their efforts included more than just beer, and their biotechnologies ranged from yeast fermentation to rendered animal fats for lamp oils. 
In any event, Cook’s efforts were so effective that on his second voyage “he did not loose a single crew member to scurvy” (Kennedy).

Cook and his crew found themselves at a time when attitudes toward the ocean and sea-going were undergoing rapid change. Kennedy makes some rather large claims about the colonization of New Zealand and biotechnology’s role in it, but the grand history he narrates is not really of interest to me. What is of interest is the emergent culture that circumscribed Cook’s interaction with New Zealand, the ocean, and the micro-organic assistants he had with him. In particular, Cook’s journey presents a great case for examining the interplay between what Stefan Helmreich called the “double discourse on the sea, as alternately amiable strange and enemy other” (Helmreich, 9).

Helen Rozwadowski has worked hard to illustrate the ways in which the “cultural and social experience of the sea” is intimately connected with the “intellectual content of science” (Rozwadowski, 409). Rozwadowski’s project is particularly relevant to Helmreich’s work as she understands marine and scientific worlds to look on the ocean with that same “double vision”. Where Helmreich is interested in questions of “how sentiment and science about the sea inflect one another” in “contemporary biology”, Rozwadowski is interested in how that sentiment manifested at a time when the sea was new to science and oceanography was a messy, muddy ordeal.

Naturalists, and indeed the lay public, shared the popular speculation of the period that “life at sea was uncomfortable for polite, upright people” (Rozwadowski, 412). Technological improvements like “navigational advances”, “healthier provisioning”, and “steam-powered vessels” were only beginning to add a sense of safety to the practice (Rozwadowski, 413). Prominent writers of the time began to imbue the sea-going life with a kind of heroism that raised the image of the naturalist to the height of mid-19th-century ideals of “bravery and manly sport” (Rozwadowski, 414). Taking on the potentially deadly elements of life at sea, especially “for the sake of science”, was slowly becoming a desirable project of the gentleman scientist (Rozwadowski, 414).

But naturalists also entered a foreign cultural and social space when they walked on board ships. The existing hierarchy of the naval world positioned naturalists in an awkward place, and sailors often greeted the oceanic newcomers with a measure of contempt. Science on board meant not only more people but more work for the whole ship. Collection, dredging in particular, was an important part of oceanographic work, but it often fell to the common sailor, leaving the more sophisticated analysis to the newly arrived, but hierarchically superior, naturalists on board. This tension between sailors and naturalists was a particular boundary that needed overcoming, and constant contact and dialog between naval and scientific worlds would eventually foster a cultural middle ground in which “relations between the two groups became noticeably more restrained and polite” (Rozwadowski, 418). Arguments, social out-casting, elitism, and all manner of conflict erupted during the development of a working relationship between naval and scientific men, and that conflict stands as an important part of the wider appreciation of what it meant to be a naturalist and to do scientific work on board ships.

There was no ready-made scientific life on board ships in the 18th and 19th centuries, and a great amount of work had to be done to produce scientific knowledge in that setting. In particular, a number of land-based traditions made their way onto ships during these contestations: co-operative work, musical instruments, shared leisure activity, and comfortable surroundings all reconfigured the material and cultural environment of ships in the interest of domesticating sea-life (Rozwadowski, 421-25). Out of the mess and seeming disunity of early ship life came the somewhat stable entity now known as science-at-sea. The ways of talking and acting on board ships are an important part of what it was, and is, like to have a life in oceanographic work. This is the import of Rozwadowski’s story: to remind us that even science-at-sea is not something that pops up some time in the 18th century, ready-made and easily performed. Rather, rendering life at sea hospitable, engaging that double vision of Helmreich’s, was a turbulent practice for naturalists and naval actors alike.

I think Rozwadowski’s article is a particularly fun read and, while she is not an anthropologist, she certainly seems to tackle the same questions about how “sentiment and science about the sea” are intimately entangled. In any event, it is nice to have a historical account to work alongside Helmreich’s more contemporary research.

Helmreich, Stefan (2008) Alien Ocean: Anthropological Voyages in Microbial Seas (Berkeley: University of California Press).

Kennedy, M. J. (1996) Biotechnology Brought to New Zealand by Captain James Cook aboard the Endeavor, Resolution, Adventure and Discovery. Australasian Biotechnology. vol. 6, no. 3, pp. 156-160

Rozwadowski, H.M. (1996). Small World: Forging a Scientific Maritime Culture for Oceanography. Isis. vol. 87, no. 3, pp. 409-429

I love to cook. More than that, I love to eat. While I was reading Cooper’s text this week, I happened to have the food channel on and caught an episode of “The F Word”. This show is essentially 1 part cooking show to 2 parts excuse to curse on TV. Hosted by the notoriously boisterous Gordon Ramsay, the show sometimes has some relatively interesting culinary expeditions. This post won’t be a terribly long or involved one, but I think the video really gets at a back-and-forth I have been wrestling with ever since I read Cooper’s first chapter and Ali’s post.

In this episode, the goal is to get a restaurant full of vegetarians to enjoy a meat-based meal enough to willingly pay for it. One of the ingredients is caviar. Ramsay speaks about the problems of overfishing of beluga, their status as endangered, and the need for a replacement. Immediately I was taken with the application Cooper’s work had in the realm of fish eggs. Scarcity is the name of the game in caviar and is one of the major reasons that the price is so unbelievably high. Ramsay is on a mission to find an alternative source of caviar that is equally appealing on the capitalist market. He visits a fish farm in Spain where sturgeon are farmed en masse for their eggs, which are supposed to be just as tasty as even the scarcest caviar. One of the things I was taken with in Cooper’s book was the idea of the devaluing of life:

“As long as life science production is subject to the imperatives of capitalist accumulation, the promise of a surplus of life will be predicated on a corresponding move to devaluate life. The two sides of the capitalist delirium – the drive to push beyond limits and the need to reimpose them, in the form of scarcity – must be understood as mutually constitutive.” (Cooper, 49)

The move to devalue for the purpose of generating a surplus, and the limits imposed by scarcity, are well drawn out in debates over the status of endangered animals used in the wide-scale production of some of the most expensive foods on the planet. I was struck, in a relatively offsetting way, by a number of things in this video that relate to the devaluation of life. The opposition between near-maternal nurturing and impersonal surgical egg extraction, if the flavor wasn’t enough, is disturbing. As Ramsay lifts the sturgeon up he is instructed to be gentle with her; he describes her as beautiful; he rubs her stomach, feeling for life – these all feel like relatively nurturing gestures. While ultrasound is a rendering technology useful in many applications, the popular connection with human childbirth is hard to ignore. The guide Ramsay has at this sturgeon farm in Spain uses an ultrasound machine to determine the progress of the female’s eggs; the images suggest the particular fish they are looking at is not quite ready. Eventually a female is selected, killed by injection, and sent off for surgery. Ramsay and the surgeon proceed to gut the fish and rend her eggs from her stomach with their bare hands, snacking on clumps of egg as they go.

It isn’t that I’m squeamish, or even that I disagree with harming animals for food; it’s simply the casual move from gentle ultrasounding to ripping unborn life from an animal’s stomach that is striking to me. One of the questions I kept asking myself, especially after reading Ali’s post this week, is to what degree we are acting ethically when we capitalize on surplus life. Where do we draw the line between devaluing and valuing life? The argument could be made that this is a wholly ethical act predicated on the valuing of life…particularly that of the beluga and other marine life that are becoming endangered. However, it could also be reasonably argued that, along some universal axis of morality, life in either case is devalued under the capitalist impulse to valorize the dollar. I don’t have an overarching point for this week’s rendering; I just wanted to share the video and express this back and forth I’m caught in when reading Cooper. In a lot of cases I think we could find a way that the capitalizing of life is caught between these two poles, between the ethical and unethical, between valuing and devaluing.

P.S. I can’t stand caviar…

So as I was sitting patiently awaiting the inebriation that comes with my birthday weekend (woo me), a friend directed me toward a site called Code Organ that turns part of a web page’s code into a sound. From their “About” section:



I thought it would be fun to give our thoughts some aural expression. It’s a shame you can only share it on Facebook/Twitter… I had to manually record the sound and embed it in a video for YouTube. Check it out some time.

These are Little Tools of Knowledge’s first words:

After making this post my site has somewhat less to say:

In appreciation of some of the tremendously interesting work William Turkle and Edward Jones-Imhotep have done, and in honor of Turkle’s talk at the salon Natasha is an organizer of (which, thanks to the susceptibility of my body to viruses, I have missed), here is a rendering I thought we all might have fun with. Specifically inspired by Brian’s discussion about inter-species communication, here is Botanicalls:

A little expensive, but not entirely non-DIY. The Arduino microcontroller, with some soft and hard mods, can produce the desired effect, and much, much more…do some searching.

Sam Dehaeck, Christophe Wylock, and Pierre Colinet

Phys. Fluids 21, 091108 (2009)

For my rendering(s) this week, I wanted to present something that I found immensely interesting in Peter Galison and Lorraine Daston’s book “Objectivity” that was not present in their 1992 article (as might be expected after 15 years).  In the article, we are given something of an archaeology of “objectivity”; a way of approaching the concept that highlights its particular historical contexts and meanings:

The period of “truth to nature”, localized in those rendering practices of 16th, 17th and 18th century atlas making, is characterized as having gotten at “what really is”. Atlas makers, scientists, and artists were dedicated to a kind of Platonic realism that they felt was captured in the typified renderings drawn by hand. Partly a dedication to a particular ontological understanding of the objective (that ideal types of objects really are out there), and partly a measure of pragmatics (the need to select and use renderings for a community of naturalists), objectivity lacked the mechanical and aperspectival components that are so bound up with the way the term is now casually deployed. During the 19th century the devious and ever-present “subject” began to threaten the very foundation of nature’s truth. By virtue of our individual perspectives and fallible senses, objectivity began to take on new meaning; to be objective was to get away from the influence of the subject. Photography, x-rays, lithographs, photoengravings, and camera obscura drawings were all deployed as “mechanical or procedural safeguards” against “subjective temptations”.

M. S. Sakar, C. Lee, and P. E. Arratia

Phys. Fluids 21, 091107 (2009)

Important for Daston and Galison are the ways in which objectivity constituted a philosophical and moral category. Mechanical objectivity could be read as a reaction to the kinds of anxieties Descartes presented about the senses, and simultaneously as a moral imperative imposed on the self. The scientific self was being crafted as much as the images of science. To be a scientist during the “truth to nature” period required a substantially different moral disposition than during that of “mechanical objectivity”, the former demanding the sage-scientist distill a reasoned image of universal forms from the variation nature presents, the latter requiring the worker-scientist to reproduce mechanically the particulars of nature.

Lammert Heijnen, Pedro Antonio Quinto-Su, Xue Zhao, and Claus Dieter Ohl

Phys. Fluids 21, 091102 (2009)

There is another mode of scientific rendering that gets addressed in the book version of this article, one that Galison and Daston call “trained judgement”. The 20th century practice of scientific image making, according to Galison and Daston, is no longer solely about the particulars of nature being mechanically represented free from the influence of the subject, but rather concerns the interpretation of nature’s patterns. The examples given of trained judgement concern images of a galaxy to be used in “schooling the reader’s judgement” (Objectivity, 366). Trained judgement is a type of image making concerned at once with the threat of variability (the sheer amount of variation in nature might make mechanical image making impractical), and with pedagogical concerns (that renderings ought to train the novice to recognize families or patterns in nature). The concluding point the authors make about the nature of objectivity is that, as a category, objectivity is neither stable nor singular in its conception or reception; the scientific persona, the particular image, the ontological commitments, and the rendering practices all get bound up in continually shifting anxieties about possible mis-interpretation or mis-representation of nature.

Sunghwan Jung, Pedro M. Reis, Jillian James, Christophe Clanet, and John W. M. Bush

Phys. Fluids 21, 091110 (2009)

The images I have placed throughout this post relate to a forward-looking portion of Objectivity. In a chapter titled “Representation to Presentation”, Galison and Daston investigate late 20th century imaging practices. To make their case they examine a “successor to the atlas…that still aims to organize scientific images systematically for many kinds of uses, but in which images are, to a certain degree, interactive, not fixed.” (Objectivity, 383). One example is the Visible Human Project, which allows users, through a collection of digital archives, to “zoom, excise, rotate, or fly through the images” (Objectivity, 383). (My first post about the “cell scale” flash animation might figure in this type of manipulative project.) The second type of modern imaging explored is distinctly “nanomanipulative”, “the original of this genre” being a 1990 IBM rendering of the company logo on an atomic scale, produced using a scanning probe microscope “both to image and to manipulate individual xenon atoms” (Objectivity, 399). These images position themselves as presentations more than representations in that they are no longer representations of what is, but rather a presentation of “coming-into-existence” (Objectivity, 383).

Henri Lhuissier and Emmanuel Villermaux

Phys. Fluids 21, 091111 (2009)

Presentation, in the Galisonian and Dastonian sense, relates at once to the coming-into-existence aspect of this type of image making, but also to the way in which the objects “really are being presented like wares in a shop window.” (Objectivity, 383). The effects of the World Wars on science are well noted and often discussed; it is in this period that we see the birth of truly big science, of scientists as structured workers who straddle pure and applied scientific saddles atop great industrial horses. This fusion of worlds is depicted, by Galison and Daston, as occurring alongside changes in the place of art and aesthetics in scientific imaging (see Objectivity, pages 398-402).

Enrique Soto and Andrew Belmonte

Phys. Fluids 21, 091112 (2009)

The crucial example, and the one that spawned my interest in the renderings I am presenting for you this week, is of the physicist Milton Van Dyke’s atlas of fluid mechanics and the American Physical Society’s photographic contest for publication in Physics of Fluids. The atlas that Daston and Galison see as having been the impetus for the contest’s conception is An Album of Fluid Motion by Van Dyke (1982). The atlas includes photographed black and white images whose “beautiful and revealing” nature “represent a valuable resource for …research and teaching” (Van Dyke in Objectivity, 403). The aesthetic focus that Van Dyke employed when publishing his “beautiful” renderings was taken up by the American Physical Society when it introduced a competition in 1983, judging both the “scientific merit” and the “artistry/aesthetic appeal” of the submitted photographs (see Physics of Fluids’ “Gallery of Fluid Motion” URL at bottom of page or click any of the images). The reason I find these renderings so interesting is that they really hit home the point Galison and Daston make about the shifting and mutually informed nature of all three forms of objective seeing:

“[Objectivity] has traced how epistemology and ethos emerged and merged over time and in context, one epistemic virtue often in point-counterpoint opposition to the others. But although they may sometimes collide, epistemic virtues do not annihilate one another like rival armies. Rather, they accumulate: truth-to-nature, objectivity, and trained judgment are all still available as ways of image making and ways of life in the sciences today.” (Objectivity, 363)

With the final chapter of the full book in mind, it seems we have more evidence of the “accumulation” and making of new “ways of life” than was offered in the article. Color theory, aesthetics, engineering, “pure” science, and many other worlds reveal the coming together of ways of life and ways of knowing. We move full circle, from art being an integral component to the accurate representation of nature, to subjective interpretation being cast out of representation, to trained judgement constraining perspective, and finally to art engaging the “coming-into-existence” of scientifically presented reality. I think there are particularly strong resonances with arguments for “situated knowledges”, “performativity”, “immutable mobiles”, and more in these types of renderings, but I will leave that for discussion (if I manage to procure any).

Some sources:

Daston, L., Galison, P. (2007) Objectivity. New York: Zone Books

Daston, L., Galison, P. (1992) “The Image of Objectivity”, Representations 40: 81-128

“At the beginning of the seventeenth century…thought ceases to move in the element of resemblance. Similitude is no longer the form of knowledge but rather the occasion of error, the danger to which one exposes oneself when one does not examine the obscure region of confusions.” (The Order of Things, 56)

I wanted to make a brief post about a textual rendering that I was reminded of when reading chapter 3 of The Order of Things. Foucault, in tracing the extension and limits of the classical period’s episteme, reminds us of the force with which the new conditions of knowledge were emerging. In the section titled “Order”, Foucault quotes from Sir Francis Bacon’s Novum Organum:

“The human Intellect, from its peculiar nature, easily supposes a greater order and equality in things than it actually finds; and, while there are many things in Nature unique, and quite irregular, still it feigns parallels, correspondents, and relations that have no existence. Hence that fiction, ‘that among the heavenly bodies all motion takes place by perfect circles’.”

The interest I had in the passage came from a reading of Bacon’s “New Atlantis”. In the 1623 publication (and I believe other dates in other languages), Bacon envisions a utopian world not hampered by the idols that Foucault points to. A place where “generosity and enlightenment, dignity and splendour, piety and public spirit” characterize the inhabitants of Bensalem, an island that the narrator of the story comes across by chance (The New Atlantis, Introductory Note). The island is the product of all that can be accomplished by throwing to the wind what Bacon described in the Novum Organum as “empty dogmas”.

The reason I used the word “force” when describing Foucault’s characterization of the death of resemblance is that New Atlantis, the Baconian method, and a casting off of Bacon’s Idols (of the mind, not the divine) had an immense impact on the history of the scientific institution. Indeed, Robert Boyle, Robert Hooke, and the numerous other eminent scientists involved in the establishment of that greatly revered scientific institution, The Royal Society, took much inspiration from the utopian vision Bacon presented in The New Atlantis. Similarly, the modern research university, modern scientific inquiry, and the pragmatic direction that many scientific endeavors appeal to (human comfort, improvement of life, etc.) all seem to have interesting parallels to the world envisioned in The New Atlantis.

The rendering above is of an anonymous source (or at the very least one I don’t know), depicting all the scientific marvels, wonders of nature, and general curiosities found on Bensalem. The importance of the rendering is, for me, the relation it shares with the text of The New Atlantis, and as such I have linked the image to Project Gutenberg’s HTML copy of the work. It is an interesting read for anyone interested in Bacon, the history of science broadly, or, as was the case for me, Foucault’s characterization of the Baroque period.

My rendering this week is at once a rendering of life (or death), a way into a discussion about reductionist/physicalist conceptions of the subjective sense of sensing, and a very well written and rendered comic book.

Robert Kirkman and Tony Moore began publishing “The Walking Dead” in 2003 and have received praise for the piece from much of the comic world. The Walking Dead is a “zombie survival epic” in its theme and execution. Thematically, the story follows a central character attempting to survive in a world filled with zombies; creatures of physical existence that lack all subjective experience or qualia. A strictly mechanical drive animates the flesh of these once-human creatures in their robotic search for brains and blood. With regard to its execution, Kirkman professes a desire to explore a zombie story that goes beyond gore and scare tactics, to a place where the human is ever foregrounded in the story. What is particularly epic about the story is that Kirkman doesn’t envision an end for his tale; the idea is to explore the world in its entirety, regardless of our attachment to characters that might, and will, die horrible deaths.

Perhaps a measure of my obsession with the comic medium, perhaps an ode to all my friends who told me zombies have no place in the academic world, or perhaps a backdoor into ‘life’s renderings’ – whatever the case may be, reading this comic and Heller-Roazen’s Inner Touch over the last week (the latest trade paperback of Walking Dead was recently released) got me thinking about physicalism, reductionism, and zombies.

In particular, I was enthralled as I read over Heller-Roazen’s discussion of Aristotle and the classical conception, misconception, or lack of a conception of consciousness. The interesting part, for me, was that Aristotle depicts the subjective activity of sensation in physical terms before moving on to more ethereal problems. It is with the eye that we see, the nose that we smell, and the hand that we touch. This is not to suggest that the Aristotelian idea of sensation was necessarily physicalist in nature, but rather to point out that the line of reasoning begins with concrete material terms, like noses and eyes, and then moves to wholly immaterial, mental notions of ‘common sense’ (see Heller-Roazen, chapters 2 and 3). The difficulties in equating this common sense with modern notions of ‘consciousness’ or an ‘inner touch’ aside, I think it is interesting to position this phenomenon within a dichotomy that I think shapes much of our discussions about life, and indeed frames the scientific engagement with life: reductionist vs. holistic accounts of life.

Clearly we can take the mechanistic and reductionist account from one of its strongest advocates, Richard Dawkins (see Dawkins, chapter 2 through 4). Holistic accounts of life also abound in our class, from Jacob’s “milieu” to some of the beautiful renderings our peers have provided, for instance. My question, then, is how Dawkins might rationalize this particularly elusive category of the experience of life; in what way can the “inner touch” be something wholly physical, as the rest of life is for Dawkins?

Physicalism enjoys a particularly comfortable seat at scientific round tables. Within the scientific domain this privileged position is most evident in physics, and specifically within the particular brands of science that profess a reductionist epistemology and metaphysics. I find it difficult to determine in what sense the “inner touch” could be rendered physical. The phrase itself carries a material connotation, of course; it is bound up in touching, metaphorically or otherwise. Similarly, Heller-Roazen’s discussion of Avicenna, Condillac, and Maine de Biran is cast in material terms. It is through Condillac and Maine de Biran that we get the phrase “the inner touch” (le tact interieur) in the first place (Heller-Roazen, 231). Condillac’s statue is a query into how one can come to know one’s own body, one’s extension in space, or to know more than Avicenna’s flying creature. It is only through touch, that terribly confusing Aristotelian sensation, that the statue might gain awareness of its body. For Condillac it is resistance to an outside object that builds up the fundamental components of spatial extension in the mind of his statue, but for Maine de Biran the category of resistance had further implications. If an individual merely “shakes his limbs” he will be acutely aware that his muscles resist him to a degree (Heller-Roazen, 229). On Maine de Biran’s account, that the statue could move its hand at all demonstrated that a “fundamental feeling” has already afforded it some perception of its body (Heller-Roazen, 230-231). This is the “absolute sense of existence”, the “affective touch”; it is something that Murr the cat recognized and Aristotle wrestled with, some immaterial, subjective “inner touch” (Heller-Roazen, 231).

Dawkins would seem to align himself with Condillac more than Maine de Biran; knowledge about the self, no matter how mystical its character, comes from mechanical interaction with the world rather than from some ethereal tact interieur. But my query, as I see it, is not answered on those grounds. How the physicalist might conceive of the inner touch, of that entirely personal and universal sensation of sensing, is still problematic; the challenge Maine de Biran brings to the table must be addressed. How might Dawkins explain the curious self-awareness of animals that Ali pointed to in his rendering, for instance?

Physicalism is not without its challenges in the philosophy of mind, and its opposition has garnered some help from zombies (yes!). As Heller-Roazen points out in chapter 21, philosophy has a history of adopting an outsider view in order to “bring into focus that which tends to be too near to be seen with any clarity” (Heller-Roazen, 219). In that same tradition, I would like to offer a final digression in the form of some very alienated (zombified?) philosophy; enter David Chalmers. Chalmers is an Australian philosopher known mostly within the realm of ‘philosophy of mind’. The reason for his appearance in my post is his affection for zombies. While I fear Chalmers might be into zombies for entirely philosophical reasons, any discussion of my favorite class of the not-quite-dead is welcome on my blog. In all seriousness though, Chalmers’ 1996 publication “The Conscious Mind” has some interesting things to say about, well…zombies. In a thought experiment that has angered as much as entertained the world of philosophy, Chalmers asked after the same kind of question I think I am asking: the extensions of the physicalist account of sensation, consciousness, qualia, or an ‘inner touch’. The zombie thought experiment runs something like this (see Chalmers for a better representation):

What if there was a world in which all physical facts are the same as our world except that everyone lacks all sensation, qualia, or whatever mental states you like (they lack a “what it is like”). All the people there are effectively zombies; they act as we do but feel nothing of what we do, they respond to the same physical stimuli as we do, but do not experience them as we do.

The thought experiment is a logical refutation of physicalism by modus tollens:

If P, then Q

Not Q

Therefore, not P

If physicalism is true (all physical facts determine all other facts…like sensation, emotion, etc…) (P) – there is no conceivable way for there to be a world in which all the physical facts are the same as our world and for there to be additional facts (Q)

There is a conceivable world in which all physical facts are the same as our world but in which there are additional facts (i.e. our thought experiment) (Not Q)

Therefore, physicalism is not true.
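For readers who like their modus tollens machine-checked, the schema above can be written as a one-line formal proof. Here is a minimal sketch in Lean; note that `P` and `Q` are abstract placeholders, and the zombie-argument glosses in the comments are my own labels, not Chalmers’ formalization:

```lean
-- Modus tollens: from (P → Q) and ¬Q, conclude ¬P.
-- Illustrative glosses (mine, not Chalmers'):
--   P = "physicalism is true"
--   Q = "no zombie world is conceivable"
theorem modus_tollens {P Q : Prop} (h : P → Q) (hq : ¬Q) : ¬P :=
  fun hp => hq (h hp)
```

Since `¬P` unfolds to `P → False`, the proof simply feeds a hypothetical `P` through `h` and hands the resulting `Q` to `hq` – the whole force of the zombie argument therefore rests not on the logic, which is trivial, but on whether the premises (especially Not Q) are granted.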

Now, a wave of skepticism no doubt just rushed over you…and rightly so. Zombies, even the philosophical brand, are an odd way to refute such a robust perspective, not least of all because of the presumption that zombies are conceivable/possible. The point is not to picture brain-thirsty undead walking about, but rather to focus on the logical consequence of Chalmers’ thought experiment. If zombies are at least conceivable (see: Toxoplasma gondii, the case of Clairvius Narcisse from Haiti, Chalmers, etc…) physicalism is false and some sort of dualism is required. If zombies are not conceivable, and there is certainly a great amount of work that holds this position (see Cottrell, Harnad, Marcus), the point is moot. Whatever position you take, at the very least the idea opens up an interesting dialog (and who doesn’t want to talk about zombies?).

All the physical explanation, all the reduction in the world might help to quantify the sense of being, might help to bring us closer to understanding under what conditions we sense that we are sensing, but there is still a lack of explanation for the raw, personal, ‘inner touch’. If we are mere automata, just zombies responding to physical stimuli, how can we account for the distinction between, for instance, being told in vast detail about the smell of a flower (the physical components that come together to produce and receive smell), and the “what it is like” to smell a flower, its qualia? This is what Chalmers called the “hard problem” of consciousness (the easy problems being how it all functions): why do physical conditions give rise to a rich experiential “inner” life? Why do qualia exist in the first place? Why does life have a subjective component? And of course…why aren’t we all philosophical zombies?

I think this resonates well with the concerns the less reductionist/materialist/physicalist among us have, and it also addresses the question I raised earlier: how exactly can physicalism deal with the experiential aspect of Heller-Roazen’s ‘inner touch’?

On a side note, I would like to take the opportunity to peddle some of Kirkman’s wares. I would highly recommend The Walking Dead series to all of you; it is a zombie story delivered with a refreshingly humanist tone and rendered beautifully by Tony Moore, Charlie Adlard, Cliff Rathburn, and Rus Wooton. The first rendering at the top of the page is linked to the official website; it is well worth your time.

Chalmers, D. (1997) The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press

Cottrell, A. (1999) ‘Sniffing the Camembert: On the Conceivability of Zombies’, Journal of Consciousness Studies 6: 4-12

Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press

Heller-Roazen, D. (2007) The Inner Touch: Archaeology of Sensation. New York: Zone Books

Harnad, S. (1995) ‘Why and How We Are Not Zombies’, Journal of Consciousness Studies 1: 164-167

Jacob, F. (1993) The Logic of Life. Trans Betty Spillmann. New Jersey: Princeton University Press

[Kirkman, R. (c, w), Adlard, C. (p), Rathburn, C. (g, i, p) and Wooton, R. (l).] “The Walking Dead Trade Paperback Collection”. v1-11 (2003-2009), [Image Comics]

Marcus, E. (2004) ‘Why Zombies Are Inconceivable’, Australasian Journal of Philosophy 82: 477-90

My rendering comes in the form of a flash application hosted on the University of Utah’s servers. Please click the image below; you will be redirected:

I chose this rendering in particular because of its interactive aspect. This is more than something to view; it’s an interesting little tool of knowledge. The educational applications are obvious, and indeed I was alerted to this fun flash application by a friend currently studying biology at UBC. The rendering presents a series of increasingly smaller objects, each made visible through the microscopy of the flash animation. The message a student reads here is that each item our physical sciences determine to be real has its place in some wider milieu. The reality or meaning of one of the constituent parts of life, be it a coffee bean or a ribosome, being determined through its relation to other parts expresses the Comtean notion of a milieu as “the sum total of outside circumstances necessary to the existence of each organism” that Canguilhem explored (10). The place of the cell is linked to the place of the rice grain; their existence is a function of their place. Similarly, the heritage of the mechanical in the expression of the place of life is present in the rendering; this is a concept of life devoid of any liveliness, one that plays to the reduction of life from a confluence of meanings to “an intersection of influences” (Canguilhem, 27). But there is also something particularly interesting going on when I interact with this rendering, something that positions me in a milieu that invites life, my body and mind, to be defined by meanings and not strictly by physical relation… at least, I think it does…
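The core trick behind that seamless flight from coffee bean to ribosome is simple: the slider interpolates logarithmically, not linearly, across length scales. Here is a minimal sketch of that logic; the function name and the endpoint scales (1 m down to 1 pm) are my own illustrative choices, not the actual application's internals:

```python
import math

def slider_to_scale(t: float, largest: float = 1.0, smallest: float = 1e-12) -> float:
    """Map a linear slider position t in [0, 1] onto a length scale in meters,
    interpolating logarithmically between `largest` and `smallest`."""
    log_hi = math.log10(largest)
    log_lo = math.log10(smallest)
    # Equal slider movements cover equal ratios of scale, not equal distances:
    return 10 ** (log_hi + t * (log_lo - log_hi))

# Halfway along the bar lands at the geometric mean of the range:
print(slider_to_scale(0.5))  # 1e-06 m, roughly the scale of a bacterium
```

This is why every step of the journey feels equally eventful: moving the bar a centimeter near the rice grain and a centimeter near the nucleotide each multiplies the scale by the same factor.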

As I sit sliding the control bar back and forth I start to realize that I have a great deal of faith in this representation of life. I went back to biology and physics books from my high-school days and breezed over their flat, isolated images of cells and atoms. There is something else to this little tool of knowledge, something greater than the offerings my high-school books presented. Recently I have been reading Lorraine Daston and Peter Galison’s Objectivity, a book that sheds light not only on the historical discrepancy in the varying uses of the word objectivity, but also on how objectivity came to be bound up with reality and authoritative claims about reality. One of the reasons I think Canguilhem’s piece and Objectivity resonate so well together is that they both deploy a distinctly Wittgensteinian reading of the meaning of “milieu” and “objectivity”: the historical use of the word is of primary importance in creating a map of meaning. Where Canguilhem traces the use of milieu from the mechanics of Newton through to its modern use in the biological sciences, Daston and Galison begin with Kantian philosophy and move through to the modern authoritative flavor objectivity enjoys in scientific discourse (see Daston & Galison, 31). It’s more than the method, though. The feeling I got after reading Canguilhem was one of continuity. The modern sense of milieu is one that embodies an entire history; its use is linked as much to Lamarckian understandings as it is to the relation between the modern milieu and biological precision (Canguilhem, 26). The same is true in the case of objectivity: the term brings with it the “truth-to-nature” guarantee that was so common in the mid-18th century, as much as it does the 20th-century reassurance that objectivity is a result of “trained judgment” (Daston & Galison, pages 20-21 show the visual heritage of these three entangled aspects of objectivity).
The constant negotiation between truth-to-nature, mechanical certainty, and trained judgment is what, on Daston and Galison’s account, makes up our current understanding of the objectivity of science. Like Canguilhem’s “milieu”, “objectivity” is a term whose meaning is constantly negotiated and realigned. But there is something outside this framework going on when I slide this bar around and rend layer of scale from layer of scale. It isn’t that the cartoonish approximations of amino acids are convincingly true to nature, nor do I have great faith in the trained judgment that went into the creation of the approximations; it is the form of the rendering that I find convincing. I look at my dinner plate and I see a grain of rice; moments later my body and mind become microscopic with a glance up to the computer screen; I fly from scales of meters to picometers in seconds, and all the while I am sure of my place in this milieu. This is the feeling I had. This is why I am more convinced, on some level, to believe in the ontological reality of these entities. It is a feeling that I think resonates well with both Canguilhem’s piece and the nature of objective authority. When I control time and scale in real time I blind myself to the homogeneity of the image, to its well-rehearsed and quantified relationships, to the “man the scholar” milieu. I see my body, my plate of rice, the tiny adenine nucleotide, and I am convinced.

Techno-art has the capacity to rend time, scale, and matter from their natural(?) place in a way pre-digital scientific renderings could not. Just as photography brought a new kind of mechanical guarantee to objective accounts of the world, digital art and science seem to provide new, convincing modalities for engaging scientific authority: in this case, a tactile level of control I simply couldn’t experience in an 18th-century botanical atlas or the flattened pages of a high-school textbook. Methods of convincing and representing change, as do their media, but the efforts to maintain a rationalized authority over the constituent parts of life remain intact, perhaps even bolstered, in these manipulable atlases of the scientific present. Canguilhem concludes his paper by condemning biological conceptions of life that eliminate meaning in favor of mechanical relations. Perhaps this is the new battlefront in the war for authority over life: mechanical relation hidden beneath a layer of participation and tactile meaning. ActionScript taking action against a perceived loss of faith in the physicochemical milieu, or just an easier way to explain variations of scale… either way, I do love sliding that bar back and forth.

Canguilhem, G. (2001) ‘The Living and Its Milieu’, Grey Room 03: 7-31

Daston, L., Galison, P. (2007) Objectivity. Cambridge: MIT Press