Patrick Finn [1]
University of Victoria, Canada
People who say that tomorrow belongs to them are
usually angling for a piece of today.
- Geoffrey Nunberg
Few who read this article will feel that their daily diet of information is too thin. People may want better information, but they are not suffering from any lack of quantity. In the past, readers faced with multiple options turned to the critic for reading recommendations. In academic terms, this came to mean that most colleges and universities taught a list of authors, from Chaucer to T.S. Eliot, who fit certain aesthetic criteria. This group came to be known as the literary canon. Over the past three decades, there has been a shift away from this evaluative model toward one that is more broadly political or cultural. A crucial part of this move has been an attack on the very foundations of canonicity and aesthetic judgment. While I would not deny the benefits of opening the canon, I suggest that in our current information environment readers could benefit from a new generation of editors acting as guides through our literary and cultural heritage.
Corporate content providers increasingly answer readers' desire for the kind of evaluative comment once supplied by large-scale editing projects. The reading public, abandoned by the academy, continues to read, but it does so without the benefit of engaged scholarship. I would like to suggest that intellectuals, whether aesthetically or politically minded, can learn from the traditions of hermeneutics and philology, which offer aspects of formalism that can provide content while maintaining historical context. This renewed formalism can give us a set of shared terms with which to engage the texts of our past and the readers of our present.
The quotation from Geoffrey Nunberg at the top of this essay highlights what I see as a trend toward capitalizing on the information glut. In arguing about who owns the future, prognosticators have very often pictured a world that is much shinier for those who arrive technologically prepared. Television channels like CNN Headline News and CTV News Net present a world unfolding against the backdrop of a new epistemology that the technology itself is said to necessitate. Failure to appreciate the challenges of the "new e-economy" will, we are told, result in our being left behind.[2] This trend is apparent in a number of ways but makes itself most readily known in contemporary advertising for business solutions, up-to-the-minute news, and bandwidth provision. In the language put forth by companies like IBM, CNN, Shaw, Rogers, and AT&T, a new vision of the multi-tasking human is forming. The implication is that those who think along linear lines will be replaced by a new group capable of monitoring the NASDAQ, current headlines and the weather simultaneously. A direct target of this message is the Baby Boom market. Fortunately for the cash-flush Boomer, the anxiety fades after the purchase of a technological supplement. The digital solution overcomes human shortcomings with bandwidth, memory and central processing.
I would like to argue that the current transition from print-based communication to a form that is primarily digital will be well served by a new application of philology and hermeneutics. The hermeneutics I will argue for falls between the old Heideggerian/Gadamerian "philosophical hermeneutics" and the "philological hermeneutics" that started roughly with Augustine and culminated with Schleiermacher, von Humboldt and Boeckh.[3] While I maintain the hermeneutic concern with the pursuit of meaning, what separates a hermeneutic examination of textual processes from current poststructural and postmodern investigations is the realization that text is at all points caught up in context. Thus, a new formalism does not necessarily have to attempt to pull textuality free of the world of its readers. It does however remain committed to notions of meaning and of engaged critique. I hope to show that power, as it relates to the web, bears a strong relation to the presentation, packaging and selling of understanding. As such, I believe that a critique of retail meaning is warranted.
As a site for examination, I offer the locus between the material textual and the allegedly hyperreal world of the digital - a geography that has heard calls for the death of the book in order to allow for the birth of hyperspace.[4] Working with the theme of the current volume of Mots Pluriels, I suggest that we should consider the old-style textual Editor and philologist as an "Old Master" and the current Web Designer as a "New Apprentice." In so doing, I hope to illustrate the ways in which past methods can be radicalized for use in new media. I would like to look first at the ways in which the textual has been moving into the digital age.
Many early projects in computer-mediated editing experienced delays as standard forms of tagging and storage were tested.[5] Currently, we seem to be settling into a mélange of SGML, HTML, and XML markup,[6] with the last poised to dominate the field.[7] While electronic encoding started out cataloguing and storing information, it now focuses on the speed and strength of searches through databases and archives. It is at this point that we confront meaning. Editors who create textual databases must turn to notions of meaning if they hope to provide usable structures in their interface.
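To make the encoding question concrete, here is a minimal sketch, in Python, of text that has had tags added to it and a structural search run against it. The XML vocabulary is hypothetical: the element names and line numbering are illustrative only and follow no real project's schema.

```python
# A minimal sketch, assuming a hypothetical XML vocabulary of my own
# devising; the element names follow no real project's schema.
import xml.etree.ElementTree as ET

encoded = """\
<play title="Hamlet">
  <act n="1"><scene n="5">
    <line n="25" speaker="Ghost">Revenge his foul and most unnatural murther.</line>
    <line n="29" speaker="Hamlet">Haste me to know't, that I ... may sweep to my revenge.</line>
  </scene></act>
</play>"""

root = ET.fromstring(encoded)

# Once the text carries tags, a search can be structural ("lines spoken
# by the Ghost") rather than a bare string match over raw characters.
for line in root.iter("line"):
    if line.get("speaker") == "Ghost":
        print(line.get("n"), line.text)
```

It is in deciding what the tags should be, and what a structural search should be able to ask, that the editor's notion of meaning enters the interface.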
The structure of these databases and archives should relate to the relevant text. This would mean, for example, that a site dedicated to Hamlet would foreground the fact that almost every record of the play registers some form of revenge narrative. For that narrative to exist, a particular ordering of output must be acknowledged. This may seem obvious, but if we create databases that treat each word or each line as separate and then allow for any output, we end up forgetting that in order for Hamlet to seek revenge, there must first be a murder. This need not be a rigid linearity, but it must be something more than a word list. Beyond this primary appreciation of a text's particular hermeneutic circle, there are three main areas of concern for new editors. Each involves human participation in the creation of and interaction with the database. The first is the data-entry interface, the second is the process of creating search-enabling algorithms and the third is the output decision, or reader interaction.
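The ordering argument can be put as a toy sketch, assuming nothing more than Python's built-in sqlite3 module and a schema of my own devising: once each record carries an explicit place in the narrative sequence, "murder before revenge" survives retrieval; strip the sequence away and the database degenerates into the word list warned against above.

```python
# A toy sketch, assuming a hypothetical schema: an explicit sequence
# number keeps the narrative dependency (no revenge without a prior
# murder) available to any output routine that chooses to honour it.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE lines (seq INTEGER, event TEXT, text TEXT)")
db.executemany(
    "INSERT INTO lines VALUES (?, ?, ?)",
    [
        (1, "murder", "The serpent that did sting thy father's life ..."),
        (2, "revenge", "Haste me to know't ... may sweep to my revenge."),
    ],
)

# Ordered retrieval preserves the story; a SELECT without ORDER BY treats
# each row as a free-floating datum, i.e. the word list warned against.
for seq, event, text in db.execute("SELECT seq, event, text FROM lines ORDER BY seq"):
    print(seq, event, text)
```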
Primary interfaces, such as those using basic text editing programs, are problematic because they involve long hours of repetitious work, usually run on a volunteer or near-volunteer basis. (We would be on a dramatically different path if we had a large supply of adequately trained and funded workers to input data at the primary stage.) Currently some large companies employ great numbers of non-experts to create digital texts, the most famous of these being the Chadwyck-Healey group.[8] In all of these cases, there is a distance between those working with the text and the actual content.
Secondary interfaces involve the implementation of the various software and hardware solutions necessary to run an editorial project. This level of interface is changing rapidly as researchers test new ways to encode their work and as software and hardware companies attempt to tap into the funds available in the academic marketplace.[9] Thus, editors can now use prefabricated Document Type Definitions (DTDs) when marking up a text for publication. This saves time but sacrifices the variation that is part of text-specific editing practice. It is not that DTDs can never be reused, but the decision to reuse one should never be treated as final, even after an edition is complete. This is particularly important in humanistic studies, which are predicated on the ability to grow and change with research rather than remaining relentlessly committed to a preordained method.
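The trade-off can be sketched with the third-party lxml library, which validates documents against a DTD. The miniature DTD and document below are hypothetical; the point is only that a prefabricated definition, reused unexamined, has no place for a text-specific feature and will reject or flatten it.

```python
# A sketch using the third-party lxml library; the DTD and document are
# hypothetical. A prefabricated DTD that knows only <line> elements has
# no place for a text-specific feature such as marginalia.
import io
from lxml import etree

prefab_dtd = etree.DTD(io.StringIO(
    "<!ELEMENT play (line+)>"
    "<!ELEMENT line (#PCDATA)>"
))

doc = etree.fromstring(
    "<play><line>To be, or not to be ...</line>"
    "<marginalia>an early reader's gloss</marginalia></play>"
)

print(prefab_dtd.validate(doc))   # False: no rule admits <marginalia>
for error in prefab_dtd.error_log:
    print(error.message)
```

An editor who extends the definition keeps the feature; one who trims the text to fit the borrowed DTD has let the tool make an editorial decision.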
The third point involves the growing number of texts that incorporate the needs of researchers, students and general readers by providing a greater breadth and depth of information than has normally been available in scholarly monographs or trade paperbacks. Readers may now expect a rich background of texts and materials to complement each text they consult. In each of these areas, editors have a role to play in the brokerage of content.
It would be insufficient to merely arrange these materials along a formal line - for example all texts and graphics supporting the theme of revenge in Hamlet. Here the sociohistorical should complement the formal. Once we admit forms of cultural history, we need to ensure that some time goes to contemporary (in this case digital) history. The medium is only part of the message. For example, primary textual content (an edited version of Hamlet) is complemented by external information (Shakespeare's biography, English theatre history, discussions of iambic pentameter, transcriptions of other versions, etc.) and by the new notion of the simultaneous experience of each (in a multimedia presentation) thereby creating an expression not only of one specific Shakespearean tragedy, but of a form of scholarship and a time and place of production and consumption. The very environment in which this operates is one that is currently defined by corporate marketing strategies. Recalling Foucault's thesis that meaning fundamentally connects to power, we should realize that the main source of information for college and university students is a form of media developed by researchers in conjunction with the US military.[10] Further, even though its chief overlay (the World Wide Web) was developed by an independent researcher (Tim Berners-Lee) it is increasingly in the hands of corporate capital and government regulators.[11] I am not advocating a sadomasochistic quest for a conspiracy theory, but merely pointing out that scholarship which fails to consider these material conditions is inadequate.
As I mentioned above, the marketing of new technology is currently making use of a notion of simultaneity, which it suggests is inherent in digital thought. Hi-tech marketing strategies foreground this notion, implying there is a new epistemology based on an ability to experience multiple streams of information concurrently. The only way to ensure simultaneity is to keep up with the newest releases of computer hard- and software. Conveniently, this message is most often marketed to the Baby Boom Generation who are now reaching the age at which they are beginning to feel threatened by "the new" and can afford to address these fears with expensive machinic remedies. While the fifty-plus demographic has ensured the rapid uptake of computer technology, it is their progeny, the so-called Echo Generation, who have cemented the digital age. Nowhere is the techno-historical transition more clear than in the move from the book to the digital text. To highlight this change, it is useful to draw comparisons to a similar shift that occurred when the West moved from a manuscript- to a print-based culture.
In her 1997 book Hamlet on the Holodeck: The Future of Narrative in Cyberspace, Janet Murray usefully describes our current state as "the second incunabulum." She argues that present conditions mirror those that occurred during the transition from manuscript to print culture. At that time, many felt that the advent of the printed book would erase manuscript culture. Murray argues that during a technological shift of this nature we must go through a manifold transformation, which will ultimately lead to a richer textual condition.[12] During this transformation, we must endure the radical successionist rhetoric of the fetishizers of technology,[13] but in the end, the technology finds its own uses and can even coexist with previous modes and methods.[14] In both incunabular periods, there is a widespread belief that new technology entails a radical epistemological shift that affects our self-conception and our ability to make long-term plans.
For Murray, the complexity inherent in the transition makes prediction difficult. I agree, but would add that forward-thinking editing and encoding may counter the fetishists who are poised to make us look foolish in retrospect. I would argue that editors and textual scholars versed in new technologies can offer useful heuristics to help us navigate the data-drenched desktops of the future by doing what they have always done - applying rigorous scholarship, based on a pursuit of contextualized meaning, to various modes of information. This strategy has the added effect of counterbalancing the mercenary approach to technology, which preys on a desire for a return to meaning by providing data run through inadequate filters - filters built on cost-effectiveness rather than sound research.
Against the multi-faceted change inherent in shifting technologies, Murray's argument focuses on the constancy of a human need for interaction with narrative. It seems that if this interaction is to be shared or discussed, there must be some common ground or meeting place. Even if this ground must remain overtly provisional, we must defer to some aspects of linear construction. Works (be they sites or books) that fail to consider this ignore the history of texts as communicative events and foreclose areas of debate that could otherwise remain open. The need for narrative, whether from manuscript, codex or database is a critical point. No matter how contingent one person's reading may be, it is better than no reading at all - we simply cannot communicate with one another without beginning somewhere. In terms of the current digital shift, this entails a respect for our textual and cultural traditions, as well as an appreciation of the exigencies of a technology far too often rendered transparent by lack of critique.
Allow me to present an example. As I have argued above, the current technological shift involves the delivery of texts as well as their technologies of composition. Thus, an editor will approach the hypertextual piece Afternoon by Michael Joyce differently than he or she would Langland's Piers Plowman or James Joyce's Ulysses.[15] Using these latter texts as examples, I would like to spend a moment looking at works created in the manuscript and print book tradition that are now crossing over to the electronic (as opposed to those documents created primarily in and for the digital realm). A good deal of recent editorial theory has focused on the computer's ability to incorporate the multiplicity inherent in poststructural textuality. For example, Jerome McGann calls the Gabler Ulysses created using the TUSTEP program[16] "the first truly postmodern Ulysses."[17] While I find some very real strengths in the Gabler text, I would like to suggest that rigorously antifoundationalist theory (such as deconstruction or constructivism) based on infinite contingency is anathema to editing. This does not mean that the New Apprentices who edit must surrender the successful points of theoretical critique, but rather that they may have to acknowledge a split that sees them as socially progressive, but textually conservative.
For the purposes of editorial theory, and for reading in general, it seems that whether Ulysses was written by a fully constituted subject named James Joyce or constructed by an inter-war European society, it has had some existence as a linear, spatiotemporal product. One aspect of its study must take this into account. Failure to admit at least a provisional point of discussion acts as a barrier between a people and their culture.[18]
I am speaking of the strategic implementation of structure. Consider a limit case: a great deal of attention was paid to the Taliban's destruction of cultural artifacts in Afghanistan. Implicit in Western condemnation of these acts is the notion that whether a ruling group agrees or disagrees with the views of the past, there is some responsibility to maintain cultural heritage. I would argue that this same idea is at play when we edit texts. If we disassemble our past in order to provide a certain kind of reading but in the process leave only rubble behind, we perform an act that is similar in results to that of the Taliban. Consider Article 27 of the "Universal Declaration of Human Rights":
(1) Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
(2) Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.[19]
In terms of the first point in Article 27, I would argue that actively working to obscure the documents of our past (whether or not we approve of their content) denies citizens their right to "participate in the cultural life of the community," and "enjoy the arts." Further, failure to afford some position to the author of a given text denies that person's right to the protections outlined in the second point. Let me be perfectly clear - while I would never argue for any form of teaching that seeks to gain ascendancy at the expense of other voices, I believe that we must recognize that impeding the transmission of documents and artifacts from our past is not only poor scholarship, it comes dangerously close to criminal negligence.[20]
One solution is the separation of the contemplative from the more productive aspects of editorial theory. This is the strategic implementation of structure - a radicalized version of philology and hermeneutics. It seems clear that even the most anti-canonical thinkers will be convinced of the efficacy of guerilla textuality once we examine the power at play in current content management.
There are various names for this type of information architecture in other areas of digital technology and communications. In advertising, it is the "relationship" and involves "convergence," and in marketing it surfaces as "branding."[21] What commercial groups have found is that shortly after readers gain limitless amounts of text they begin asking for filters.[22] The result is marketing strategies based on broad-spectrum campaigns that sell a unified identity offering individuals a more focused level of choice. Thus, the technology is used first to create the illness - the decentring effects of information overload - and is then offered as the cure. We see evidence of this reconciliation in sites that specialize in personalized, channeled information, presented as "portals," which are rapidly multiplying on the web.[23]
It is important to highlight the differences and similarities between content provision and editing as they bear on edited or marked-up texts. An inherent part of the editor's function in the past was a curatorial assumption of responsibility for all texts in the tradition of a given work. This involved a power that was taken up, or borrowed, in the name of improving or protecting the author's message and conveying it to a reading audience. Regardless of the tradition, a de facto condition of the editor's power is a responsibility to state one's methodology and then adhere rigorously to its dictates. Failure to do so can negate an entire edition.[24] This is not so in content provision, whose only real goal is provision. By utilizing a broad spectrum of editorial skills, the web's New Apprentices can reward the reader/viewer with a compilation that is more aware of its roots in the codex, the critical/variorum edition and current portal sites. Those who fear an ardent, imposed linearity should be reassured that one of the strengths of the digital work is that its underlying methodologies are much easier to test.[25] This is only possible, however, if we discontinue the trend towards presenting the computer as an invisible, non-historical device. Herein lies the importance of letting "Old Masters" teach "New Apprentices." Far from arguing for the ethereal or exclusionary notions of organicism and/or unity that get formalism into trouble with cultural critics, this is an appeal for a set of terms that can lead to a shared discussion. The digital editor becomes a host, viscerally linked to his/her methodologies and scholarship.[26] To realize this goal it will not be sufficient merely to map the current practices of textual scholarship onto the web. The Old Masters must learn new tricks. Let us turn to these Old Masters for a moment.
Textual scholars in the intentionalist or Anglo-American empiricist tradition[27] have argued that a good editor can find and publish a unified text where one did not previously exist. Others, such as Jerome McGann,[28] see this notion as untenable and argue that documents in existence should be viewed as artifacts possessing an authority or validity only in terms of their given historical contexts. This approach generally questions the authority that intentionalist editing assumes. The first group is looking for a needle in a haystack; the second is more interested in the working conditions on the farm.
Poststructural literary theory and its cultural antecedents have changed the nature of the debate between these traditions. The current blend of textual scholarship and literary theory that underwrites editorial theory has moved away from notions of a fixed text and toward Jerome McGann's sociohistorical approach. At the same time, computer-mediated editing has seemed to fit most readily with the McGannian critique. This is perhaps due to the difficulties involved in having a computer decide between substantive variants, and the simplicity with which those same machines can pictorially record historical texts such as medieval manuscripts or first edition print books. The debate then is not so much resolved as elided or delayed by our current transition. As I have argued, into this vacuum has come the "content provider," effectively marketing what its cultural counterparts no longer provide - meaning through unified texts. Examples of this are the aforementioned Chadwyck-Healey group, which makes its money through subscription; bibliomania, which attempts to sell related books that it links to public domain texts; and xrefer, which profits from both subscription and advertising.[29] While such operations have taken over the provision of texts, scholarship has developed a more abstract form of analysis.
George Landow argues that hypertext offers the perfect arena to test the assertions of poststructural theory.[30] He envisions a world where the reader is free to explore a limitless array of pathways through computer libraries. Building on this argument, liberationist theorists such as Nicholas Negroponte have argued that new electronic webs of communication will supersede the book. This line of thinking fails to address the bottleneck that occurs when limitless information meets limited time. Whether general reader, scholar or student, we must assume that someone wanting to read Piers Plowman has a finite amount of time in which to do so. Rarely will students go to class or readers to the library asking if the text that they are reading has another four dozen instantiations they can peruse. Given a limited amount of time in which to read, there is a desire for effective textual pathways - and if there is a desire for an effective textual pathway, there will be a desire for a particularized pathway. In the simplest possible terms, people do not have the time to read everything. One of the fundamental jobs of editors, then, is to make recommendations that readers can evaluate and use or reject.
We need to return to this work. These recommendations need not be intentionalist paths, but could be a series of informed, documented hypotheses. Hypotheses in this strain bear a striking resemblance to what we may in the past have called "books" and in the end satisfy the desire for a text that arises when we ask for advice about which edition of a particular work to read.
In any case, much is made of the ability of advanced technology to resolve long-standing debates over complex textual traditions like those of Piers Plowman and Ulysses. In both cases, arguments are made that a definitive text will never be found, and some suggest that digitizing and representing all relevant material dissolves the problem. The hoped-for result would be editions that no longer succeed one another in an adversarial loop of paradigmatic shifts. However, until intellectual property laws can satisfy the requirements for production placed upon scholars and creators, there is likely to be little in the way of true cooperation.
In the meantime, the "New Masters" of the World Wide Web are those who will solve this equation by providing today's most desired commodity: meaning.
Many difficulties trouble an easy movement from critical paper editions to critical electronic ones. Most challenges to "hands-on" editing, such as that practised by the Anglo-American line, accuse editors of fetishizing a fixed text with a single author-function.[31] Yet foreclosing on earlier projects because they lack poststructural credentials does a disservice to scholarship. Moreover, those who espouse the virtues of computerized diplomatic editions (that is, editions presenting photographic reproductions of all pertinent texts) often fail to analyze fully the material nature of their interface.[32] The situation becomes even more complicated once we add markup, metadata, search terms and links. As Donna Haraway notes, no link or path is or can be neutral.[33] Thus, the digital computer interface is not a McLuhanesque "hot medium."[34] Each time we click on a link, we are making a selection from a solution set predefined by the designer - this is controlled choice, dictated by the interpreter or provider. The power inherent in creating these links requires clarification, and one possible shape for that clarification is sketched below.
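As a hedged sketch, entirely my own invention rather than any existing standard: a link structure that records who posited each connection and why, so that the designer's predefined solution set is itself open to examination.

```python
# A hypothetical link structure: each link carries the editor who posited
# the connection and the rationale behind it, making the "controlled
# choice" of the interface examinable rather than invisible.
from dataclasses import dataclass

@dataclass
class EditorialLink:
    source: str      # anchor in the reading text
    target: str      # document or passage linked to
    editor: str      # who posited the connection
    rationale: str   # the argument for it, open to challenge

link = EditorialLink(
    source="Hamlet 1.5",
    target="Belleforest, Histoires tragiques (1570)",
    editor="A. Editor",       # placeholder name, purely illustrative
    rationale="Posited narrative source for the revenge motif.",
)

# The reader can follow the link or interrogate it.
print(f"{link.source} -> {link.target}")
print(f"  posited by {link.editor}: {link.rationale}")
```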
Issues of access, raised by people like Carla Hesse and Paul Duguid,[35] as well as the theorization of the various aspects of media that create value-added editions, also need to be factored into the changing landscape of textual authority. Critics of the visual components of text such as Jerome McGann and Randall McLeod[36] have forcefully argued for the intricate presence of information in cultural artifacts, whether these are pieces of parchment, papers or geographic locations. Each of these elements challenges the architects of information in their quest for relevance. The obvious outcome is that editing is becoming more challenging, more necessary and yet is receiving less attention. This dearth has provided the perfect site for a commercial response.
This is not likely to change any time soon. As stores of information grow, editing will become increasingly important. If taken seriously, an examination of philology, hermeneutics and hands-on manuscript/book study could present interesting challenges to the more abstract aspects of poststructural theory. At the same time, research along these lines might confront the techno-liberalism that has been at the heart of some of the more confused digital projects to date[37] - and all of this while creating new offerings for online readers. In this sense, I would argue that electronic editors should become expert guides through information. A critical edition in these terms is an archive or library that allows for a potentially infinite variety of testable pathways, all of which can be supplemented with the advice of those who have gone before.
The digital edition can serve as an expandable, participatory field with posited pathways offering downloadable/printable trade editions or relevant copy-texts as reference points. Having single critical texts with links to electronic versions should address even the most rigorous challenges from the new diplomatic editors, while at the same time presenting works that allow access to a variety of versions and histories of texts. These editions would provide testable editorial methodologies, which could afford textual scholars and theorists an opportunity to pursue the long-standing debates over intentionality, authority and textuality. Perhaps most importantly, these theoretical frameworks could shed light on the concerted effort that has gone into creating the image of the computer as an invisible medium by techno-fetishists who seek to benefit from the unimpeded realization of their vision of the future.[38]
Within this new medium, we can provide access to relevant data, while ensuring that decisions on how and where to link become available for examination. Thus, a reader would be able to follow a posited pathway through a textual tradition, web site, archive or corpus, and at the same time examine the construction of the path itself. As a whole, the project becomes an exercise in public hermeneutics or a more participatory engagement with culture. In this mode, the best electronic texts will be those that can posit and maintain a viable hermeneutic circle.
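Under the same hypothetical conventions as the sketch above, a posited pathway might be nothing more exotic than an ordered sequence of stops, each paired with the editorial claim behind it; the reader can follow the route or audit its construction.

```python
# A sketch of a "posited pathway" through a textual tradition. The Hamlet
# witnesses named are real; the editorial annotations are invented for
# illustration.
pathway = [
    ("Q1 (1603)", "Shortest early witness; shown for its variant readings."),
    ("Q2 (1604/5)", "Fullest early text; taken as the base of the reading order."),
    ("F1 (1623)", "Folio cuts and additions flagged where they alter the plot."),
]

def follow(path):
    """Traverse the posited reading order."""
    for stop, _ in path:
        print("read:", stop)

def examine(path):
    """Audit how the path was constructed, one claim per stop."""
    for stop, claim in path:
        print(f"{stop}: {claim}")

follow(pathway)    # the reader's route through the tradition
examine(pathway)   # the construction of the path itself
```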
Finally, allow me to revisit the notion of language as artifact and event in the context we have created. If any methodology is worthy of consideration, and all methodologies are testable, then the Net's New Apprentices have a role to play at each of the three interface points mentioned earlier. At the primary, or data-entry, level, editors can assist in finding changes that occur in transmission, just as has been done with the work of medieval scribes and modern-day typesetters. As an important part of this process, we should recognize that the workers involved need greater purchase on their work, if not for ideological reasons, then for aesthetic ones - in order to make better webs. At the secondary level, they must help analyze the way a narrative changes in its interaction with databases and algorithms. Is our culture changing in the transformation from print to digital text transmission? What are the material ramifications of the physical presence of computers? Cultural critics and editors of all stripes must record the relationships of corporations such as AOL Time-Warner, Bell and Howell, Oracle, and Microsoft, which are assuming a role very different from that of the publishing houses of the past.[39] Finally, at the third level, the output or decision-making stage, the New Apprentices need to work to ensure that the electronic field of exchange is as open and welcoming as possible. Somewhere along this path, we hope that the Apprentice becomes a Master, or at least that the two trade places for a time.
I will finish by returning to the quotation that appears at the head of this essay. Geoffrey Nunberg states, "[p]eople who say that tomorrow belongs to them are usually angling for a piece of today."[40] My argument is that too much of the Net is currently run by anglers. It is my belief that a large part of tomorrow should belong to the public - to everyone. This will not happen without responsible editors. Though we may not know the future of the labyrinth we are creating, we should do our best to leave a discernible string behind us. This may mean paying a little more attention to the best of the Old Masters and rejecting the work of some of the more haphazard New Apprentices.
Notes
[1] My special thanks go to Alan Galey for his keen insights and comments on the final drafts of this piece. I would also like to acknowledge the helpful critiques that I received from the Mots Pluriels readers, who greatly improved this article. All errors in judgment and construction are mine, and do not belong to those who worked diligently to rescue me from my grander mistakes.
[2] So entrenched is the idea that new is better that current television ads attempt to erase memories of the burst high-tech bubble by speaking of the "new new economy."
[3] Very briefly, the first relates to Heidegger's and then Gadamer's attempts to establish a hermeneutics that would escape the teleology of the ontotheological tradition. The second is more directly historical and more specifically concerned with defining understanding and quantifying its pursuit. I would not seek to undermine either critique, but rather to distance my work from the potential for circularity in the Heideggerian-Gadamerian tradition and from the second tradition's need to invoke the "voice of God" as textual authority. For English-speaking readers interested in pursuing the work of Friedrich D.E. Schleiermacher, Wilhelm von Humboldt and Philipp August Boeckh, a good starting point is Mueller-Vollmer (1985).
[4] For more on this debate see the series of essays for and against this proposal in Nunberg (1996).
[5] One of the best places to pursue this debate online is Humanist, an electronic seminar devoted to all aspects of humanities computing.
[6] For the remainder of the essay, I will refer to marked-up text in general as text that has had tags added to it in order to allow it to be presented in a web browser or similar reader.
[7] The move towards XML goes well beyond the use of computing in textual and cultural studies; Microsoft president Steven Ballmer has claimed that the company's future is being staked on XML. There are myriad articles on this topic; for a general account, see Greene (2000).
[8] The Chadwyck-Healey group is part of the larger Bell and Howell Company that trades in a variety of different forms of information products.
[9] Two recent though methodologically different developments in this area are worth mentioning. The first is the move from the web site to the web portal. For a useful critique of this move, see Miller (2000). The second involves Microsoft's recent commitment to back XML as the primary force behind its software. XML, which is (almost) as readable by humans as by computers, allows for a level of document description that can greatly enhance some forms of document searching.
[10] I would suggest here not only the early structural analysis of power in Foucault's work on prisons and mental hospitals, but also the later formulations involving identity politics and power on an incrementally smaller basis.
[11] See also ISOC's Internet Histories site, and the homepage of Berners-Lee.
[12] Murray (1997): 28.
[13] Perhaps the leading force in this movement in the second incunabulum is Wired Magazine and its most colorful proponent Nicholas Negroponte. Negroponte's "Gumball Theory" of information distribution is typical of the flamboyance that surrounded early expectations of hi-tech. The theory, first outlined in Negroponte's Being Digital (1995): 13, holds that all publishing will move to a pay-as-you-go model similar to purchasing gum from coin-operated gumball machines. One of my concerns about the model can be expressed with the same metaphor: those machines have more gum than I need, and I have no control over which gum I get.
[14] For an in-depth analysis of the cross-fertilization of media, see Grusin & Bolter (1999).
[15] The current developments around the editing of Piers Plowman and Ulysses offer useful examples of the transition from manuscript to print to digital form. Both are among a handful of works considered by editorial theorists to be the most challenging in terms of editing. For further information on the Piers Project, see Creating an Electronic Archive of Piers Plowman. For recent work on the digitization of James Joyce's novel, see James Joyce's Ulysses in Hypermedia or Archiving the Ephemeral: The James Joyce Collection at Buffalo.
[17] McGann (1985): 283ff.
[18] The "people" here are the speakers, readers, interpreters, and/or victims of the English language.
[19] United Nations (1948).
[20] Of course, we cannot possibly maintain all artifacts, but this is exactly the reason why content provision and editing are so important to the operation of our society.
[21] Relationship theory has developed out of the work of Charles Berger (in particular see Berger & Bradac 1982) and involves a bond that is created through communicative exchange. In its simplest form, relationship theory argues that you can offer customers/clients or targets a feeling of comfort through relentless data exchange. Convergence is a broader notion describing the ways in which divergent aspects of a business can benefit from alignment. A useful example here would be the AOL/Time-Warner merger, which seeks to benefit from the convergence of print and digital media. Branding is the notion that companies can benefit from the retailing of a name and image rather than a specific product. The most famous examination of this field is Naomi Klein's No Logo (2000).
[22] For example, a recent television advertisement for the investment firm Merrill Lynch told its viewers: "The average online investment company has over 8,500 investment reports online ... which begs the question, who has the time to read 8,500 reports?"
[23] Consider the related field of portal browsers like AOL and the recently updated version of MSN Explorer. The trend in this sector, which was led by AOL, is particularly telling. Its browser, famous for being cumbersome, was predicted by the techno-literate to fail. Yet its massive success lies in its ability to filter information, to limit rather than expand choice. Microsoft, manufacturer of the leading stand-alone browser (Internet Explorer), has countered the AOL portal model with its own MSN Explorer.
[24] A poignant example of this was the ill-fated Garland edition of James Joyce's Ulysses. Edited by Hans Walter Gabler, this text was the first major editorial project to use a primarily computerized form of editing. Unfortunately, as was discovered by the scholar John Kidd, reported in the Washington Post and later discussed in the Papers of the Bibliographical Society of America, the Gabler project did not maintain its strict commitment to extant documents and instead took a series of shortcuts that led to the cancellation of its publication. For a general introduction to what is now referred to as "The Scandal of Ulysses," see Arnold (1992) and Klyn (1997).
[25] This is of course particularly so in the case of editions that make all relevant materials available through links.
[26] As an example, one would find entirely different textual parties at the "homes" of the self-professed Bardolator Harold Bloom and the renegade editor of Shakespeare, Stanley Wells.
[27] This is the "New Bibliographic School" usually associated with W.W. Greg, Fredson Bowers and now G. Thomas Tanselle, and which works from a chosen copy-text towards an eclectic edition. The work of the New Bibliographers can be usefully traced in the University of Virginia journal, Studies in Bibliography.
[28] McGann is chief among the "sociohistorical" editors. In A Critique of Modern Textual Criticism (1983) he provides a rigorous critique of the New Bibliographic School's belief in a unified notion of the author and argues for a more detailed account of the social practices surrounding textual production. McGann's work has led to widespread change in editorial theory. The major journal in this tradition is TEXT: An Interdisciplinary Annual of Textual Scholarship.
[29] See bibliomania and xrefer.
[30] Landow (1997).
[31] The critics most often associated with the Anglo-American tradition are W.W. Greg, Fredson Bowers and G. Thomas Tanselle. The so-called Greg-Bowers-Tanselle line has come under fire from the sociohistorical school that is headed by Jerome McGann. Digital editing finds its strongest critique in the work of Peter Shillingsburg, whose views run close to those of McGann.
[32] This debate is currently playing itself out in the study of James Joyce's Ulysses. John Kidd has edited the text following the dictates of the Greg-Bowers-Tanselle line; Michael Groden is working on the aforementioned Hypermedia Ulysses Project that develops the work of Gabler and McGann; and Sam Slote, at the State University of New York, Buffalo, is currently undertaking a program of digitally imaging all relevant documents.
[33] Haraway (1996): esp. 125-130 and 230-231.
[34] McLuhan (1997): esp. Chapter 2: "Media Hot and Cold."
[35] See Hesse (1996) and Duguid (1996).
[36] In his discussion of "bibliographic codes" McGann argues convincingly for the meaning imparted by the physical aspects of a text. Among a variety of examples, he discusses F. Scott Fitzgerald's worry that the illustration for the cover of The Great Gatsby was so good it rendered the text redundant. McLeod has taken this one step further, attempting to read bibliographic codes in isolation. Also operating under the pseudonyms Random Cloud and Random Clod, McLeod is perhaps the most creative textual scholar working today. He teaches at the University of Toronto and is said to be currently at work "studying the shape of sonnets." See the Department of English at the University of Toronto.
[37] Two examples come immediately to mind. The first was a presentation at the 2000 MLA in Washington, DC, that dealt with the new frontiers of editorial theory. In a tribute to a new vision of "conjectural emendation," two scholars involved with the Blake Digital Text Project took turns reading aphorisms that hinted at the freedom inherent in no longer being bound by meaning. This series of attacks on Fredson Bowers and linearity, which were followed by idolatrous descriptions of McGann, managed to fully throw off the shackles of relevance. A second and more tangible example is the online Canterbury Tales Project, which has gained fame for its use of algorithms developed in evolutionary biology. What is compelling about this project is its ability to use computer technology to impose a normative reading that would seem little short of textual eugenics if carried out by human hands.
[38] It has been suggested that it is equally important to consider the opposite of the techno-fetishists - that is to say, those who vilify new technology. I am not at this time convinced that this raises the same level of concern in terms of the topic that I am discussing.
[39] It is not that we have not had monopolistic or corporate publishing before, but we have yet to experience the particular types of interactions that involve issues as divergent as author/publisher copyright and proprietary user interfaces.
[40] Nunberg (1996): 11.
Bibliography
Arnold, B. The Scandal of Ulysses: The Sensational Life of a Twentieth-Century Masterpiece. New York: St Martin's Press, 1992.
Berger, C. & Bradac, J. Language and Social Knowledge: Uncertainty in Interpersonal Relations. London: E. Arnold, 1982.
Bolter, J.D. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, New Jersey: L. Erlbaum Associates, 1991.
Bowers, F. Essays in Bibliography, Text, and Editing. Charlottesville, Virginia: University Press of Virginia, 1975.
DeRose, S.J., Durand, D.G., Mylonas, E. & Renear, A.H. "What is Text, Really?" Journal of Computing in Higher Education 1:2, Winter 1990. 3-26.
Doss, P.E. "Traditional Theory and Innovative Practice: The Electronic Editor as Poststructuralist Reader." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 213-224.
Duggan, H.N. "Some Unrevolutionary Aspects of Computer Editing." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 77-98.
Duguid, P. "Material Matters: The Past and Futurology of the Book." The Future of the Book. G. Nunberg, ed. Berkeley, California: University of California Press, 1996. 63-102.
Flanders, J. "Trusting the Electronic Edition." Computers and the Humanities 31:4, 1997/1998. 301-310.
Greetham, D.C. Scholarly Editing: A Guide to Research. New York: Modern Language Association of America, 1995.
Greg, W.W. "The Rationale of Copy-Text." Studies in Bibliography 3, 1950/1951. 19-36.
Greene, J. "Microsoft's Big Bet." Business Week. October 30, 2000. https://www.businessweek.com/2000/00_44/b3705001.htm. Accessed: July 11, 2001.
Grusin, R. & Bolter, J.D. Remediation: Understanding New Media. Cambridge, Massachusetts: MIT Press, 1999.
Haraway, D. Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience. New York: Routledge, 1996.
Hesse, C. "Books in Time." The Future of the Book. G. Nunberg, ed. Berkeley, California: University of California Press, 1996. 21-36.
Hockey, S. "Creating and Using Electronic Editions." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 1-22.
Klein, N. No Logo: Taking Aim at the Brand Bullies. Toronto: Knopf, 2000.
Klyn, D. "Haveth Versions Everywhere, or, Here Comes Everybody's Edition(s) of Ulysses." 1997. https://members.tripod.com/~fn0rd/Joyce.htm. Accessed: July 11, 2001.
Lancashire, I. "Editing English Renaissance Electronic Texts." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 117-143.
Landow, G.P. Hypertext 2.0: The Convergence of Contemporary Critical Theory and Technology. 2nd ed. Baltimore: Johns Hopkins University Press, 1997. See also: https://65.107.211.207/ht/jhup/contents2.html. Accessed: July 11, 2001.
Lavagnino, J. "Completeness and Adequacy in Text Encoding." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 63-76.
McGann, J.J. A Critique of Modern Textual Criticism. Charlottesville, Virginia: University Press of Virginia, 1983.
McGann, J.J. "Ulysses as a Postmodern Text: The Gabler Edition." Criticism 27, Summer 1985. 283-305.
McGann, J.J. The Textual Condition. Princeton: Princeton University Press, 1991.
McLuhan, M. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: University of Toronto Press, 1962.
McLuhan, M. Understanding Media: The Extensions of Man. Cambridge, Massachusetts: MIT Press, 1997.
Miller, V. "Search Engines, Portals and Global Capitalism." Web.Studies: Rewiring Media Studies for the Digital Age. D. Gauntlett, ed. Oxford: Oxford University Press, 2000. 113-121.
Mueller-Vollmer, K., ed. The Hermeneutics Reader. New York: Continuum Publishing, 1985.
Murray, J.H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: Free Press, 1997.
Negroponte, N. Being Digital. New York: Knopf, 1995.
Nunberg, G., ed. The Future of the Book. Berkeley, California: University of California Press, 1996.
Patterson, L. Negotiating the Past: The Historical Understanding of Medieval Literature. Madison, Wisconsin: University of Wisconsin Press, 1987.
Robinson, P.M.W. "Is There a Text in These Variants?" The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 99-115.
Ross, C.L. "The Electronic Text and the Death of the Critical Edition." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 225-231.
Shillingsburg, P.L. "Polymorphic, Polysemic, Protean, Reliable, Electronic Texts." Palimpsest: Editorial Theory in the Humanities. G. Bornstein & R.G. Williams, eds. Ann Arbor, Michigan: University of Michigan Press, 1993. 29-43.
Shillingsburg, P.L. "Principles for Electronic Archives, Scholarly Editions, and Tutorials." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 23-35.
Shillingsburg, P.L. Scholarly Editing in the Computer Age: Theory and Practice. Ann Arbor, Michigan: University of Michigan Press, 1996.
Sperberg-McQueen, C.M. "Textual Criticism and the Text Encoding Initiative." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 37-61.
Sutherland, K. & Deegan, M., eds. Electronic Text: Investigations in Method and Theory. Oxford: Oxford University Press, 1997.
Tanselle, G.T. Textual Criticism and Scholarly Editing. Charlottesville, Virginia: University Press of Virginia, 1990.
Thaller, M., Buzzetti, D. & Aumann, S. "Digital Manuscripts: Editions v. Archives" (conference session). Session site: https://gonzo.hd.uib.no/allc-ach96/Panels/Thaller/Thaller1.html. Accessed: December, 1999.
United Nations. "Universal Declaration of Human Rights." 1948. https://www.un.org/Overview/rights.html. Accessed: July 1, 2001.
Unsworth, J. "Electronic Scholarship; or, Scholarly Publishing and the Public." The Literary Text in the Digital Age. R.J. Finneran, ed. Ann Arbor, Michigan: University of Michigan Press, 1996. 233-243.