January 24, 2008

EP 1.3: Interpreting Processes

by Noah Wardrip-Fruin · 6:14 am

My second meaning for “expressive processing” is rather different — and itself has two elements.

First, it encompasses the fact that the internal processes of digital media are designed artifacts, like buildings, transportation systems, or music players. As with other designed mechanisms, processes can be seen in terms of their efficiency, their aesthetics, their points of failure, or their (lack of) suitability for particular purposes. Their design can be typical, or unusual, for their era and context. The parts and their arrangement may express kinship with, and points of divergence from, design movements and schools of thought. They can be progressively redesigned, repurposed, or used as the foundation for new systems — by their original designers or others — all while retaining traces and characteristics from prior uses.

Second, unlike many other designed mechanisms, the processes of digital media operate on, and in terms of, humanly meaningful elements and structures. For example, a natural language processing system (for understanding or generating human language) expresses a miniature philosophy of language in its universe of interpretation or expression. When such a system is incorporated into a work of digital media — such as an interactive fiction — its structures and operations are invoked whenever the work is experienced. This invocation selects, as it were, a particular constellation from among the system’s universe of possibilities. In a natural language generation system, this might be a particular sentence to be shown to the audience in the system output. From the output sentence it is not possible to see where the individual elements (e.g., words, phrases, sentence templates, or statistical language structures) once resided in the larger system. It is not possible to see how the movements of the model universe resulted in this constellation becoming possible — and becoming more apparent than other possible ones.
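To make this concrete, consider a minimal sketch of a template-based generator (a hypothetical illustration written for this discussion, not a description of any system examined in this book). Its universe of possible sentences is fixed by templates and word lists, yet the surface sentence it outputs carries no visible trace of which template was chosen or which alternatives existed for each slot.

```python
import random

# A toy template-based generator (hypothetical; not any system discussed here).
# Its "universe" of possible sentences is defined by the templates and word
# lists below; each run selects one constellation from that universe.
TEMPLATES = [
    "{agent} {verb} the {object}.",
    "slowly, {agent} {verb} the {object}.",
]
LEXICON = {
    "agent": ["the detective", "a stranger"],
    "verb": ["examined", "ignored"],
    "object": ["letter", "locked door"],
}

def generate(seed=None):
    """Pick one template, fill its slots, and return a surface sentence.

    The returned string shows nothing of which template was selected or
    what the alternatives for each slot were.
    """
    rng = random.Random(seed)
    choices = {slot: rng.choice(words) for slot, words in LEXICON.items()}
    sentence = rng.choice(TEMPLATES).format(**choices)
    return sentence[0].upper() + sentence[1:]

print(generate(seed=1))  # prints one sentence drawn from the model's universe
```

Even this trivial system illustrates the asymmetry: the data and process together define a space of outputs, while any single output conceals the shape of that space.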

To put it another way, in the world of digital media, and perhaps especially for digital fictions, we have as much to learn by examining the model that drives the planetarium as by looking at a particular image of stars (or even the animation of their movement). This is because the model universes of digital fictions are built of rules for character behavior, structures for virtual worlds, techniques for assembling human language, and so on. They express the meanings of their fictional worlds through the design of every structure, the arc of every internal movement, and the elegance or difficulty with which the elements interact with one another.

Trying to interpret a work of digital media by looking only at the output is like interpreting a model solar system by looking only at the planets. If what is in question is the accuracy of Mars's surface texture, this is fine. But it won't suffice if we want to know whether the model embodies and carries out a Copernican theory — or, instead, places the earth at the center of its simulated solar system. Both types of theory could produce models that currently place the planets in appropriate locations, but examining the models' wires and gears will reveal critical differences, and probably the most telling ones.
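The same point can be put in code, as a deliberately crude, hypothetical pair of orreries (made-up function names, rounded constants): both place Mars in the same apparent direction as seen from Earth on any given day, so their surface output agrees, but their internal wires and gears embody different theories.

```python
import math

# Two toy orreries (illustrative only). They agree about where Mars appears
# from Earth on a given day -- the same surface output -- while disagreeing
# internally about what orbits what.
YEAR_EARTH, YEAR_MARS = 365.25, 687.0   # approximate orbital periods, in days
R_EARTH, R_MARS = 1.0, 1.52             # approximate orbital radii, in AU

def copernican_direction(day):
    """Sun-centered model: Earth and Mars both circle the Sun, and the
    apparent direction of Mars is computed from Earth's moving position."""
    ex = R_EARTH * math.cos(2 * math.pi * day / YEAR_EARTH)
    ey = R_EARTH * math.sin(2 * math.pi * day / YEAR_EARTH)
    mx = R_MARS * math.cos(2 * math.pi * day / YEAR_MARS)
    my = R_MARS * math.sin(2 * math.pi * day / YEAR_MARS)
    return math.atan2(my - ey, mx - ex)

def geocentric_direction(day):
    """Earth-centered model: Mars rides a deferent circle around a fixed
    Earth plus an epicycle tuned to Earth's year -- different machinery."""
    deferent_x = R_MARS * math.cos(2 * math.pi * day / YEAR_MARS)
    deferent_y = R_MARS * math.sin(2 * math.pi * day / YEAR_MARS)
    epicycle_x = R_EARTH * math.cos(2 * math.pi * day / YEAR_EARTH + math.pi)
    epicycle_y = R_EARTH * math.sin(2 * math.pi * day / YEAR_EARTH + math.pi)
    return math.atan2(deferent_y + epicycle_y, deferent_x + epicycle_x)

for day in (0.0, 100.0, 400.0):
    # Same surface output, different internals.
    assert math.isclose(copernican_direction(day), geocentric_direction(day), abs_tol=1e-9)
```

Looking only at the returned angles cannot distinguish the two models; reading the functions can.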

I express this view of digital media visually by complicating the picture presented in figure 1.1. A new figure, 1.3, adds a layer called “surface” over the initial data and process. The surface of a work of digital media is what the audience experiences: the output of the processes operating on the data, in the context of the physical hardware and setting, through which any audience interaction takes place. For example, when playing a console game the surface includes the console and any indicator lights or other information it provides, the television or monitor and any image it displays, the sound hardware (e.g., television speakers, stereo, headphones) and any sound produced, and the controller(s) with their buttons, lights, and perhaps vibrations.4 The audience experience of digital media is that of being connected to, and in some cases through, the surface.

Figure 1.3: Adding surface to data and process.

The surface of a work of digital media is not transparent — it does not allow for direct observation of the data and process elements created and selected by the work's author(s), or of the technical foundations on which they rest. Given this, adopting only the audience's perspective makes full engagement with the work's processes impossible. Some systems, through interaction, may make it possible to develop relatively accurate hypotheses about how the internal systems operate (in fact, some works require this on the part of the audience). But this is a complement to critical engagement with the operations of the work's processes, rather than a substitute.

This, then, is the second meaning of "expressive processing" at work in this book: The processes of digital media, themselves, can be examined for what is expressed through their selection, arrangement, and operation. I have discussed, above, how a system operating on language (or other humanly meaningful elements) can be interpreted for what its design expresses. But expressive processing also includes considering how the use of a particular process may express connection with a particular school of cognitive science or software engineering. Or how the arrangement of processes in a system may express a very different set of priorities or capabilities from authorial descriptions of the system. Or how understanding the operations of several systems may reveal previously unrecognized kinships (or disparities) between them. Recognizing such things can open up important new interpretations for a digital media system, with aesthetic, theoretical, and political consequences. Such interpretations are not considered here for the first time: some early work in this direction, especially important for digital fiction, was undertaken by Espen Aarseth in his book Cybertext (1997).

Traversal functions

While much 1990s work on digital literature was focused on the audience experience of works — often with the project of comparing this experience to that of postmodern fiction or other lauded non-digital forms — Espen Aarseth’s Cybertext took the unusual step of considering such works as machines. In the book’s opening chapter Aarseth writes that the concept of cybertext “focuses on the mechanical organization of the text, by positing the intricacies of the medium as an integral part of the literary exchange.” His “traversal function” model for understanding such intricacies of literary media, and the audience’s role in operating them, has been widely influential.

In this model Aarseth refers to texts visible on the work’s surface as “scriptons,” textual data as “textons,” and the mechanisms by which scriptons are revealed or generated from textons and presented to the user as “traversal functions.” Aarseth’s model includes seven traversal functions, which are present for any work, and each of which can have a number of values. These functions and values range from the very specific (e.g., whether the work includes explicit links) to the very broad (e.g., whether the audience is somehow involved in selecting or creating surface texts).
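As a rough, hypothetical sketch of this vocabulary (Aarseth's model is a descriptive framework, not an algorithm, so the code below invents its own names): the stored strings play the role of textons, the strings printed to the surface play the role of scriptons, and the small function that turns one into the other, partly under the audience's control, stands in for a traversal function.

```python
# Toy rendering of Aarseth's terms (illustrative only).
TEXTONS = {
    "start": "You stand at a fork in the path.",
    "left": "The left path ends at a locked gate.",
    "right": "The right path opens onto a meadow.",
}
LINKS = {"start": ["left", "right"], "left": [], "right": []}

def traverse(node, choose):
    """A simple traversal function: reveal the texton at `node` as a scripton
    on the surface, then follow whichever link the audience selects."""
    while node is not None:
        print(TEXTONS[node])                     # texton revealed as scripton
        options = LINKS[node]
        node = choose(options) if options else None

# An audience member who always takes the first available link.
traverse("start", lambda options: options[0])
```

Different values of Aarseth's variables correspond to different shapes this machinery might take: whether links are explicit, whether the same traversal always yields the same scriptons, whether the audience helps produce the texts at all, and so on.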

Cybertext has influenced thinking about digital literature in a number of positive ways. Most importantly, it has made it commonplace to consider such works in terms of their mechanisms. In addition, because Aarseth's traversal functions are crafted so as to be applicable both to digital and non-digital works, Cybertext has encouraged comparisons that reach back toward the history of process-oriented literary works (including many works created or researched by the Oulipo, a group of writers and mathematicians whose membership includes Raymond Queneau, Georges Perec, and Italo Calvino). Also, with the breadth of possibilities taken in by the traversal function model, Cybertext has encouraged attention to areas of digital literature other than those most widely discussed (especially those with innovative processes) and presented something of a productive challenge to authors of digital literature (given that only a small fraction of the possible types of work described by the model have yet been created). Finally, Aarseth's outline of the traversal function model, and Cybertext as a whole, considers things most literary scholars were, at that time, content to ignore: computer games. In addition to rather obviously literary games, such as Infocom's interactive fiction Deadline, Aarseth also went so far as to consider largely text-free games such as Psygnosis's animated puzzle-solving game Lemmings. Altogether, the result of Cybertext's influence has been to help create the conditions of possibility for a book such as this one.

However, while consideration of Aarseth's Cybertext volume and traversal function model has had these important influences, my impression is that the model itself has been more often cited than employed. This book will continue in that tradition, for two reasons — both linked to the model's focus on the generation of scriptons (surface texts) from textons (textual data). First, many textual systems are difficult to describe in these terms. For example, the natural language generation system Mumble (described in a later chapter of this book, as a companion to the story generation system Tale-Spin) does not contain any easily-identified textons. Certainly surface texts are produced, but it is hard to see the process of their production as one of being revealed or generated from underlying textons (or, as Aarseth puts it, "strings as they exist in the text"). This, in turn, points toward the second, more fundamental reason that this book will not employ Aarseth's model: many of digital media's (and digital fiction's) most important processes are not well described by the process of revealing or generating scriptons from textons. To return to the example of Tale-Spin/Mumble, many of the work's processes are focused on the simulation of character behavior (e.g., making plans to satisfy needs such as hunger, deciding how to act when another character is noticed nearby, or moving appropriately from one location to another). This simulation is carried out entirely independently of any natural language generation, and it is best examined as a process of simulation rather than as a means of generating or revealing scriptons. In fact, very few of the processes considered in this book (and, arguably, few of those considered in Cybertext itself) are fruitfully understood in such terms.5
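To make the distinction tangible, here is a schematic sketch of character-behavior simulation of the sort described above (hypothetical code, not Tale-Spin's actual representations or algorithms). The processes of interest operate on needs, plans, and locations; nothing here reveals or generates a scripton unless some separate component later renders the resulting events as language.

```python
from dataclasses import dataclass, field

# Schematic character simulation (illustrative; not Tale-Spin's actual design).
@dataclass
class Character:
    name: str
    location: str
    hungry: bool = True
    believes_food_at: str = "meadow"
    plan: list = field(default_factory=list)

def make_plan(c: Character):
    """Form a plan to satisfy hunger: go where food is believed to be, then eat."""
    if c.hungry:
        c.plan = [("move", c.believes_food_at), ("eat", None)]

def step(c: Character):
    """Carry out the next planned action, returning a structured event
    (not a sentence)."""
    if not c.plan:
        return None
    action, arg = c.plan.pop(0)
    if action == "move":
        c.location = arg
    elif action == "eat":
        c.hungry = False
    return (c.name, action, arg, c.location)

bear = Character(name="a bear", location="cave")
make_plan(bear)
event = step(bear)
while event is not None:
    print(event)        # e.g. ('a bear', 'move', 'meadow', 'meadow')
    event = step(bear)
```

A natural language generator could be handed these structured events and asked to produce sentences, but the simulation itself neither stores nor emits strings of the work's text.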

All that said, however, it is worth noting that the traversal function model foregrounds something that has so far, in this book, been given short shrift: the role of the audience in operating a work’s mechanisms. This is a topic that I will consider in more detail later in this chapter. In the meantime, this discussion will turn to a larger intellectual movement for which Aarseth’s work could be seen as something of a precursor.

Software studies

Above, I discussed what a computer isn’t. It’s not an interactive movie projector, nor an expensive typewriter, nor a giant encyclopedia. Instead, it’s a machine for running software. That software can enact processes, access data, communicate across networks … and, as a result, emulate a movie projector, typewriter, encyclopedia, and many other things.

Most studies of software (from outside the disciplines of engineering and mathematics) have considered software in terms of what it emulates and how that emulation is experienced from outside the system. But a minority of authors have consistently, instead, written about software as software. This includes considering software’s internal operations (as this book does), examining its constituent elements (e.g., the different levels, modules, and even lines of code at work), studying its context and material traces of production (e.g., how the workings of money, labor, technology, and the market can be traced through whitepapers, specification documents, CVS archives, beta tests, patches, and so on), observing the transformations of work and its results (from celebrated cases such as architecture to the everyday ordering and movement of auto parts), and, as the foregoing implies, a broadening of the types of software considered worthy of study (not just media software, but design software, logistics software, databases, office tools, and so on).

These investigations form a part of the larger field of "software studies" — which includes all work that examines contemporary society through the lens of the specifics of software. For example, while there are many perspectives from which one might examine the phenomenon of Wal-Mart, those who interpret the retail giant with attention to the specifics of the software that provides the foundation for many of its operations (from store restocking to work with far-flung supplier networks) are engaged in software studies. On the other hand, those who study Microsoft without any attention to the specifics of software are not part of the software studies field.

The phrase “software studies” was coined by Lev Manovich, in his widely-read book The Language of New Media (2001, 48). Manovich characterized software studies as a “turn to computer science” — perhaps analogous to the “linguistic turn” of an earlier era. In his book software studies takes the form of a turn toward analysis that operates in terms of the structures and concepts of computer science, toward analysis founded in terms of programmability (rather than, say, in terms of signification).6 In this way, Manovich’s book also helped create the conditions of possibility for this book, which I see as an example of software studies.

To avoid confusion, however, I should point out that this book is not an example of one particular area of software studies: code studies. A number of software studies scholars are interested in interpreting programming language code,7 but examining code and examining processes are not the same thing. If we think of software as a kind of simulated machine, examining the specific text of code (e.g., a piece of software's particular variable names or a language's idiosyncratic structures) is like studying the material properties of the steel that makes up the parts of a mechanism. Studying processes, on the other hand, focuses on the design and operation of the parts of the mechanism. These activities are not mutually exclusive, nor does one subsume the other. Rather, they complement one another — and some investigations may require undertaking both simultaneously.

Notes

4Of course, many console games also have more complicated surfaces, often in the form of additional controllers such as dance mats, simulated musical instruments, or cameras.

5As it happens, Tale-Spin is a major topic for one of Cybertext’s chapters.

6In 2003 Matthew Kirschenbaum offered his own expansion of Manovich's term, one influenced by Kirschenbaum's background in bibliography (the study of books as physical objects) and textual criticism (the reconstruction and representation of texts from multiple versions and witnesses). Kirschenbaum argued that in a field of software studies — as opposed to the rather loose, early "new media" field — "the deployment of critical terms like 'virtuality' must be balanced by a commitment to meticulous documentary research to recover and stabilize the material traces." Kirschenbaum's Mechanisms made good on this assertion in 2008, which also saw the publication of the field's first edited volume, Software Studies: A Lexicon (Fuller, 2008).

7Quite a bit of interesting work has already asserted or demonstrated the importance of interpreting code. For example, work of this sort that emerges from a humanities background includes Maurice Black’s The Art of Code (2002), John Cayley’s “The Code is Not the Text (unless it is the text)” (2002), Rita Raley’s “Interferences: [Net.Writing] and the Practice of Codework” (2002), the second chapter of N. Katherine Hayles’s My Mother Was a Computer: Digital Subjects and Literary Texts (2005), Michael Mateas and Nick Montfort’s “A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics” (2005), and Mark C. Marino’s “Critical Code Studies” (2006).
