January 24, 2008

EP 1.3: Interpreting Processes

by Noah Wardrip-Fruin · 6:14 am

My second meaning for “expressive processing” is rather different — and itself has two elements.

First, it encompasses the fact that the internal processes of digital media are designed artifacts, like buildings, transportation systems, or music players. As with other designed mechanisms, processes can be seen in terms of their efficiency, their aesthetics, their points of failure, or their (lack of) suitability for particular purposes. Their design can be typical, or unusual, for their era and context. The parts and their arrangement may express kinship with, and points of divergence from, design movements and schools of thought. They can be progressively redesigned, repurposed, or used as the foundation for new systems — by their original designers or others — all while retaining traces and characteristics from prior uses.

Second, unlike many other designed mechanisms, the processes of digital media operate on, and in terms of, humanly meaningful elements and structures. For example, a natural language processing system (for understanding or generating human language) expresses a miniature philosophy of language in its universe of interpretation or expression. When such a system is incorporated into a work of digital media — such as an interactive fiction — its structures and operations are invoked whenever the work is experienced. This invocation selects, as it were, a particular constellation from among the system’s universe of possibilities. In a natural language generation system, this might be a particular sentence to be shown to the audience in the system output. From the output sentence it is not possible to see where the individual elements (e.g., words, phrases, sentence templates, or statistical language structures) once resided in the larger system. It is not possible to see how the movements of the model universe resulted in this constellation becoming possible — and becoming more apparent than other possible ones.
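
To make this concrete, here is a deliberately tiny sketch of a template-based generator. It is hypothetical (every template, word, and function name below is invented for illustration, and real systems are far more elaborate), but it shows the basic situation: the printed sentence reveals almost nothing of the universe of templates and word choices from which it was selected.

```python
import random

# A toy universe of expression: the templates and word choices an author
# has built into the system. None of this structure is visible in any
# single output sentence.
TEMPLATES = [
    "The {agent} {verb} the {object}.",
    "Slowly, the {agent} {verb} the {object}.",
]
LEXICON = {
    "agent": ["detective", "stranger"],
    "verb": ["examined", "pocketed"],
    "object": ["letter", "photograph"],
}

def generate(rng: random.Random) -> str:
    """Select one 'constellation' from the universe of possible sentences."""
    template = rng.choice(TEMPLATES)
    choices = {slot: rng.choice(words) for slot, words in LEXICON.items()}
    return template.format(**choices)

print(generate(random.Random(7)))
# From the printed sentence alone, a reader cannot recover the templates,
# the lexicon, or the selection process that produced it.
```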

To put it another way, in the world of digital media, and perhaps especially for digital fictions, we have as much to learn by examining the model that drives the planetarium as by looking at a particular image of stars (or even the animation of their movement). This is because the model universes of digital fictions are built of rules for character behavior, structures for virtual worlds, techniques for assembling human language, and so on. They express the meanings of their fictional worlds through the design of every structure, the arc of every internal movement, and the elegance or difficulty with which the elements interact with one another.

Trying to interpret a work of digital media by looking only at the output is like interpreting a model solar system by looking only at the planets. If what is in question is the accuracy of the texture of Mars’s surface, this is fine. But it won’t suffice if we want to know whether the model embodies and carries out a Copernican theory — or, instead, places the earth at the center of its simulated solar system. Both types of theories could produce models that currently place the planets in appropriate locations, but examining the models’ wires and gears will reveal critical, and probably the most telling, differences.
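
The same point can be made computationally. The toy sketch below (circular orbits, invented constants, no claim about any real planetarium) computes the earth-relative position of Mars twice: once from a sun-centered model, and once from an earth-centered deferent-and-epicycle model. The surface outputs agree; only the wires and gears differ.

```python
import math

# Toy orbital parameters (circular orbits; units are arbitrary).
R_EARTH, R_MARS = 1.0, 1.52                         # orbital radii
W_EARTH, W_MARS = 2 * math.pi, 2 * math.pi / 1.88   # angular speeds per year

def copernican_mars(t):
    """Sun-centered model: Mars as seen from Earth is the difference
    of two heliocentric orbits."""
    mars = (R_MARS * math.cos(W_MARS * t), R_MARS * math.sin(W_MARS * t))
    earth = (R_EARTH * math.cos(W_EARTH * t), R_EARTH * math.sin(W_EARTH * t))
    return (mars[0] - earth[0], mars[1] - earth[1])

def ptolemaic_mars(t):
    """Earth-centered model: Mars rides an epicycle on a deferent around
    a stationary Earth. Different internal story, same output."""
    deferent = (R_MARS * math.cos(W_MARS * t), R_MARS * math.sin(W_MARS * t))
    epicycle = (R_EARTH * math.cos(W_EARTH * t + math.pi),
                R_EARTH * math.sin(W_EARTH * t + math.pi))
    return (deferent[0] + epicycle[0], deferent[1] + epicycle[1])

for t in (0.0, 0.3, 1.1):
    c, p = copernican_mars(t), ptolemaic_mars(t)
    assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(c, p))
# Examining only these outputs, the two theories are indistinguishable.
```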

I express this view of digital media visually by complicating the picture presented in figure 1.1. A new figure, 1.3, adds a layer called “surface” over the initial data and process. The surface of a work of digital media is what the audience experiences: the output of the processes operating on the data, in the context of the physical hardware and setting, through which any audience interaction takes place. For example, when playing a console game the surface includes the console and any indicator lights or other information it provides, the television or monitor and any image it displays, the sound hardware (e.g., television speakers, stereo, headphones) and any sound produced, and the controller(s) with their buttons, lights, and perhaps vibrations.[4] The audience experience of digital media is that of being connected to, and in some cases through, the surface.

Figure 1.3: Adding surface to data and process.

The surface of a work of digital media is not transparent — it does not allow for direct observation of the data and process elements created and selected by the work’s author(s), or of the technical foundations on which they rest. Given this, adopting only the audience’s perspective makes full engagement with the work’s processes impossible. Some systems, through interaction, may make it possible to develop relatively accurate hypotheses about how the internal systems operate (in fact, some works require this of the audience). But this is a complement to critical engagement with the operations of the work’s processes, rather than a substitute.

This, then, is the second meaning of “expressive processing” at work in this book: the processes of digital media, themselves, can be examined for what is expressed through their selection, arrangement, and operation. I have discussed, above, how a system operating on language (or other humanly meaningful elements) can be interpreted for what its design expresses. But expressive processing also includes considering how the use of a particular process may express connection with a particular school of cognitive science or software engineering. Or how the arrangement of processes in a system may express a very different set of priorities or capabilities from authorial descriptions of the system. Or how understanding the operations of several systems may reveal previously unrecognized kinships (or disparities) between them. Recognizing such things can open up important new interpretations of a digital media system, with aesthetic, theoretical, and political consequences. Such interpretations are not considered for the first time here: some early work in this direction, especially important for digital fiction, was undertaken by Espen Aarseth in his book Cybertext (1997).

Traversal functions

While much 1990s work on digital literature was focused on the audience experience of works — often with the project of comparing this experience to that of postmodern fiction or other lauded non-digital forms — Espen Aarseth’s Cybertext took the unusual step of considering such works as machines. In the book’s opening chapter Aarseth writes that the concept of cybertext “focuses on the mechanical organization of the text, by positing the intricacies of the medium as an integral part of the literary exchange.” His “traversal function” model for understanding such intricacies of literary media, and the audience’s role in operating them, has been widely influential.

In this model Aarseth refers to texts visible on the work’s surface as “scriptons,” textual data as “textons,” and the mechanisms by which scriptons are revealed or generated from textons and presented to the user as “traversal functions.” Aarseth’s model includes seven traversal functions, which are present for any work, and each of which can have a number of values. These functions and values range from the very specific (e.g., whether the work includes explicit links) to the very broad (e.g., whether the audience is somehow involved in selecting or creating surface texts).
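
A minimal sketch may help fix this vocabulary. The texts and link structure below are invented, and actual traversal functions are far richer than a single lookup, but the roles are visible: textons are the stored strings, a scripton is the string that reaches the surface, and the traversal function is the mechanism (here partly operated by the reader) that connects them.

```python
# Textons: "strings as they exist in the text" (stored fragments).
TEXTONS = {
    "start": "You stand at a fork in the path.",
    "left":  "The left path winds into a dark wood.",
    "right": "The right path climbs toward a lighthouse.",
}

# Explicit links -- one of the variables Aarseth's model tracks.
LINKS = {"start": ["left", "right"]}

def traverse(node, choice=0):
    """A traversal function: reveals a scripton (text as it appears on
    the surface) from the underlying textons, here by following a link
    selected by the reader."""
    target = LINKS.get(node, [node])[choice]
    return TEXTONS[target]

# The reader's choice helps determine which scripton appears.
print(traverse("start", choice=1))  # The right path climbs toward a lighthouse.
```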

Cybertext has influenced thinking about digital literature in a number of positive ways. Most importantly, it has made it commonplace to consider such works in terms of their mechanisms. In addition, because Aarseth’s traversal functions are crafted so as to be applicable both to digital and non-digital works, Cybertext has encouraged comparisons that reach back toward the history of process-oriented literary works (including many works created or researched by the Oulipo, a group of writers and mathematicians whose membership includes Raymond Queneau, Georges Perec, and Italo Calvino). Also, with the breadth of possibilities taken in by the traversal function model, Cybertext has encouraged attention to areas of digital literature other than those most widely discussed (especially those with innovative processes) and presented something of a productive challenge to authors of digital literature (given that only a small fraction of the possible types of work described by the model have yet been created). Finally, Aarseth’s outline of the traversal function model, and Cybertext as a whole, considers things most literary scholars were, at that time, content to ignore: computer games. In addition to rather obviously literary games, such as Infocom’s interactive fiction Deadline, Aarseth also went so far as to consider largely text-free games such as Psygnosis’s animated puzzle-solving game Lemmings. Altogether, the result of Cybertext’s influence has been to help create the conditions of possibility for a book such as this one.

However, while consideration of Aarseth’s Cybertext volume and traversal function model has had these important influences, my impression is that the model itself has been more often cited than employed. This book will continue in that tradition, for two reasons — both linked to the model’s focus on the generation of scriptons (surface texts) from textons (textual data). First, many textual systems are difficult to describe in these terms. For example, the natural language generation system Mumble (described in a later chapter of this book, as a companion to the story generation system Tale-Spin) does not contain any easily identified textons. Certainly surface texts are produced, but it is hard to see the process of their production as one of being revealed or generated from underlying textons (or, as Aarseth puts it, “strings as they exist in the text”). This, in turn, points toward the second, more fundamental reason that this book will not employ Aarseth’s model: many of digital media’s (and digital fiction’s) most important processes are not well described as revealing or generating scriptons from textons. To return to the example of Tale-Spin/Mumble, many of the work’s processes are focused on the simulation of character behavior (e.g., making plans to satisfy needs such as hunger, deciding how to act when another character is noticed nearby, or moving appropriately from one location to another). This simulation is carried out entirely independently of any natural language generation, and it is best examined as a process of simulation rather than as a means of generating or revealing scriptons. In fact, very few of the processes considered in this book (and, arguably, few of those considered in Cybertext itself) are fruitfully understood in such terms.[5]
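
The following loose sketch illustrates the distinction. It is emphatically not Tale-Spin’s design or code; all names are invented, apart from a nod to its famously hungry Joe Bear. The point is that the simulation traffics in events and state changes rather than strings, and any rendering into language would be a separate, later process.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    name: str
    location: str
    hunger: int = 0
    events: list = field(default_factory=list)  # events and states, not sentences

def simulate_step(c, world):
    """Plan-driven behavior: when hungry, move to food and eat.
    Nothing here produces language; a separate natural language
    generator could later render these events as text."""
    if c.hunger > 5:
        if c.location != world["food_at"]:
            c.events.append(("move", c.name, c.location, world["food_at"]))
            c.location = world["food_at"]
        else:
            c.events.append(("eat", c.name))
            c.hunger = 0
    else:
        c.hunger += 3  # time passes; hunger grows

world = {"food_at": "meadow"}
joe = Character("Joe Bear", "cave")
for _ in range(4):
    simulate_step(joe, world)
print(joe.events)
# [('move', 'Joe Bear', 'cave', 'meadow'), ('eat', 'Joe Bear')]
```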

All that said, however, it is worth noting that the traversal function model foregrounds something that has so far, in this book, been given short shrift: the role of the audience in operating a work’s mechanisms. This is a topic that I will consider in more detail later in this chapter. In the meantime, this discussion will turn to a larger intellectual movement for which Aarseth’s work could be seen as something of a precursor.

Software studies

Above, I discussed what a computer isn’t. It’s not an interactive movie projector, nor an expensive typewriter, nor a giant encyclopedia. Instead, it’s a machine for running software. That software can enact processes, access data, communicate across networks … and, as a result, emulate a movie projector, typewriter, encyclopedia, and many other things.

Most studies of software (from outside the disciplines of engineering and mathematics) have considered software in terms of what it emulates and how that emulation is experienced from outside the system. But a minority of authors have consistently, instead, written about software as software. This includes considering software’s internal operations (as this book does), examining its constituent elements (e.g., the different levels, modules, and even lines of code at work), studying its context and the material traces of its production (e.g., how the workings of money, labor, technology, and the market can be traced through whitepapers, specification documents, CVS archives, beta tests, patches, and so on), observing how software transforms work and its results (from celebrated cases such as architecture to the everyday ordering and movement of auto parts), and, as the foregoing implies, broadening the types of software considered worthy of study (not just media software, but design software, logistics software, databases, office tools, and so on).

These investigations form a part of the larger field of “software studies” — which includes all work that examines contemporary society through the lens of the specifics of software. For example, while there are many perspectives from which one might examine the phenomenon of Wal-Mart, those who interpret the retail giant with attention to the specifics of the software that provides the foundation for many of its operations (from store restocking to work with far-flung supplier networks) are engaged in software studies. On the other hand, those who study Microsoft without any attention to the specifics of software are not part of the software studies field.

The phrase “software studies” was coined by Lev Manovich in his widely read book The Language of New Media (2001, 48). Manovich characterized software studies as a “turn to computer science” — perhaps analogous to the “linguistic turn” of an earlier era. In his book, software studies takes the form of a turn toward analysis that operates in terms of the structures and concepts of computer science, toward analysis founded in terms of programmability (rather than, say, in terms of signification).[6] In this way, Manovich’s book also helped create the conditions of possibility for this book, which I see as an example of software studies.

To avoid confusion, however, I should point out that this book is not an example of one particular area of software studies: code studies. A number of software studies scholars are interested in interpreting programming language code,[7] but examining code and examining processes are not the same thing. If we think of software as a simulated machine, examining the specific text of code (e.g., a piece of software’s particular variable names or a language’s idiosyncratic structures) is like studying the material properties of the steel that makes up the parts of a mechanism. Studying processes, on the other hand, focuses on the design and operation of the parts of the mechanism. These activities are not mutually exclusive, nor does one subsume the other. Rather, they complement one another — and some investigations may require undertaking both simultaneously.
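
A hypothetical illustration of this difference: the two functions below differ sharply as code (in names, in style, even in using recursion versus iteration), yet they enact the same process. Code studies might attend to the differing texts; a study of processes attends to the mechanism they share. The example is mine, not drawn from any work discussed here.

```python
def factorial(n):
    """Idiomatic, recursive, carefully named."""
    return 1 if n <= 1 else n * factorial(n - 1)

def f(x):
    # Terse, iterative, idiosyncratically named: a different "steel."
    r = 1
    while x > 1:
        r, x = r * x, x - 1
    return r

# Different texts, one process: both realize the same abstract mechanism.
assert all(factorial(n) == f(n) for n in range(10))
```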

Notes

[4] Of course, many console games also have more complicated surfaces, often in the form of additional controllers such as dance mats, simulated musical instruments, or cameras.

[5] As it happens, Tale-Spin is a major topic of one of Cybertext’s chapters.

[6] In 2003 Matthew Kirschenbaum offered his own expansion of Manovich’s term, one influenced by Kirschenbaum’s background in bibliography (the study of books as physical objects) and textual criticism (the reconstruction and representation of texts from multiple versions and witnesses). Kirschenbaum argued that in a field of software studies — as opposed to the rather loose, early “new media” field — “the deployment of critical terms like ‘virtuality’ must be balanced by a commitment to meticulous documentary research to recover and stabilize the material traces.” Kirschenbaum’s Mechanisms made good on this assertion in 2008, a year that also saw the publication of the field’s first edited volume, Software Studies: A Lexicon (Fuller, 2008).

[7] Quite a bit of interesting work has already asserted or demonstrated the importance of interpreting code. For example, work of this sort that emerges from a humanities background includes Maurice Black’s The Art of Code (2002), John Cayley’s “The Code is Not the Text (unless it is the text)” (2002), Rita Raley’s “Interferences: [Net.Writing] and the Practice of Codework” (2002), the second chapter of N. Katherine Hayles’s My Mother Was a Computer: Digital Subjects and Literary Texts (2005), Michael Mateas and Nick Montfort’s “A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics” (2005), and Mark C. Marino’s “Critical Code Studies” (2006).


15 Responses to “EP 1.3: Interpreting Processes”


  1. markku eskelinen Says:

    There are several models in Cybertext. It also includes (pp. 103–105) “a schematic model of internal structure” with four groups of components (the data, the processing engines, the front-end medium, and the users) and some additional feedback loops between them. What is your take on that?

  2. Lev Manovich Says:

    In Language of New Media (completed in 1999), I wrote: “To understand the logic of new media we need to turn to computer science. It is there that we may expect to find the new terms, categories and operations that characterize media that became programmable.” Reading this statement today, I feel that it positions computer science as a kind of absolute truth, a given which can explain to us how culture works in software society. But computer science is itself part of culture. The book which first demonstrated this in a very comprehensive fashion is the New Media Reader, put together by Noah Wardrip-Fruin and Nick Montfort (MIT Press, 2003). The publication of this groundbreaking anthology laid the framework for the historical study of software as it relates to the history of culture. Although the Reader did not explicitly use the term “software studies,” it did propose a new model for how to think about software. By systematically juxtaposing important texts by pioneers of cultural computing and key artists active in the same historical periods, the Reader demonstrated that both belonged to the same larger epistemes. That is, often the same idea was simultaneously articulated in the thinking of artists and scientists who were inventing cultural computing. For instance, the book opens with the story by Jorge Luis Borges (1941) and the article by Vannevar Bush (1945), which both contain the idea of a massive branching structure as a better way to organize data and to represent human experience.
    Therefore, I think that software studies has to both investigate the role of software in forming contemporary culture – and to investigate cultural, social, and economic forces which are shaping development of software itself.

  3. noah Says:

    Markku, yes, there are definitely a variety of models in Cybertext. I’ve picked out the traversal function model because it’s the most widely cited (and probably the one Cybertext develops at the greatest length). The model you mention is closer to my own conception, and so I wish it had received more attention. As far as I know, it is not only little-cited in other writings, but also not used in Cybertext for anything but the discussion of “adventure games” and their ilk.

    The main place I’m going with all this model-building, by the way, is toward the broader notion of “operational logics” that we talked about a bit in Siegen in 2004. When we get there (next week) I’d very much appreciate hearing your thoughts.

  4. Lev Manovich Says:

    To a significant extent, modern thinking about culture can be characterized as “surface studies.” This is true of film studies, media studies, art history, literary studies, etc. Although each of these disciplines produced some work which engages with the production processes which led to the outputs presented to the audiences – films, literature, television programs, etc. – these works are a minority. A great majority of books, articles, and academic papers take these outputs as given; they are then interpreted using different methodologies (Psychoanalysis, Marxism, Feminism, etc.). What is not considered are the theories and concepts of the people involved in production, the technologies involved, and what can be called “cultural logistics” – the organization and consideration of networks of people, machines, media, distribution systems, etc. I think that one of the goals of Software Studies is to focus on all these dimensions and to demonstrate to the rest of humanities why their study is crucial.

  5. Matt K. Says:

    Lev’s comments above remind me that we seem to be layering up the field of academic new media in much the same way as those tedious diagrams in “how computers work” textbooks: platform studies, code studies, software studies. We’ve had what is essentially “screen or interface studies” for a while now. Computers, as we know, are often depicted as stacks or towers of abstractions. And so I wonder a little about unselfconsciously duplicating a metaphor that has its own inherent artifice and limitations.

  6. noah Says:

    Lev and Matt, I think the two of you are hitting on a key issue for our field. I sometimes find myself using a shorthand for software studies that, roughly, works out to “we study the things computer scientists study, but using the approaches of the humanities, arts, and social sciences.” Such an approach would actually leave the constitution and selection of the things studied by computer science completely unexamined. That would be a mistake.

    I’m relatively sure that both Matt’s Mechanisms and Lev’s work in progress avoid that trap. My hope is that Expressive Processing does as well. More particularly, I hope that we manage to avoid the trap while also remaining appropriately aware that the field of digital media is built on the research, results, and tools of computer science.

    Also, Lev, many thanks for the kind words about The New Media Reader!

  7. John Cayley Says:

    On the distinction, made explicit by Noah here, between code studies and (expressive) process(ing) studies, see comment below to note 7 (para 23).

  8. John Cayley Says:

    I take Noah’s point that there is a distinction to be made here between an examination of code and practices of coding that are focused on their (technical, CS) specifics: the (programming) language used, affordances of the specific language, appropriate engagement with those affordances, programming style, etc. Mateas and Montfort’s contribution here is within the above purview; Marino is wanting to make interventions on both sides of the distinction. (I don’t know the Black book.) However, the citation of my own essay here, along with Raley’s and Hayles’s, should not, I think, be seen as only or even chiefly in the realm of such ‘code studies.’ These latter writers all address the cultural and interpretative significances of the fact that the literary objects we’re examining are made, in part, through practices of coding. The code and coding to which we refer is abstracted, and the statements we make about code will have a bearing on your address to expressive process(ing). Arguably, expressive process(ing) cannot exist without practices of coding, and so the properties, methods and implications of such coding practices (as distinct, for example, from practices of writing or inscription) will speak to the concerns of EP.

  9. serial consign Says:

    large-scale conversations

  10. noah Says:

    John, I think you’re completely right. There are a couple of uses of “code” in play here, and I’m lumping them together. One is basically literal — people like Marino, Black, and Mateas & Montfort are looking closely at source code and the languages in which it is written. The other use of “code” is more like the way I use “process(es/ing)” — a focus on the operations specified through code, rather than the particular text of it. Though, of course, both senses are somewhat in play in all of them. Anyway, I should definitely write something a bit more careful and clear for the final manuscript.

  11. Mark M. Says:

    Is this to go so far as to say that there is an epistemology or ideology implied here?

  12. Mark M. Says:

    On some level, doesn’t the author also interact with the work through a surface? Or is that too fine grained?

  13. Mark M. Says:

    I’m now wondering about this term “expressive.” Does “expressive” require that someone (an author) is “expressing.” I live part-time in the world of composition where words like “expressive” are challenged by words like “construction.” Or is the “expression” a meta-level effect of certain design choices. (I’ll read farther and find out!)

  14. Mark M. Says:

    I’m not sure I would use this analogy. But let me come back to this. The relationship of steel to design does not quite seem to be the same as that of code to processes — or maybe it’s that the link to steel suggests (or evokes) a kind of a priori status for code, which is full of modeling and design choices, too, to a degree I don’t typically associate with steel. The objection is more about evocations, perhaps, than precision. More on this later.

  15. Matt Barton Says:

    As long as you’re listing people who’ve written on this, I might humbly posit my own article http://www.freesoftwaremagazine.com/articles/focus-software_as_art/
