July 14, 2005

Reading Processes

by Noah Wardrip-Fruin · 6:39 pm

Part of the argument for procedural literacy (Michael’s article, my reply) is that we must learn to “read processes.” That is, we must learn to interpret the operations of systems… not just the outputs. There are a number of reasons for this, a few of which I’ll briefly sketch here.

First, as Ted Nelson began arguing in the 1970s, we’re living in a world increasingly defined by processes — processes designed and implemented by humans. These processes can be designed poorly, or implemented poorly, or designed and implemented to help some people and make life difficult for others… but this is the fault of humans, and it can be corrected (and sooner rather than later, if we can learn to spot bad designs before they’re widely adopted). To put it another way, “the computer just works that way” is a non-argument. The importance of this knowledge lay behind Nelson’s now-famous cry from the front of Computer Lib / Dream Machines: “You can and must understand computers NOW.”

Second, more specifically, we’re entering a period in which the results of computational processes are increasingly used to form assumptions or offered as evidence. This is one thing when we’re forming assumptions about whether the weekend will be sunny before deciding to have a picnic — but the results of computer simulations are also increasingly used when we’re making weightier decisions about matters such as city planning and greenhouse gas emissions. To take one of my favorite examples, Jay Forrester’s urban dynamics simulations (which inspired SimCity) can be used to try to figure out how to build a healthy city, but we need to view any results from his work through an interpretation of the structures and processes of the simulations — which Garn and others have argued are deeply flawed (for example, by their cities’ lack of dynamic interaction with suburbs). We need to learn to ask questions of the designs of simulations that are analogous to the questions we ask when presented with other forms of evidence (e.g., Was the study double-blind? What’s the n?). Unfortunately, because these questions will often have to do with the unexamined assumptions of the simulation’s designers, it may only be possible to pose them after a literate and informed examination of the simulation’s processes.
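
To make that concrete, here is a minimal sketch — not Forrester’s actual model, and with stocks, rates, and constants invented purely for illustration — of how a system-dynamics simulation embeds its designers’ assumptions in its processes. Looking only at the output numbers, you would never see that the model contains no suburbs at all; that assumption is visible only by reading the process itself.

```python
# A toy stock-and-flow simulation in the spirit of system dynamics.
# This is NOT Forrester's urban dynamics model -- every stock, rate, and
# constant below is invented for illustration. The point is that the
# assumptions live in the process: nothing here lets population or jobs
# flow between the city and its suburbs, because no suburb exists in the
# model at all.

def simulate(years=50, population=100_000, housing=40_000):
    """Run a simple city model and return (population, housing) per year."""
    history = []
    for _ in range(years):
        # Attractiveness falls as housing gets scarce (a modeling choice).
        crowding = population / (housing * 2.5)
        attractiveness = max(0.0, 1.5 - crowding)

        # In- and out-migration depend only on conditions inside the city --
        # an assumption a reader of the outputs alone cannot see.
        in_migration = 0.05 * population * attractiveness
        out_migration = 0.04 * population * crowding
        births_minus_deaths = 0.01 * population

        # Housing construction responds immediately to crowding, with no
        # delay -- another assumption baked into the process.
        construction = 0.03 * housing * crowding
        demolition = 0.01 * housing

        population += births_minus_deaths + in_migration - out_migration
        housing += construction - demolition
        history.append((round(population), round(housing)))
    return history

if __name__ == "__main__":
    for year, (pop, homes) in enumerate(simulate(), start=1):
        if year % 10 == 0:
            print(f"year {year:2d}: population {pop:7d}, housing units {homes:7d}")
```

The interpretive question this post is after — why is there no suburb in this model, and whose interests does that omission serve? — can only be asked of the model’s structure, not of a chart of its results.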

Third, and I think this is particularly close to the heart of many of us who read and write at GTxA, we’re increasingly using computational processes as a means of expression — from electronic literature to computer games to net art. Just as careful reading of exemplary literature is central to the development of most writers, so careful reading of processes is important to our development. Those of us with some computer science coursework under our belts have learned something about reading processes in terms of things like computability and efficiency. And those of us who’ve spent some time around computing subcultures have probably learned something about reading them in terms of elegance (see Nick’s writeup of The Art of Code). But we’re only beginning to learn to read processes in terms of what they express.

Which brings me to my topic. I’m looking around for examples of people who have done these sorts of interpretations of processes, and I’m guessing I will find a number of useful examples in writings about process-based arts (which may or may not involve digital computation) — though areas such as system dynamics clearly contain relevant work as well. To start things off, tomorrow I’ll share some of my observations from reading Marjorie Perloff’s Radical Artifice: Writing Poetry in the Age of Media (1991). I’d be happy to hear people’s comments on what I write there, as well as any suggestions here for other examples of interpreting processes that I might look into.