In 1974, the cover of Ted Nelson’s Computer Lib / Dream Machines proclaimed, “You can and must understand computers NOW.”
Nelson’s book (mentioned earlier in this chapter’s discussion of Expressive AI) wasn’t a response to the ubiquity of computers. In fact, Nelson’s book was published the year before the first personal computer kit — the Altair — became available. At that time the number of computers was tiny by comparison with our present moment. Those that existed were carefully tended by teams of professionals (what Nelson called the computer “priesthood”), and determining their uses was considered the province of specialists. Computing seemed a long way from everyday life.
But — as Nelson’s book set out to expose — computing was already woven into the fabric of society. He pointed to the signs in everyday life (from computer-generated junk mail to the soon-to-arrive phenomenon of bar codes on consumer products) as well as the less-obvious changes as every large organization switched to computer record keeping (often restructuring it according to the ideas of, and leaving it accessible only to, the computing priesthood), computer simulation began to arise as a decision-making aid, and so on. Further, Nelson could already see in the field of computer graphics that computers would revolutionize our media — and understood that this, too, had political stakes. He wrote, “It matters because we live in media, as fish live in water.... But today, at this moment, we can and must design the media, design the molecules of our new water, and I believe the details of this design matter very deeply” (DM 2).
His response was a program of education, enacted through his book, in what we might today call “procedural literacy.” He sought to teach people to understand how computing works and what its processes make possible. Computer scientists such as Alan Kay and Seymour Papert pursued similar goals by developing new programming languages, designed to make the authoring of computer processes more accessible (especially for children). Nelson’s book, aimed at adults, instead attempted an ambitious survey of then-current programming languages, computing hardware, data structures, input devices, and more. All of this was couched in an argument that the modern computer shouldn’t be thought of as a mathematical device but, rather — as the mathematician and computing pioneer Alan Turing had it — “a single special machine” that “can be made to do the work of all” (2004, 383).8
Nelson’s goal, in part, was to encourage readers to grasp the creative potential of computing toward their own ends. But, just as importantly, the goal was to urge readers to rise up against the computing priesthood, to stop accepting that there are “ways of doing things that computers now make necessary” (8). Stewart Brand has called Nelson “the Tom Paine of the personal-computer revolution.” One difference, of course, is that the revolutionary program for which Nelson agitated — “Computer power to the people! Down with cybercrud!” (3) — remains largely unrealized, though groups of agitators continue to push for it today.
A form of such agitation is, in fact, part of this book’s program. We live in a world in which the politics of computing are more important than ever, and where some of the highest-stakes processes — from those that generate terrorist watch lists to those that operate black-box voting machines — are kept in the hands of even more secretive priesthoods. In such a situation, what are we to do?
One answer, of course, is to continue down the path of procedural literacy. I fully support such efforts. But this book follows a different path.
One of the great difficulties in this area is that computer processes are often presented as, essentially, magical. Such deception is practiced by everyone from government agencies (such as DARPA’s former office of “Total Information Awareness”) to popular television shows (such as the many crime and terrorism dramas that portray facial recognition software as a reliable tool rather than an unsolved research question). We need a way to dispel the idea of magic and replace it with a basis for thinking critically about these issues. Procedural literacy only gets us part of the way. It may help more people understand how computational processes are authored, but very few people will ever personally write a system that employs techniques similar to those of a terrorist watch list generator.
In my experience, however, people are able to reason about computational processes by analogy. For example, once one understands the strengths and limitations of a set of computational techniques — in whatever context that understanding was gained — it is possible to use that knowledge to understand something of the potential for those techniques in other circumstances. This is true for most technology. As a child I learned that the technology of the wheel, when employed by my little wagon, worked better on hard, smooth surfaces than muddy, uneven ones. It was easy to generalize this knowledge to cars.
Our problem now is, in part, that processes such as those that generate terrorist watch lists are kept secret, making them difficult to evaluate critically. But the deeper problem is that many in our society do not understand a single example of a process anything like those involved and, further, have no experience in interpreting and evaluating computational processes whatsoever. It is my hope that the field of digital media, by providing a set of engaging and legible examples of computational processes, can be a starting point for changing this situation.
Consider the example of watch list generation. The specific processes may be secret, but it can be no secret that these systems rely on the techniques of artificial intelligence. We also know that the AI field’s practical toolkit consists of hand-authored rules and knowledge structures9 (on the one hand) and large-scale statistical operations (on the other).
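To make that dichotomy concrete, here is a deliberately tiny, hypothetical sketch (my own illustration, not drawn from any real system): a hand-authored rule on one side and a simple statistical scoring operation on the other. The record, features, and weights are all invented for the example.

```python
# Hypothetical illustration of the AI toolkit's two sides: a hand-authored
# rule versus a statistical operation. Nothing here resembles any real
# watch-list system; the point is only to make the dichotomy legible.

def rule_based_flag(record):
    """Hand-authored rule: a human wrote this condition explicitly."""
    return record.get("rented_truck") and record.get("bought_fertilizer")

def statistical_score(record, weights):
    """Statistical operation: sum weights (in practice, learned from
    sample data) for whichever features the record exhibits."""
    return sum(w for feature, w in weights.items() if record.get(feature))

record = {"rented_truck": True, "bought_fertilizer": True}
weights = {"rented_truck": 0.3, "bought_fertilizer": 0.4}  # invented numbers

print(rule_based_flag(record))                       # prints: True
print(round(statistical_score(record, weights), 2))  # prints: 0.7
```

Even this toy makes the limitations visible: the rule flags every gardener with a moving van, and the score is only as good as weights learned from whatever scarce, unrepresentative data was available.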
This provides us with some knowledge, but another route to knowing is cut off. In addition to the specific processes being kept secret, the output of these systems is highly illegible. For the most part (for obvious reasons) the contents of the lists are kept from the public. For those few suspects made public, apart from a few obviously ridiculous errors, it is hard to say which are and which are not correctly identified. If the suspect is lucky enough to get to trial (rather than being held in an inaccessible detention facility or secretly transported to another country for torture) even that may not clarify whether their classification as a suspected terrorist was appropriate.
But many processes that employ AI techniques are much more legible, such as those used for media and communication. The developers of such systems commonly publish descriptions of how they operate in conference papers, journal articles, patent applications, doctoral dissertations, technical books, and so on. Further, the outputs of these systems are generally much easier to evaluate. For example, the field of Natural Language Processing (NLP) uses many of the same AI techniques — and it is generally easy to tell when a computer system produces nonsensical language or completely misunderstands the meaning of our utterances.
Also, as we come to understand NLP systems more deeply, we see that they only work in very limited circumstances. Whether creating a computer game or a phone tree, the designers of NLP systems work hard to limit the discourse context in order to improve the chances of the system working appropriately. And all this is necessary to improve the chances of carrying off a task much easier than the generation of a terrorist watch list. It is much, much easier to draw the intended meaning out of a well-defined bundle of text (employing knowledge from analyzing a huge amount of correct sample data) than it is to try to draw the intention to commit terrorism out of an ill-defined sea of variably structured data about human behavior (after analyzing the small amount of available information about the behavior of known terrorists).
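One way to see what “limiting the discourse context” means in practice is a toy phone-tree sketch (an invented example, not any deployed system): at each step the system only has to distinguish among a few expected responses, and everything else falls through to a reprompt.

```python
# Hypothetical phone-tree sketch: the designer shrinks the discourse
# context so the system only needs to tell a few expected intents apart.

def phone_tree_intent(utterance):
    """Map a caller's utterance onto one of a few designed-for intents;
    anything outside the limited context triggers a reprompt."""
    expected = {
        "balance": "account_balance",
        "agent": "transfer_to_agent",
        "hours": "opening_hours",
    }
    text = utterance.lower()
    for keyword, intent in expected.items():
        if keyword in text:
            return intent
    return "reprompt"  # outside the designed context: ask again

print(phone_tree_intent("What are your hours?"))   # prints: opening_hours
print(phone_tree_intent("Do I look like a cop?"))  # prints: reprompt
```

The keyword trick that works here collapses as soon as the context widens, which is the point: these designers succeed by shrinking the problem, an option unavailable to anyone trying to classify open-ended human behavior.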
This much thinking lets us know that automatic terrorist watch list generation is likely to fail. But that doesn’t mean it won’t be attempted. And so it is politically important to be able to think about the ways it will fail. My hope is that, through knowledge of analogous example processes that have legible intents and outputs, we will be able to dismiss the idea of magical computer systems and instead think meaningfully about the potentials and limitations of such systems.10
This is a project to which I aim to contribute with this book. The coming chapters explore a variety of legible, media-focused examples of processes, including a computer game research project that employs techniques analogous to those promoted by the Total Information Awareness office. My hope is that more work in digital media will turn toward the investigation of processes and the potential for their interpretation — and that this will, together with increasing procedural literacy, help us make more informed decisions at the intersection of processes and politics.
My work here, in turn, is a complement to the exploration of another intersection of politics and processes: the use of computational processes as persuasive speech. While figures like Nelson and Sherry Turkle (1995) have discussed this for several decades, often in the context of environmental and urban planning simulations, the first detailed, full-length study is Ian Bogost’s Persuasive Games: The Expressive Power of Videogames (2007).
As the title indicates, Bogost’s book is about computer games — especially those designed for use in politics, education, and advertising. But the book’s implications are wider. Bogost uses games as a specific set of examples for the development of a larger argument about what he terms “procedural rhetoric, the art of persuasion through rule-based representations and interactions rather than spoken word, writing, images, or moving pictures.”
In particular, Bogost points to how digital media can make powerful arguments about how things work. Games and other procedural systems can use processes to create a representation of something that happens in our world — from the growth of cities to the marketing of cereal to the mechanisms of long-term debt. Playing the game involves interacting with this representation, which uses its internal processes to exhibit different behaviors, making it possible to explore the particular model of how things work.
As educators such as James Paul Gee (2004) have pointed out, successfully playing games involves learning. Prominent game designer Raph Koster has pointed to such learning as central to our experience of fun in games (2004). Bogost draws our attention to how this learning develops some understanding of the structure of the procedural representation at work — but not necessarily any critical approach to the model or the way it is presented. Addressing this is one of the primary goals of procedural rhetoric. Just as oral rhetoric aims both to help speakers persuade through speech and to help audiences understand and think critically about oral arguments, procedural rhetoric aims to help authors and audiences of process-based arguments.
My concerns here have some connection to those of Bogost and Gee. But to explore them further will require some consideration of the role of the audience.
8. Of course, neither Turing nor Nelson is talking about “machines” such as washing machines (though these now commonly contain computers). Rather, Turing is talking about the capacity for universal computing machines, such as today’s personal computers and game consoles, to carry out all the work previously done by special-purpose computers: predicting tides, solving differential equations, playing chess endgames, and so on. Nelson’s point is that this means there is no one “computer way” of doing things. Rather, the design of each piece of software is the result of a series of social choices as much as technological necessities. As Nick Montfort points out in a comment on the Grand Text Auto blog, these features of universal computation are distinct from the idea of the computer as, in Alan Kay’s term, a “metamedium” (2007). While authors such as Nelson and Kay certainly emphasize the importance of universal computation to a computer’s media capabilities, those capabilities also depend on other elements of the system (e.g., speakers for producing sound, a display capable of showing a smooth series of images).
9. These knowledge structures are, within the AI field, commonly referred to as “ontologies.”
10. This ability to reason by analogy about computational processes that are kept secret is perhaps part of the reason that computer scientists have emerged as some of the most important critical thinkers on these matters. For example, two of the most high-profile leaders of the movement against black box voting machines (computer ballot boxes operating using secret processes and with no paper trail to allow auditing of their outputs) are David Dill (Professor of Computer Science at Stanford University) and Aviel Rubin (Professor of Computer Science at Johns Hopkins University).