February 15, 2008

EP 4.5: Learning from Models

by Noah Wardrip-Fruin · 7:56 am

How do we learn from making models? Phil Agre (1997) offers part of an answer for the field of artificial intelligence when he writes:

AI’s distinctive activity is building things, specifically computers and computer programs. Building things, like fieldwork and meditation and design, is a way of knowing that cannot be reduced to the reading and writing of books. To the contrary, it is an enterprise grounded in a routine daily practice. Sitting in the lab and working on gadgets or circuits or programs, it is an inescapable fact that some things can be built and other things cannot. (10)

With a less open-ended agenda, the same is, of course, also true of work in digital media. In such engagements with what Agre calls “practical reality,” a number of things are learned. This happens not only when answering the question of what can be built, but also in unexpected moments: moments that occur when trying to build systems, when interacting with those systems, when observing others interacting with those systems, and so on.

If AI pursues system building as a way of knowing, if this is one approach to how we learn from models, the question still remains, “What do we want to know?” Weizenbaum’s contemporaries, wherever they fell along the continuum between scruffy and neat, generally assumed the answer to be, “Can we build a system that exhibits genuinely intelligent behavior?”

But this is not all it is possible to learn. This book’s chapter on the Eliza effect was precisely an exploration of some other areas in which we have learned, and can still learn, from Weizenbaum’s early AI system. We don’t learn from Eliza because it is a testable model of how human cognition may actually work (or because it is genuinely intelligent in any way). But one way we do learn from it is by studying how humans interact with its simple model, just as Garfinkel’s yes/no therapy experiment was informative for what it revealed about humans interacting with the system. We have also learned important things in a wide variety of other areas: interpretations of computational systems, the limits of relatively state-free designs for interaction, and what can make a computer character compelling.
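To make “simple model” concrete: Eliza worked by spotting keywords in the user’s input and reassembling fragments of it into canned response templates. The sketch below is in that spirit only; the handful of rules and the pronoun-swap table are invented for illustration, and Weizenbaum’s actual DOCTOR script was larger, with ranked keywords and decomposition patterns.

```python
import random
import re

# Invented rules in the spirit of Eliza's keyword-and-reassembly approach.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

# Crude first/second-person swap, as Eliza's reassembly rules performed.
SWAPS = {"my": "your", "i": "you", "am": "are", "your": "my", "you": "I"}

def reflect(fragment):
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(utterance):
    # Each response depends only on the current utterance; nothing is
    # remembered between turns. This is the "relatively state-free"
    # design mentioned above.
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*map(reflect, match.groups()))
    return random.choice(FALLBACKS)

print(respond("I am unhappy at my job"))
# e.g. "Why do you say you are unhappy at your job?"
```

Even at this scale, the design choice is visible: all apparent conversational intelligence is produced by surface transformation, which is part of why studying how people respond to it has proven so instructive.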

On the other hand, the speculative future Eliza discussed in this chapter, imagined as engaging the results of Abelson’s ideology machine, produced nothing unexpected. Since it was never constructed, since it is an AI project never engaged through AI’s way of knowing, nothing was learned. In essence, it is like a chemistry experiment specified only to the point of listing the substances to place on the lab table. The learning would begin only if, and when, the chemicals were mixed.

In other words, AI’s way of knowing — the building of computational artifacts — is a powerful approach. But it is much less effective when pursued halfway, and the general assumption of what it can help us know has been too limited. I am not the first to make this observation. A number of AI researchers have already turned to results such as Eliza’s as possible answers to the question of what we hope to learn through AI’s way of knowing. But we can also search for such lessons, not yet acknowledged, in AI projects that were pursued with another goal in mind. The coming chapters will take this approach, considering a series of story generation projects in three ways: on their own terms, as telling examples of particular technological approaches and moments in our technological history, and as sources of insight for authors of digital fictions and games.

All of these projects will, by some measures, move beyond the previously discussed models found in commercial games: quest flags, dialogue trees, and finite state machines (sketched briefly below). At the same time, they will also be far from engaging another “practical reality,” that of the audience, which must be considered for all digital media. In fact, it is precisely this characteristic of one system that gives rise to the next general effect I wish to discuss: the Tale-Spin effect.
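Since the coming chapters measure story generation projects against these commercial-game models, one compressed illustration of all three may help readers who have not encountered them. Everything here is hypothetical: the guard, the hermit, and the key are invented stand-ins, not examples from any particular game.

```python
# Quest flags: bare booleans recording what the player has done.
quest_flags = {"met_hermit": False, "has_key": False}

# Dialogue tree: each node is a line of speech plus a fixed mapping
# from player replies to follow-up nodes.
dialogue_tree = {
    "start":  ("Who goes there?", {"A friend.": "friend",
                                   "None of your business.": "end"}),
    "friend": ("Then take this key.", {}),
    "end":    ("Begone!", {}),
}
prompt, replies = dialogue_tree["start"]
node = replies["A friend."]  # the player's choice jumps to a fixed node

# Finite state machine: a table mapping (state, event) to the next state.
guard_fsm = {
    ("patrolling", "sees_player"):    "chasing",
    ("chasing",    "loses_player"):   "patrolling",
    ("chasing",    "catches_player"): "fighting",
}
state = "patrolling"
for event in ["sees_player", "loses_player"]:
    state = guard_fsm.get((state, event), state)
print(node, state)  # "friend patrolling"
```

What all three share is that every behavior is enumerated in advance by the author; nothing is generated, which is exactly the boundary the story generation projects in the coming chapters attempt to cross.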
