February 11, 2008

EP 4.1: Implementable Models

by Noah Wardrip-Fruin · 7:03 am

Games are systems — and these systems have varying relationships with the everyday world. Hopscotch, for example, is made up of a small number of rules that structure full-body actions in the everyday world. Most of the challenge of play comes from the way the game’s space is demarcated on the ground, the balance properties of the human body, and the physics of planet Earth. Scrabble, on the other hand, is challenging because of the rules for what happens on the board (rather than being a physical challenge, as we can see by the fact that it would be permissible for another player to arrange my tiles on the board for me, under my direction), but the nature of this challenge is shaped by our knowledge of the English language. And Monopoly relates to our everyday world not, primarily, through the motion of our bodies or our knowledge of facts outside the game, but by being a representation — a model — of the economic system under which it was produced: capitalism.

Like traditional games, computer games are also systems. Some are quite close to traditional games — like Dance Dance Revolution, which requires quick movement over a pressure-sensitive surface in time with on-screen instructions, essentially creating a computer-driven version of the traditional sort of full-body play found in Hopscotch, Simon Says, and Red Light / Green Light. But most computer games are closer to Monopoly; the game play challenges, while they may require physical dexterity, are represented on the work’s surface within a world modeled by the game’s systems.

In constructing the models that make up game worlds, certain approaches are common. As discussed in the introduction, collision detection is a common operational logic for the models of space in games. Similarly, as discussed in the previous chapter, finite-state machines are a common operational logic for non-player character behavior.
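
To make these two logics concrete, here is a minimal sketch in Python: an axis-aligned bounding-box collision test and a small finite-state machine driving a guard character. The names (boxes_collide, Guard) are hypothetical, invented for illustration rather than drawn from any actual game.

    # Two common operational logics, sketched minimally: collision
    # detection over axis-aligned bounding boxes, and a finite-state
    # machine for a simple non-player character.

    def boxes_collide(ax, ay, aw, ah, bx, by, bw, bh):
        """Return True if two axis-aligned rectangles overlap."""
        return (ax < bx + bw and bx < ax + aw and
                ay < by + bh and by < ay + ah)

    class Guard:
        """A guard that patrols, chases the player on sight, and attacks."""

        def __init__(self):
            self.state = "patrol"

        def update(self, sees_player, in_attack_range):
            # Each state lists only the transitions that can leave it.
            if self.state == "patrol" and sees_player:
                self.state = "chase"
            elif self.state == "chase":
                if in_attack_range:
                    self.state = "attack"
                elif not sees_player:
                    self.state = "patrol"
            elif self.state == "attack" and not in_attack_range:
                self.state = "chase"
            return self.state

Even this toy guard shows the shape of the simplification: a handful of named states and transitions yields behavior that is efficient and legible in play, while capturing almost none of the nuance of its everyday counterpart.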

These models aren’t selected and constructed in an attempt to capture all the nuance and detail of their counterparts in the everyday world. Rather than for fidelity, these models are selected for a number of overlapping practical reasons. First, the models employed must be specific enough to be implemented computationally. Second, the implemented models must operate with acceptable efficiency on the platform(s) targeted by the development team. Third, the development resources (especially programmer time) must be available to perform this implementation. Finally, the overall goal is defined by the fact that the game is attempting to reach an audience — the model must serve the experience of gameplay sought by the game’s authors.

As Phil Agre explains in Computation and Human Experience (1997), artificial intelligence researchers are also deeply engaged in making systems — but as a way of knowing. This changes a number of things. For example, it is fine for an AI research system to require a massive computer cluster in order to function, rather than a standard game console. Also, it is fine if the results of an AI research project can only be understood by specialists, rather than appreciated by a mainstream audience. But one fundamental thing does not change: AI researchers and game creators are interested in models of the world, and behavior within it, that can be implemented. They require models that can be operationalized computationally, and this creates a bridge between the two groups.

At the same time, AI research also has close connections to other disciplines — those that seek to understand, with their own tools, the same topics it investigates with its way of knowing. These include psychology, linguistics, and cognitive science. In this context, the question for AI, and for these other disciplines in their connections with AI, is: “What do we hope to learn by making models?”

This question is immense, of course, as is the question to which it leads: “How do we evaluate these computational models?” This chapter will not attempt large-scale answers to these questions. Rather, they will be addressed, partially, through the pursuit of a more focused question: “Faced with the dilemmas of the Eliza effect, what could be a next step?” This, in turn, will be considered through the examination of another influential AI system, one which Weizenbaum saw as representing a possible future direction for Eliza: Robert Abelson’s “ideology machine.” But, first, it is important to understand the view of computational models that Weizenbaum was reacting against.

4 Responses to “EP 4.1: Implementable Models”


  1. Benjamin Grandis Says:

    Another very minor stylistic nit-pick here; starting this paragraph with “But” reads strangely when the preceding paragraph had its second half begun under the same word. It leads to that same strange double-turnaround feeling as when someone starts two consecutive sentences with “However…”

  2. Bryan Says:

    “But most computer games are closer to _Monopoly_, in that the game play challenges, while they may require physical dexterity, are represented on the work’s surface within a world modeled by the game’s systems.”

    Maybe this sentence would be best split into two independent clauses, for clarity’s sake?

    “But most computer games are closer to Monopoly; the game play challenges, while they require physical dexterity, are represented on the work’s surface within a world modeled by the game’s systems.”

  3. noah Says:

    Your version definitely seems more readable. Thanks.

  4. noah Says:

    Yes, and in this case it doesn’t really need to be a word like “but,” “yet,” or “however.” Probably best to replace the “But” with something like “At the same time.”
