March 3, 2008

EP 6.5: Beyond Anthropomorphic Intelligence

by Noah Wardrip-Fruin · 6:12 am

Given the history of AI, it is no surprise that systems such as Tale-Spin and Minstrel were built to embody models of human cognition. The assumption that human and machine processes should — or must — resemble each other runs deep in AI. It continues to this day, despite the counter-example of statistical AI.

With Tale-Spin and Minstrel both emerging from the “scruffy” end of symbolic AI, we might assume that this area of AI was particularly given to building its systems on human models. And perhaps it is true that a neat researcher would not have made Turner’s opening assumption from his description of Minstrel: “To build a computer program to tell stories, we must understand and model the processes an author uses to achieve his goals” (1994, 3). As discussed earlier in connection with Strips, scruffy researchers sought to claim for their approach greater fidelity to the way that humans approach problems, arguing that people think differently in different problem domains. Riesbeck and Schank put it this way:

The problem that general problem solver programs have is that each and every problem presented to them has to be handled the same way. Each problem must be reduced to first principles and logical deductions made from those principles. Even mathematicians do not really behave this way. (Riesbeck and Schank, 1989, 9)

But to view the issue in these terms is to consider only one level. At another level, both scruffy and neat AI are based on the assumptions of cognitive science — especially that effective problem solving for machines should be based on the same model of “symbolic processing” assumed to be at work in humans. As Paul N. Edwards puts it in The Closed World, “cognitive science views problems of thinking, reasoning, and perception as general issues of symbolic processing, transcending distinctions between humans and computers, humans and animals, and living and nonliving systems” (1997, 19). To put it another way, whatever their differences, scruffy and neat researchers still speak of their computer systems in terms of human-derived symbolic formulations such as beliefs, goals, plans, rules, and so on.

And it is on the basis of these formulations that both scruffy and neat AI researchers have been extensively critiqued by more recent thinkers — with the result that their work is now lumped together under terms such as “classical AI” or “GOFAI” (for “Good Old-Fashioned AI”). But many of the critics who locate themselves within AI view the problem with symbolic techniques as a lack of accuracy regarding how humans operate. For example, Phil Agre, whose critique of Strips was discussed earlier, characterizes symbolic AI’s problems by saying, “Mistaken ideas about human nature lead to recurring patterns of technical difficulty” (1997, xii). His alternative approach to AI is founded on a different view of human activity (7), rather than abandoning the anthropomorphic model for AI processes.

A more radical critique may appear to emerge from the current head of MIT’s venerable AI laboratory, Rodney Brooks. In his widely cited “Elephants Don’t Play Chess” Brooks writes: “In this paper we argue that the symbol system hypothesis upon which classical AI is based is fundamentally flawed, and as such imposes severe limitations on the fitness of its progeny” (1990, 3). The alternative championed by Brooks is focused on AI that operates through combinations of simple mechanisms embodied in robots in real-world contexts (and focuses on interaction within these contexts). Nevertheless, after building a number of insect-like robots based on this research agenda, Brooks turned — inevitably? — to the creation of humanoid robots meant to eventually exhibit humanoid intelligence. The banner headline on the webpage for the most famous of these projects, Cog, reads, “The motivation behind creating Cog is the hypothesis that: Humanoid intelligence requires humanoid interactions with the world” (MIT Humanoid Robotics Group, 2003). The anthropomorphic view of AI asserts itself again.

Models and authorship

From the point of view of authoring, there is something quite puzzling about all this. To a writer it is obvious that people in stories do not talk the way people do in everyday life, that their lives as reported in stories contain only a tiny subset of what would go on if they had real lives, and so on. A system that operated largely the way a real person thinks might not be much help in telling a story: most real people aren’t that good at storytelling (so simulating an author is out), and characters shouldn’t act like real people (real people don’t take the stylized, coordinated actions that dramatic characters perform).

Accurate modeling is simply not what authors are after. This is true even in areas such as computer game physics. While first-person 3D games (such as Half-Life 2) may incorporate very natural-seeming models of space, movement, gravity, friction, and so on, each of these elements has been carefully shaped (along with each of the environments in which they are active) for the game experience. Half-Life 2 may be more like the real world than is a game like Super Mario Brothers, but a deep model of the world, operating just like the real world, is not only unachievable but also not an important step along the path to either one.

The question is which models are useful for authoring. The answers are not always obvious. For example, while simple generation from a statistical model — as with Shannon’s bigram-produced sentence — is a very limited approach to linguistic behavior, it is also not the only possible approach to authoring that uses such models. Nitrogen, for instance, was an ambitious research project (developed by Eduard Hovy’s group at USC) aimed at using n-gram models in combination with logical and syntactic models for generating language (Langkilde and Knight, 1998). Given a logical message to translate into text, it would promiscuously produce a lattice of grammatical structures, then use an n-gram model to rate pathways through the lattice. This experiment in combining traditional and statistical AI produced quite promising early results.
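The Nitrogen strategy can be sketched in miniature: enumerate the paths through a (here, trivially small) lattice of grammatical alternatives, then rank them with a bigram model. Everything below — the word list, the scores, the function names — is invented for illustration and bears no relation to Nitrogen’s actual data or code.

```python
from itertools import product

# Toy bigram "model": log-probability-like scores for word pairs.
# These numbers are made up for the sake of the example.
BIGRAM_SCORES = {
    ("<s>", "the"): -0.5, ("the", "visitors"): -1.0,
    ("the", "visitor"): -2.5, ("visitors", "arrived"): -0.8,
    ("visitor", "arrived"): -0.9, ("arrived", "</s>"): -0.2,
}

def path_score(words):
    """Sum bigram scores along a sentence, padded with boundary markers."""
    padded = ["<s>"] + words + ["</s>"]
    return sum(BIGRAM_SCORES.get(pair, -5.0)  # unseen pairs get a penalty
               for pair in zip(padded, padded[1:]))

def best_path(lattice):
    """Rank every path through a slot-based lattice by bigram score."""
    candidates = [list(p) for p in product(*lattice)]
    return max(candidates, key=path_score)

# A tiny lattice: each slot lists grammatically licensed alternatives,
# standing in for the structures a grammar would generate.
lattice = [["the"], ["visitors", "visitor"], ["arrived"]]
print(best_path(lattice))  # → ['the', 'visitors', 'arrived']
```

The grammar proposes, the corpus disposes: the symbolic component overgenerates, and the statistical component does the choosing — which is the division of labor the paragraph above describes.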

Similarly, the game Black & White (Molyneux et al., 2001) employs a hybrid architecture to create one of the most successful character experiences in modern gaming. Each Black & White player is a god, made more powerful by the allegiance of human followers — but with a much closer relationship with a non-human “Creature” who can be a powerful ally and focus of empathy, but must be properly trained. In designing the AI to drive these Creatures, Richard Evans wanted to include both a model of the world and a model of the player’s view of the world (deduced by ascribing goals to the player that explain the player’s actions). Such models are easier to understand and employ if represented in traditional, symbolic (GOFAI) forms. At the same time, Evans wanted a Creature’s learning (from observing the player and NPCs, player feedback after actions, and player commands) to have the “fuzzy” sense, and vast number of possible states, achieved in more recent, numerically-focused AI. As a result, he adopted an approach he calls “Representational Promiscuity,” in which “beliefs about individual objects are represented symbolically, as a list of attribute-value pairs; opinions about types of objects are represented as decision-trees; desires are represented as perceptrons; and intentions are represented as plans” (2002, 567, 570). The result is not only an impressive character but also a system that re-positions the provision of training data for numerically-driven AI. Rather than something that happens during system development, it is something that happens during audience interaction, becoming the very method by which the player’s relationship with Black & White’s main on-screen character develops.
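Evans’s “Representational Promiscuity” can be glossed with a toy sketch: symbolic attribute-value beliefs sitting alongside a perceptron-style desire whose weights are trained by player feedback during play. The class, the drives, and every number below are hypothetical, not drawn from Black & White’s implementation.

```python
class Creature:
    def __init__(self):
        # Beliefs about individual objects: symbolic attribute-value pairs.
        self.beliefs = {"villager_7": {"type": "villager", "distance": 4.0}}
        # A desire (say, hunger) as a perceptron: weighted sum of drives.
        self.hunger_weights = {"energy_deficit": 0.8, "food_nearby": 0.3}

    def desire_strength(self, inputs):
        """Perceptron-style desire: weighted sum clamped to [0, 1]."""
        total = sum(self.hunger_weights[k] * v for k, v in inputs.items())
        return max(0.0, min(1.0, total))

    def learn_from_feedback(self, inputs, reward, rate=0.1):
        """Player feedback nudges the perceptron weights — training data
        supplied during play, as the paragraph above notes."""
        error = reward - self.desire_strength(inputs)
        for k, v in inputs.items():
            self.hunger_weights[k] += rate * error * v

c = Creature()
drives = {"energy_deficit": 0.5, "food_nearby": 1.0}
before = c.desire_strength(drives)
c.learn_from_feedback(drives, reward=1.0)  # player strokes the Creature
after = c.desire_strength(drives)          # desire strengthened by feedback
```

The point of the sketch is the coexistence: the `beliefs` dictionary is readable and hand-authorable, while the perceptron weights shift gradually under interaction — two representations, one mind.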

More speculatively, a related kind of hybrid system might address another difficulty identified in this chapter. While Minstrel’s TRAMs proved problematic in the context of a system meant to autonomously produce stories — given problems of common-sense reasoning — we can imagine TRAMs employed in a very different context. For example, consider what Rafael Pérez y Pérez and Mike Sharples write in their evaluation of Minstrel:

[T]he reader can imagine a Knight who is sewing his socks and pricked himself by accident; in this case, because the action of sewing produced an injury to the Knight, Minstrel would treat sewing as a method to kill someone. (2004, 21)

I quoted this on the blog Grand Text Auto, leading Turner to reply:

I actually find this an (unintentionally) wonderful example of creativity, and exactly the sort of thing Minstrel ought to be capable of creating. There’s an Irish folk song in which a woman imprisons her husband by sewing him into the bedsheets while he sleeps. Doesn’t that show exactly the same creative process (magnifying a small effect to create a large one)? (Turner, 2007)

Many people also find Minstrel’s story of a hungry knight killing and eating a princess (adapting a story about a dragon) a quite amusing example of creativity. On the other hand, the problems produced by PAT:Pride were comparatively uninteresting. The solution, here, might be to use a TRAM-style model in a system that operates in close interaction with the audience, so that it is the humans who provide the necessary common-sense reasoning. In a Minstrel designed this way, the audience could choose whether to steer toward or away from traditional stories, surreal stories, and nonsense. Of course, like the earlier example of an imagined Eliza/Doctor that employs the techniques of Abelson’s ideology machine, considering this speculative system won’t tell us nearly as much as examining actually constructed ones.
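A minimal sketch of what such an audience-in-the-loop step might look like: the system generalizes a goal it cannot satisfy directly, recalls a related schema, adapts it, and hands the candidate to the audience, whose accept/reject judgment stands in for common-sense reasoning. The schema table, abstraction mapping, and function names are all invented for illustration; Minstrel’s actual TRAMs are far richer.

```python
# Known story schemas, indexed by (goal, method) — a deliberately tiny
# stand-in for Minstrel's episodic memory.
SCHEMAS = {
    ("injure", "sewing"): "a knight pricks himself while sewing",
}
GENERALIZE = {"kill": "injure"}  # toy abstraction: kill is a kind of injure

def tram_propose(goal, accept):
    """Try direct recall; failing that, transform the goal, recall a
    schema for the weakened goal, adapt it, and let the audience's
    accept() callback supply the common sense."""
    for g in (goal, GENERALIZE.get(goal)):
        for (schema_goal, method), episode in SCHEMAS.items():
            if schema_goal == g:
                candidate = f"{method} as a way to {goal} ({episode})"
                if accept(candidate):
                    return candidate
    return None

# An audience steering toward the surreal keeps the sewing-as-murder idea;
# one steering toward traditional stories rejects it, and the system
# produces nothing rather than nonsense.
print(tram_propose("kill", accept=lambda s: True))
print(tram_propose("kill", accept=lambda s: False))  # → None
```

Whether the pricked-finger transformation reads as folk-song creativity or as nonsense is exactly the judgment this design delegates to the humans in the loop.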

To put the lessons of this chapter another way, the issue for authors is not whether models arise from attempts to simulate human intelligence, from statistical traditions, or from other directions. The issue, as demonstrated by Orkin’s work on F.E.A.R. and Evans’s on Black & White, is how models — whatever their original source — can be employed toward current authorial goals in a manner that acknowledges their limitations. The next chapter considers systems with this more pragmatic approach to story generation.

For our culture more generally, the lesson is rather different. Minstrel and The Restaurant Game are legible examples of the limitations inherent in both symbolic and statistical approaches to AI. The problems they exhibit in relatively constrained microworlds become much greater in our massively complicated, evolving, and contingent everyday world. When considering public proposals for the use of AI systems, we would do better to remember fictional worlds in which knights eat princesses and restaurants fill with pies, rather than listen to the science fictions used to support proposals for Total Information Awareness and warrantless wiretapping.

2 Responses to “EP 6.5: Beyond Anthropomorphic Intelligence”

  1. Richard Evans Says:

    This is a great summary.

  2. noah Says:

    I’m very glad to hear it!

    I’ll be much happier sending this book to press, having had the contributions of people like you, Scott, and Jeff to this review — as both creators of projects I’m discussing and people who have thought deeply about the field.
