February 22, 2008
Of course, the Tale-Spin effect, as described above, mainly considers Tale-Spin as a piece of media. But, in its context at Yale, it was positioned as something else — or something more. As Meehan emphasizes repeatedly in his dissertation, the structures of Tale-Spin were not chosen because they were the most efficient way to have a computer output a story. If this were the goal, some method like that of Klein’s “automatic novel writer” would have been appropriate. Instead, Tale-Spin was meant to operate as a simulation of human behavior, based on the then-current cognitive science ideas of Schank and Abelson.
Turning to Tale-Spin from this perspective, some additional issues bear discussion. For example, recall the moment in this chapter’s example Tale-Spin story when George Bird, seeing no advantage to himself in answering Arthur Bear about the location of honey, decides to answer nonetheless. Tale-Spin doesn’t decide whether George will answer by simply asking the audience. Rather, it decides based on how kind George is. The audience responds that he is “somewhat” kind, so he decides to give Arthur an answer “out of the goodness of his little heart,” as Meehan puts it. But when George Bird calls Do-MTrans, this motivation isn’t communicated — and Do-MTrans decides to ask the audience whether George feels deceptive toward Arthur. The answer is “a lot,” so George ends up lying to Arthur about the location of honey out of the goodness of his heart. This isn’t a simulation of George thinking Arthur needs to diet, but a breakdown in the simulation — though Meehan passes over it without comment.
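The breakdown described above — a decision made for one reason, carried out by a procedure that has no memory of that reason — can be sketched in a few lines of code. This is not Meehan's actual implementation (Tale-Spin was written in MLISP, and consulted the audience rather than stored traits); the function and trait names here are hypothetical, chosen only to illustrate how a motivation established in one planning step can be lost by the next:

```python
# Hypothetical sketch of the context loss described above (not Meehan's
# actual code). Stored character traits stand in for Tale-Spin's
# practice of asking the audience; all names are illustrative.

def answer_question(speaker, hearer, fact):
    """Decide whether to answer at all, based on kindness."""
    if speaker["kindness"] in ("somewhat", "a lot"):
        # George answers "out of the goodness of his little heart" --
        # but that motivation is not passed along to do_mtrans.
        return do_mtrans(speaker, hearer, fact)
    return None

def do_mtrans(speaker, hearer, fact):
    """Communicate a fact, independently re-checking deceptiveness,
    with no memory of why the communication was undertaken."""
    if speaker["deceptive_toward"].get(hearer) == "a lot":
        return negate(fact)  # a kind decision yields an unkind lie
    return fact

def negate(fact):
    """Turn a fact into its (false) negation."""
    return ("not",) + fact

george = {"kindness": "somewhat", "deceptive_toward": {"arthur": "a lot"}}
honey = ("honey", "in", "tree")

print(answer_question(george, "arthur", honey))
# → ('not', 'honey', 'in', 'tree')
# George, having decided to answer out of kindness, ends up lying.
```

The point of the sketch is structural: because each procedure re-derives the character's disposition in isolation, no single tweak to one procedure can restore the coherence between them.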
Simulation problems become even more apparent when Meehan gives us examples of Tale-Spin fictions generated outside its usual microworld of talking animals — in a world of interacting human characters. For example, in an appendix, Meehan provides the story of a person whose problem is Sigma-Sex. Here the strangeness becomes so apparent that Meehan cannot gloss over it, but his diagnosis is odd. Here are excerpts from the story:
Once upon a time Joe Newton was in a chair. Maggie Smith was in a chair. Maggie knew that Joe was in the chair. One day Maggie was horny. Maggie loved Joe. Maggie wanted Joe to fool around with Maggie. Maggie was honest with Joe. Maggie wasn’t competitive with Joe. Maggie thought that Joe loved her. Maggie thought that Joe was honest with her. Maggie wanted to ask Joe whether Joe would fool around with Maggie…. [She travels to where he is and asks him. His relationship with her (competition, honesty, etc.) is defined, he makes the necessary inferences, and agrees. They each travel to Joe’s bed. Then…] Joe fooled around with Maggie. Joe became happier. Maggie became happier. Joe was not horny. Joe thought that Maggie was not horny. Joe was wiped out. Joe thought that Maggie was wiped out…. [Maggie makes all the same inferences, and then, because she’s wiped out, this results in SIGMA-REST, and…] Maggie wanted to get near her bed. Maggie walked from Joe’s bed across the bedroom down the hall via the stairs down the hall across the living room down the hall via the stairs down the hall down the hall through the valley down the hall across a bedroom to her bed. Maggie went to sleep. Joe went to sleep. The end. (229–230, original in all caps)
Meehan comments, “The least Joe could have done would be to let poor Maggie sleep in his bed” (230) — as though the problem with the story lies in the design of Sigma-Sex. We might want an instance of Sigma-Rest that follows a successful Sigma-Sex to know its context, and to have the characters sleep in the same place. But Tale-Spin does not work that way. This is much like George Bird deciding to answer Arthur Bear out of kindness, only to have the loss of that context turn his answer into an unkind lie. The relative autonomy of the possible worlds projected by each step in Tale-Spin’s planbox-based planning procedures won’t allow this sort of problem to be resolved with a simple tweak to one element, like Sigma-Sex.
Further, the problems with Tale-Spin telling stories of love don’t just lie in the structure of its planning processes. Consider the following statement from Meehan’s dissertation in terms of our culture’s stories of love — for example, any Hepburn and Tracy movie:
“John loves Mary” is actually shorthand for “John believes that he loves Mary.” … I’m not sure it means anything — in the technical sense — to say that John loves Mary but he doesn’t believe that he does. If it does, it’s very subtle. (64)
In fact, it is not subtle at all. It is a significant plot element of the majority of romantic novels, television shows, and movies produced each year. But from within Meehan’s context his conclusion is perfectly rational. If John doesn’t know that he loves Mary, then he cannot use that knowledge in formulating any conscious plans — and in Tale-Spin anything that isn’t part of conscious planning might as well not exist.
This blindness to all but planning — this assumption that planning is at the center of life — was far from unique to the work being done at Yale. Within the wider AI and cognitive science community, at this time, the understanding and generation of plans was essentially the sole focus of work on intelligent action. Debate, such as that between “neat” and “scruffy” researchers, centered on what kind of planning to pursue, how to organize it, and so on — not on whether planning deserved its central place as a topic for attention. This was in part due to the field’s technical commitments, and in part the legacy of a long tradition in the human sciences. Lucy Suchman, writing a decade later in her book Plans and Situated Actions (1987), put it this way:
The view, that purposeful action is determined by plans, is deeply rooted in the Western human sciences as the correct model of the rational actor. The logical form of plans makes them attractive for the purpose of constructing a computational model of action, to the extent that for those fields devoted to what is now called cognitive science, the analysis and synthesis of plans effectively constitute the study of action. (ix–x)
This view has, over the last few decades, come under widespread attack from both outside and within AI. As Suchman puts it, “Just as it would seem absurd to claim that a map in some strong sense controlled the traveler’s movements through the world, it is wrong to imagine plans as controlling action” (189). As this has happened — and particularly as the mid-1970s theories of Schank, Abelson, and Meehan have moved into AI’s disciplinary history — Tale-Spin has in some sense lost its status as a simulation. There’s no one left who believes that it represents a simulation of how actual people behave in the world.
As this has taken place, Tale-Spin has become, I would argue, more interesting as a fiction. It can no longer be regarded as an accurate simulation of human planning behavior, with a layer of semi-successful storytelling on top of it. Rather, its entire set of operations is now revealed as an authored artifact — as an expression, through process and data, of the particular and idiosyncratic view of humanity that its author and his research group once held. Once we see it this way, it becomes a new kind of fiction, particularly appreciable in two ways. First, it provides us with a two-sided pleasure that we might name “alterity in the exaggerated familiar” — one that recalls the fictions of Calvino’s Invisible Cities. Second, it provides an insight, and a cautionary tale, that helps us see the very act of simulation-building in a new light. A simulation of human behavior is always an encoding of the beliefs and biases of its authors — it is never objective, it is always a fiction.