March 2, 2006
In my previous two posts (1 2) I gave some background about two story generation systems, Minstrel and Universe, and outlined the basic set of plans and goals used by Minstrel. In this post I’ll discuss the main engine Minstrel uses for creating new stories: transformation and adaptation. As we’ll see, it’s both intriguing and problematic.
At the heart of Minstrel’s approach to generating stories is Turner’s take on creativity, one which (like the structures of PATs) is based on case-based reasoning: TRAMs. These “Transform-Recall-Adapt Methods” are a way of finding cases in the system’s memory that are related to the current situation and adapting elements of these previous cases for new uses. In this way stories can be generated that illustrate a particular theme without reusing previous stories verbatim.
One example that Turner provides shows Minstrel trying to instantiate a scene of a knight committing suicide (though it is unclear which PAT this will help illustrate). Minstrel’s first TRAM is always TRAM:Standard-Problem-Solving, which attempts to use a solution that already exists in memory. This TRAM can fail in two ways. First, it is possible that there is no case in memory that matches. Second, it is possible that the matching cases in memory have already been used twice, which results in them being assessed as “boring” by the system — so a new solution must be found. For either type of failure, the next step is to transform the problem and look for a case matching the transformed problem.
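To make that control flow concrete, here is a minimal sketch of the TRAM cycle in Python. This is my own illustration, not Turner’s code (Minstrel was built on much richer schema structures); the flat dictionary representation and all helper names are invented:

```python
def matches(problem, case):
    # Placeholder recall test: every constraint in the problem must be
    # present, unchanged, in the stored episode.
    return all(case.get(key) == value for key, value in problem.items())

def assess(solution):
    # Placeholder for Minstrel's assessment of an adapted result.
    return solution is not None

def solve(problem, memory, trams, depth=0, max_depth=5):
    if depth > max_depth:
        return None
    # TRAM:Standard-Problem-Solving: direct recall of a stored episode,
    # skipping any case already used twice (and so judged "boring").
    for case in memory:
        if matches(problem, case) and case["uses"] < 2:
            case["uses"] += 1
            return case
    # On failure: Transform the problem, Recall (recursively, so other
    # TRAMs can apply to the already-transformed problem), then Adapt.
    for tram in trams:
        transformed = tram.transform(problem)
        if transformed is None:
            continue  # this TRAM does not apply to this problem
        recalled = solve(transformed, memory, trams, depth + 1, max_depth)
        if recalled is None:
            continue
        solution = tram.adapt(recalled, problem)
        if assess(solution):
            memory.append(solution)  # new episodes become future cases
            return solution
    return None
```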
In Turner’s example, Minstrel’s memory contains schemas for only two episodes. In the first, a knight fights a troll with his sword, killing the troll and being injured in the process. In the second, a princess drinks a potion and makes herself ill. Neither is a direct match for suicide, so Minstrel must transform the problem.
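In the same toy representation, the whole of Minstrel’s episodic memory for this example might look as follows. I have split the troll fight into two schemas, one for the killing and one for the knight’s injury, so that the recall failures in Turner’s account fall out naturally; the field names, and the use of "self" for a reflexive victim, are my own simplifications:

```python
memory = [
    # Episode 1: a knight fights a troll with his sword; the troll dies...
    {"actor": "knight", "act": "kill", "victim": "troll",
     "method": "fight-with-sword", "accident": False, "uses": 0},
    # ...and the knight is injured in the process, by accident.
    {"actor": "knight", "act": "injure", "victim": "self",
     "method": "fight-with-sword", "accident": True, "uses": 0},
    # Episode 2: a princess drinks a potion and makes herself ill.
    {"actor": "princess", "act": "injure", "victim": "self",
     "method": "drink-potion", "accident": False, "uses": 0},
]

# The scene to instantiate: a knight deliberately killing himself.
problem = {"actor": "knight", "act": "kill", "victim": "self",
           "accident": False}

# No stored episode satisfies every constraint, so direct recall fails
# and the problem must be transformed.
assert solve(problem, memory, trams=[]) is None
```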
One possible transformation is TRAM:Generalize-Constraint, which relaxes one of the constraints in a schema. In this case, it relaxes the requirement that the knight’s victim be the knight himself. This is the “Transform” step in a TRAM, and it is followed by the “Recall” step: the system searches for a scene of a knight killing anything — not just himself — and succeeds in finding the scene of the knight killing a troll. Since this was successful, the next step is to “Adapt” this solution to the new situation by reinstating the constraint that was relaxed. The result is then assessed, and deemed appropriate, so Minstrel determines that the knight can kill himself with his sword.
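Continuing the sketch, TRAM:Generalize-Constraint can be modeled as a transform that drops one field of the problem, paired with an adaptation that reinstates it (again a hypothetical rendering, not Turner’s implementation):

```python
import copy

class GeneralizeConstraint:
    """TRAM:Generalize-Constraint (sketch): relax one constraint before
    recall, then reinstate it during adaptation."""

    def __init__(self, field):
        self.field = field

    def transform(self, problem):
        if self.field not in problem:
            return None  # nothing to relax
        relaxed = dict(problem)
        del relaxed[self.field]  # e.g. "kill himself" becomes "kill anything"
        return relaxed

    def adapt(self, recalled, problem):
        adapted = dict(recalled, uses=0)
        adapted[self.field] = problem[self.field]  # reinstate the constraint
        return adapted

# Relaxing the victim recalls the troll fight, and adaptation
# re-specializes it: the knight can kill himself with his sword.
print(solve(problem, copy.deepcopy(memory), [GeneralizeConstraint("victim")]))
```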
But this is only the simplest use of Minstrel’s TRAMs, and the system finds other methods of suicide by a more complex route. For example, there is also TRAM:Similar-Outcomes-Partial-Change. According to Turner, this TRAM “recognizes that being killed is similar to being injured” (p. 49) and transforms the schema to one in which a knight purposefully injures himself. This, however, returns no matching cases. The knight fighting the troll is not retrieved, because that injury was accidental. The princess drinking the potion is not retrieved, because the actor was not a knight. But this does not cause Minstrel to simply give up on the direction proposed by TRAM:Similar-Outcomes-Partial-Change. Instead the TRAM process begins again, recursively, applying a different TRAM to the already-transformed problem. In this next stage, by applying TRAM:Generalize-Constraint to the actor, it is able to find the princess drinking a potion to injure herself. It adapts by reinstating the actor constraint, creating a schema for a knight drinking a potion to injure himself, and then returns to the original TRAM. This adapts by changing injuring back to killing, and the result is an event of a knight drinking a potion to kill himself. This is assessed as successful, added to the story, and added to memory so that it can be retrieved by other TRAM processes.
And that’s not all — TRAM:Similar-Outcomes-Partial-Change also helps generate another plan for suicide when used as a second-level TRAM. In this case the first-level transformation is TRAM:Intention-Switch, which changes the schema from a knight purposefully killing himself to one in which he kills himself accidentally. When this, at the next level, is transformed from death to injury, the fight with the troll is found in memory. Minstrel then produces a story of a knight going into battle in order to die. With three different methods of suicide found for the knight, Turner’s example comes to an end.
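Both of these recursive routes can be sketched in the same style. The versions below are deliberately simplified: my TRAM:Intention-Switch only rewrites deliberate acts as accidental ones, so the toy search cannot oscillate, and each route is run with just the TRAMs it needs, where Minstrel itself chose among all applicable TRAMs:

```python
class SimilarOutcomes:
    """TRAM:Similar-Outcomes-Partial-Change (sketch): being killed is
    similar to being injured, so rewrite the outcome before recall and
    restore it during adaptation."""

    def transform(self, problem):
        if problem.get("act") != "kill":
            return None
        return dict(problem, act="injure")

    def adapt(self, recalled, problem):
        return dict(recalled, act=problem["act"], uses=0)

class IntentionSwitch:
    """TRAM:Intention-Switch (sketch): rewrite a deliberate act as an
    accidental one, then restore deliberateness when adapting."""

    def transform(self, problem):
        if problem.get("accident") is not False:
            return None
        return dict(problem, accident=True)

    def adapt(self, recalled, problem):
        return dict(recalled, accident=problem["accident"], uses=0)

# Route 2: kill -> injure finds nothing; generalizing the actor then
# recalls the princess's potion: a knight drinks a potion to kill himself.
print(solve(problem, copy.deepcopy(memory),
            [SimilarOutcomes(), GeneralizeConstraint("actor")]))

# Route 3: deliberate -> accidental finds nothing; kill -> injure then
# recalls the troll-fight injury: the knight seeks death in battle.
print(solve(problem, copy.deepcopy(memory),
            [IntentionSwitch(), SimilarOutcomes()]))
```

One limit of the flat representation is visible in the third result: it comes out identical in form to the sword suicide, because a single dictionary cannot separate the plan level (the knight intends to die) from the event level (the death in battle appears accidental) the way Minstrel’s schemas could.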
Through various series of small, recursive transformations such as those outlined above, Minstrel is able to produce story events significantly different from any in its memory. While it can only elaborate as many themes as it has hand-coded PATs, with a large enough schema library it could presumably fill out the structures of those themes with a wide variety of events, creating many different stories. But enabling a wide variety of storytelling is not actually Turner’s goal. He writes: “Minstrel begins with a small amount of knowledge about the King Arthur domain, as if it had read one or two short stories about King Arthur. Using this knowledge, Minstrel is able to tell more than ten complete stories, and many more incomplete stories and story fragments” (pp. 8-9). We are told that accomplishing this requires about 17,000 lines of code for Minstrel, and another 10,000 lines of code for the tools package upon which it is built.
With such elaborate processes, requiring so much time to develop and so many lines of code to implement, why starve Minstrel for data — giving it only schemas equivalent to one or two short stories? Certainly no human storyteller was ever so starved for data. We all hear and read many, many stories before we begin to tell successful stories ourselves. And the reason is certainly not to achieve a closer connection with Minstrel’s underlying theories from cognitive science: in Schank’s CBR theories an expert — such as an expert storyteller — is someone with access to a large body of cases that are effectively indexed and retrieved.
One possible explanation is that starting with a small body of cases shows off Minstrel’s creativity to greater effect. It ensures that TRAM:Standard-Problem-Solving will be nearly useless when the program begins, so recursively built solutions will be needed almost immediately. The number of stories the system is able to create (about ten) is also clearly much larger than the number it begins with (about two).
But it is more likely that the complex, and in some ways fascinating, model of Minstrel was also exceedingly brittle. It may have produced more and more mis-spun tales as more data was added to the system, due to the unpredictable emergent behavior encouraged by the TRAM system. Turner gives some indication of this when he reports on his attempt to add a new theme after the system was complete. Unfortunately, the story produced to illustrate PAT:PRIDE is seriously flawed:
Once upon a time, a hermit named Bebe told a knight named Grunfeld that if Grunfeld fought a dragon then something bad would happen.
Grunfeld was very proud. Because he was very proud, he wanted to impress the king. Grunfeld moved to a dragon. Grunfeld fought a dragon. The dragon was destroyed, but Grunfeld was wounded. Grunfeld was wounded because he fought a knight. Grunfeld being wounded impressed the king.
(p. 240, original emphasis)
The problem arises because of the actions of a transformation called TRAM:Similar-Thwart-State, and Turner was able to revise this TRAM to remove portions of episodes it could not adapt. But it is important to remember that this problem arose with the completed system (and not an incomplete one, as with the mis-spun tales of Tale-Spin reprinted by Aarseth, Murray, Bolter, and others). A similar error occurs when a knight kills and eats a princess, adapting a plan from a dragon (p. 278). Of course, a problem such as this could also be easily solved with further changes to the system. But it seems likely that, as further data was added to the system, more emergent-behavior problems would keep cropping up. Rafael Pérez y Pérez and Mike Sharples suggest something along these lines in their evaluation of Minstrel, writing:
[T]he reader can imagine a Knight who is sewing his socks and pricked himself by accident; in this case, because the action of sewing produced an injury to the Knight, Minstrel would treat sewing as a method to kill someone.
(p. 21, “Three Computer-Based Models of Story-Telling: BRUTUS, MINSTREL and MEXICA”)
In all of these examples we can see, in Minstrel, symptoms of a much larger problem, one that Turner alone could have done little to address. By the late 1980s it was clear that AI systems in general were not living up to the expectations that had been created over the three previous decades. Many successful systems had been created — by both “neats” and “scruffies” — but all of them worked on very small sets of data. Based on these successes, significant funding had been dedicated to attempting to scale up to larger, more real-world amounts of data. But these attempts failed, perhaps most spectacularly in the once high-flying area of “expert systems.” The methods of AI had produced, rather than operational simulations of intelligence, a panoply of idiosyncratic encodings of researchers’ beliefs about humanity. Guy Steele and Richard Gabriel, in their history of the LISP programming language, note that by 1988 the term “AI winter” had been introduced to describe the growing backlash and resulting loss of funding for many AI projects.
While Minstrel looked, from inside AI, like a significant improvement over Tale-Spin — with an improved model of human cognition, and a smarter choice of which humans to simulate — from our current perspective the conclusion is, to put it charitably, debatable. Instead Minstrel looks like an inventively constructed Rube Goldberg device, massive and complex, continually reconfiguring a few small pieces at its center, and likely to break if given a piece of even slightly different shape. It attempts to create more complete and structured fictional worlds than Tale-Spin by bringing an author into the process, but gives that author so little to work with that its alternate universes are mostly uninhabited. The end result of trying to simulate a human author, even with compromises toward greater system functionality, is 27,000 lines of code that produce roughly ten stories of “one-half to one page in length” (p. 8).
And that’s all for my examination of Minstrel, but soon I’ll write about the rather different approach and results of Universe.