March 6, 2006
This is the last in my series (1 2 3 4) of posts about two story generation systems first described in publications in the mid-1980s: Minstrel and Universe. I think they’re not just interesting in themselves, but also in the lessons they offer for how we might approach story generation today (including interactive story generation). In fact, I think they’re helpful for thinking about how we might design any system meant to exhibit behaviors we consider “intelligent” — behaviors meant to be interpretable by a human audience as similar to things we do ourselves.
The model of planning in Universe is somewhat different from that in systems like Tale-Spin and Minstrel. Because Universe is not aimed at producing stories that end, but rather serial melodramas on the model of Days of Our Lives, its plans are never aimed at bringing things to completion. In Tale-Spin, and most AI work on planning, the focus is on achieving goals: Joe Bear is hungry, and the planning process tries to get some food and ingest it so that his hunger will go away. In Minstrel the plans are to flesh out a PAT schema, meet the other goals, and complete the story. Universe, on the other hand, plans based on “character goals” and “author goals.” Character goals are monitored to maintain consistency, while the primary impetus for story generation comes through author goals. And the author has goals for keeping things going, rather than bringing them to conclusion. The result is that Universe’s plans can never specify a complete course of action, only one that seems appropriate given the current circumstances in the story’s universe.
High-level author goals are carried out by lower-level goals, and planning for both takes place through “plot fragments.” A higher-level goal to which Lebowitz gives particular attention is “churning” lovers, keeping them separated by new obstacles each time the previous set is cleared up. The forced marriage of Liz and Tony, on Days of Our Lives, is regarded by Lebowitz as a plot fragment that achieves (among other possible goals) the “churning” of Liz and Neil. This makes apparent how differently character goals are treated in Universe, as opposed to systems such as Tale-Spin. As Lebowitz writes about “churning”:
Obviously this goal makes no sense from the point of view of the characters involved, but it makes a great deal of sense for the author, and, indeed, is a staple of melodrama (“happily ever after” being notoriously boring in fiction, if not in life). Universe has a number of other plot fragments [besides forced marriage] for achieving this goal, such as lovers’ fights and job problems. (1985, 488)
Universe maintains a representation of outstanding author and character goals. The storytelling cycle begins with choosing an author goal that has no unmet preconditions. A plot fragment is selected that will achieve that goal, with preference given to fragments that also achieve other currently active goals. This plot fragment is then made part of the story — producing new characters, events for output, and new goals as appropriate. Even “forced marriage” is a relatively high-level plot fragment, which needs to be filled out with lower-level fragments for the woman dumping her husband, the husband getting together with another woman, the threat from the parent being eventually eliminated, and so on. The potential choice of a number of different fragments and characters for each of these elements increases the variability of the story structures Universe produces.
As this process takes place, Universe doesn’t simply choose characters and plot fragments randomly. First, the personalities and plans of characters constrain which can play roles in the fragments (and, further, some fragments require the participation of characters that have particular stereotypes). Second, with each fragment Universe tries to select events and include characters that will help meet other active authorial goals. This helps create complexly interwoven plots, such as those of serial melodramas, in which events often contribute to several active storylines.
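Lebowitz’s implementation details aren’t reproduced here, but the selection logic just described — pick a fragment that achieves the current author goal, preferring fragments that also advance other active goals — can be sketched roughly as follows. This is a hypothetical illustration; all the names and data structures are mine, not Universe’s.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlotFragment:
    name: str
    achieves: frozenset  # author goals this fragment can satisfy
    subgoals: tuple      # lower-level goals it expands into

def choose_fragment(goal, active_goals, library):
    """Select a fragment achieving `goal`, preferring one that also
    satisfies other currently active author goals."""
    candidates = [f for f in library if goal in f.achieves]
    if not candidates:
        return None  # analogous to the trace ending "no acceptable plans"
    return max(candidates, key=lambda f: len(f.achieves & active_goals))

# Illustrative fragment library (goal names invented for the sketch)
library = [
    PlotFragment("FORCED-MARRIAGE", frozenset({"churn", "together"}),
                 ("do-threaten", "worry-about", "together",
                  "eliminate", "do-divorce")),
    PlotFragment("LOVERS-FIGHT", frozenset({"churn"}), ()),
]

best = choose_fragment("churn", {"churn", "together"}, library)
print(best.name)  # FORCED-MARRIAGE — it also advances the "together" goal
```

With both "churn" and "together" active, FORCED-MARRIAGE wins over LOVERS-FIGHT because it serves two active goals at once — which is just the interweaving preference described above.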
Below is an example of an actual Universe output for a forced marriage storyline (1985, 491), using the same characters as the Days of Our Lives plot summary in my first post about Universe. Those lines of the output that begin “>>>” represent low-level events, whereas other text provides a trace of the planning process. The system begins with two active goals: to churn Liz and Neil, and to get Neil together with Renee.
*(tell ‘(((churn liz neil)(together renee neil))))
working on goal — (CHURN LIZ NEIL)
Several plans to choose from FORCED-MARRIAGE LOVERS-FIGHT JOB-PROBLEM
— using plan FORCED-MARRIAGE
working on goal — (DO-THREATEN STEPHANO LIZ “forget it”)
— using plan THREATEN
>>> STEPHANO threatens LIZ: “forget it”
working on goal — (WORRY-ABOUT NEIL) — using plan BE-CONCERNED
Possible candidates — MARLENA JULIE DOUG ROMAN DON CHRIS KAYLA
Using MARLENA for WORRIER
>>> MARLENA is worried about NEIL
working on goal — (TOGETHER * NEIL)
Several plans to choose from SEDUCTION DRUNKEN-SNEAK-IN SYMPATHETIC-UNION JOB-TOGETHER
Possible candidates — DAPHNE RENEE
Using RENEE for seducer
>>> RENEE seduces NEIL
working on goal — (ELIMINATE STEPHANO)
Several plans to choose from ATTEMPTED-MURDER EXPOSE
— using plan ATTEMPTED-MURDER
Using ALEX for KILLER
>>> ALEX tries to kill STEPHANO
working on goal — (DO-DIVORCE TONY LIZ) — using plan DIVORCE
>>> LIZ and TONY got divorced
working on goal — (TOGETHER LIZ NEIL)
no acceptable plans
Other plot fragments that Universe can use for churning include LOVERS-FIGHT, JOB-PROBLEM, PREGNANT-AFFAIR, ACCIDENT-BREAKUP, STEAL-CHILD, COLLEAGUE-AFFAIR, and AVALANCHE-ACCIDENT. The variations on these depend on the characters involved. For example, in Lebowitz’s 1987 paper he shows output from churning Joshua and Fran. Given their jobs, they can experience the job problems of BUREAUCRAT and SLEAZY-LAWYER. Given other aspects of their characters, they can fight about IN-LAWS, MONEY, SECRETS, FLIRTING, and KIDS.
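The dependence of fragment variations on character attributes might be pictured as a simple lookup plus filtering, sketched hypothetically below. The attribute and variation names are illustrative (loosely echoing Lebowitz’s 1987 examples), not taken from Universe’s actual code.

```python
# Hypothetical sketch: fragment variations keyed to character attributes.
# Job-problem variations depend on occupation.
JOB_PROBLEMS = {
    "civil-servant": "BUREAUCRAT",
    "lawyer": "SLEAZY-LAWYER",
}

def fight_topics(a_traits, b_traits):
    """Fight topics available to a couple, given their combined traits."""
    pool = a_traits | b_traits
    topics = []
    if "married" in pool:
        topics.append("IN-LAWS")
    if "has-kids" in pool:
        topics.append("KIDS")
    if "secretive" in pool:
        topics.append("SECRETS")
    if "flirtatious" in pool:
        topics.append("FLIRTING")
    return topics

print(fight_topics({"married", "secretive"}, {"flirtatious"}))
# ['IN-LAWS', 'SECRETS', 'FLIRTING']
```

The point of the sketch is only that the same high-level fragment (LOVERS-FIGHT, JOB-PROBLEM) expands differently for different characters, multiplying the variety available from a small fragment library.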
While Universe at this time contained only 65 plot fragments, that was already enough to generate stories with more variety than Minstrel and more structure than Tale-Spin in their completed states. Further, its fictional worlds were much more complex and complete than those of either system, despite the fact that Minstrel was not finished until well after this point in the development of Universe. In short, eschewing the simulation of human cognitive processes was a demonstrably powerful approach, from the perspective of fiction — but where did it leave the AI researcher?
What kind of author?
Though it is clear that Universe isn’t designed to simulate a human author through its internal processes, at the same time, the main processes it uses for storytelling are referred to as “author goals.” This may lead one to wonder, “Which author is that?”
Also, while no cognitive science model of creativity is given as the basis of the system’s design, Lebowitz still gestures toward the cognitivist view of AI, writing in one paper that a goal of Universe is “to better understand the cognitive processes human authors use in generating stories” (1984, 172) and in another that we “can expect research into extended story generation to … give us insight into the creative mechanism” (1985, 484). Exactly how this will take place is never explained.
It is understandable that, less than a decade out of Schank’s lab, Lebowitz was unable to entirely drop cognitivist language in discussing his AI projects. In fact, to some extent, publication in AI journals and at AI conferences may have demanded it. In Lebowitz’s last paper on Universe, at the 1987 conference of the Cognitive Science Society, he even suggests that case-based reasoning may be a future direction for Universe‘s plot generation (p. 240).
But the next year Lebowitz left Columbia’s faculty, and by the publication of Ray Kurzweil’s book The Age of Intelligent Machines (1990) Lebowitz was riding out the AI winter outside academe. In the book, accompanying a capsule description of Universe, he is listed as a vice president of the Analytical Proprietary Trading unit at Morgan Stanley and Company (p. 390).
And it was only a few years later, in the early 1990s, that a set of AI techniques with no pretense to modeling human intelligence would rocket into the public consciousness: statistical techniques. For generating stories, however, better results are still achieved by processes specified explicitly by humans, rather than learned through the statistical examination of large pools of data. And the tools of traditional AI — those of both the scruffies and the neats — remain powerful formulations for specifying such processes. Given this, such tools remain in use, but in a different environment. The successes of statistical AI, hard on the heels of “Good Old-Fashioned” AI’s (GOFAI’s) winter, have shaken to the core the belief that these tools can claim a special status. They are now simply ways of building a system, almost like any other. And, in this environment, people have begun to regard the task of story generation quite differently. We see this, for example, with projects like Michael’s Terminal Time.