September 4, 2003

AI and Narrative

by Nick Montfort · 2:26 pm

I just finished a draft of an encyclopedia entry about “artificial intelligence.” It’s for the Routledge Encyclopedia of Narrative Theory and so, of course, it deals with how AI relates to narrative. Following Jill’s example, I have posted this draft in case anyone has comments on who I might have slighted, how I might have misrepresented AI, etc. I’d be grateful for any comments. It is as long as it can be, though, so I will have to cut things out if anything else is to go in there!

Revised: Thanks for your comments! I have replaced the first draft with the copy that I just submitted. (nm, 7 Sep 2003)

Artificial intelligence and narrative
Nick Montfort

Entry for The Routledge Encyclopedia of Narrative Theory

Artificial intelligence (AI) attempts to understand intelligence and to implement computer systems that can learn, reason, and make intelligent decisions. Philosophy, linguistics, and psychology are all involved in AI; computer science has had a central role. AI has dealt with narrative almost from the beginning. Narrative production and understanding are interesting types of intelligent behaviour; organizing stories into narrative may also be essential to human cognition (Schank 1990), a topic that scholars of *narrative intelligence consider. Conversational characters are one kind of system at the intersection of AI and narrative. Other relevant systems are those that generate narratives, either by simulating an environment and what happens in it or by selecting which events will occur based on models of plot or character. Some of what has been learned from these has been employed in interactive systems, including interactive drama.
    The term ‘artificial intelligence’ was coined in 1955. Arthur Samuel had already written a program that played checkers against itself and learned to play at the championship level. *Computer games have remained important to AI research and applications. One early and perhaps archetypal AI system in a different domain was the General Problem Solver (GPS) developed by Allen Newell and Herbert Simon in the late 1950s; it proved mathematical theorems. The standard introductory text (Russell and Norvig 2003) provides good historical notes while describing the essential mathematical and computational techniques of AI.
    ELIZA (Weizenbaum 1966) was the first conversational character. The system communicated in text and simulated a Rogerian psychotherapist. Although it simply matched patterns in the user’s input and looked up responses, the system created the sense of a character and often elicited narratives from users; some found it compelling for psychotherapeutic, literary, or dramatic reasons (Murray 1997:214-247). Its successors included the 1971 simulated schizophrenic PARRY, an academic project by Ken Colby, and the 1984 commercial program Racter by William Chamberlain. A book of poems, The Policeman’s Beard is Half Constructed, was attributed to Racter.
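    The heart of such a system is a pattern-match-and-respond loop, which can be sketched in a few lines of Python. The rules and responses below are invented for illustration (they are not Weizenbaum’s actual DOCTOR script), but they show the kind of rule ELIZA applied:

        import re
        import random

        # Toy ELIZA-style rules: each pairs a regular expression with
        # canned responses; {0} is filled with the phrase the user typed.
        # These rules are invented, not Weizenbaum's actual script.
        RULES = [
            (re.compile(r'i am (.*)', re.I),
             ['Why do you say you are {0}?', 'How long have you been {0}?']),
            (re.compile(r'i feel (.*)', re.I),
             ['Tell me more about feeling {0}.']),
        ]
        DEFAULTS = ['Please go on.', 'How does that make you feel?']

        def respond(utterance):
            for pattern, responses in RULES:
                match = pattern.search(utterance)
                if match:
                    phrase = match.group(1).rstrip('.!?')
                    return random.choice(responses).format(phrase)
            return random.choice(DEFAULTS)  # no rule matched

        print(respond('I am unhappy'))  # e.g. 'Why do you say you are unhappy?'
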
    SHRDLU (Winograd 1972) used text and graphics to simulate a robot that could rearrange blocks. The system could answer questions about what had happened and could narrate the actions that had resulted in the current configuration; it was an important ancestor of interactive fiction such as the 1975-76 Adventure and the 1977-78 Zork. Several later systems had the generation and narration of stories as their main goal. The first of these was TALE-SPIN (Meehan 1976), which used planning to generate fables about animals with simple drives and goals. The system’s memorable, amusing errors revealed how difficult it is to automatically generate interesting stories. Michael Lebowitz’s 1984 UNIVERSE refined this approach and enhanced the representation of characters (embellishing certain stereotypes) to generate soap-opera narratives. MINSTREL (Turner 1994) was a similar system to generate Arthurian tales; it was able to get ‘bored’ and move on to other topics. A recent automatic storyteller is BRUTUS (Bringsjord and Ferrucci 2000), a system that uses a formal model of betrayal and has sophisticated abilities as a narrator. These story-generating systems do not accept user input as they narrate, but they show how AI can be deeply involved with *digital narrative even in non-interactive systems.
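    The planning approach that TALE-SPIN pioneered can be suggested with a deliberately tiny sketch: a character with a drive chains a few operators to satisfy it, and each operator, once applied to the world model, is narrated. The world, operators, and prose templates below are invented for illustration and are far simpler than Meehan’s system:

        # Toy TALE-SPIN-style generation: a character satisfies a drive
        # by chaining simple operators, and each step is narrated.
        world = {'at': {'bear': 'cave', 'honey': 'tree'},
                 'has': set(), 'hungry': {'bear'}}

        def plan_to_eat(actor, food):
            """Means-ends chaining: to eat food you must hold it;
            to hold it you must be where it is."""
            steps = []
            if food not in world['has']:
                if world['at'][actor] != world['at'][food]:
                    steps.append(('go', actor, world['at'][food]))
                steps.append(('take', actor, food))
            steps.append(('eat', actor, food))
            return steps

        def narrate(step):
            act, actor, obj = step
            if act == 'go':
                return f'{actor.capitalize()} walked to the {obj}.'
            if act == 'take':
                return f'{actor.capitalize()} took the {obj}.'
            return f'{actor.capitalize()} ate the {obj} and was no longer hungry.'

        for step in plan_to_eat('bear', 'honey'):
            # Apply the operator to the world, then narrate it.
            if step[0] == 'go':
                world['at'][step[1]] = step[2]
            elif step[0] == 'take':
                world['has'].add(step[2])
            else:
                world['hungry'].discard(step[1])
            print(narrate(step))
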
    In the early 1990s, interactive narrative systems that used AI techniques were developed at Carnegie Mellon University’s Oz Project. Graphical and all-text projects considered how parsing natural language, generating surface texts, and representing the emotional states of characters could be handled in a dramatic framework. One thread of this project continued in the work of Joe Bates’s company Zoesis, while Michael Mateas continued the Oz Project’s work at CMU, completing a graphical, Aristotelian, interactive drama, Façade, in collaboration with Andrew Stern (Mateas 2002). Façade does not have an all-powerful ‘director’ or completely autonomous characters; it allows the characters in the drama to cooperate (even when they are fighting) to attain common goals (e.g., ‘portray a fight and make the user uncomfortable’).
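    The flavour of this coordination can be suggested with a toy sketch in which two characters take turns contributing ‘beats’ toward a shared dramatic goal. Façade’s actual architecture (its ABL behaviour language) is far richer; the names and dialogue below are invented for illustration:

        # Two characters cooperate on a joint dramatic goal even though,
        # within the fiction, they are fighting. This only gestures at
        # the idea of joint behaviours; everything here is invented.
        class Character:
            def __init__(self, name, lines):
                self.name = name
                self.lines = iter(lines)

            def next_beat(self):
                return f'{self.name}: "{next(self.lines)}"'

        def enact_joint_goal(goal, characters, beats):
            # Alternate beats between the characters until the goal's
            # beat budget is spent; both serve the same dramatic goal.
            print(f'[joint goal: {goal}]')
            for i in range(beats):
                print(characters[i % len(characters)].next_beat())

        grace = Character('Grace',
                          ['You never listen.', 'See? Even our guest noticed.'])
        trip = Character('Trip',
                         ["That's not fair.", 'Can we not do this now?'])
        enact_joint_goal('portray a fight and make the user uncomfortable',
                         [grace, trip], 4)
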
    The systems above almost all represent story elements and ways of narrating explicitly, using techniques associated with rule-based or symbolic AI, also called ‘good old-fashioned AI’ (GOFAI). Since the 1990s much work in AI has been done in a different framework, successfully employing statistical methods and connectionist principles. Such approaches, although effective, are often unable to provide an explicit, human-understandable representation of the system’s knowledge or an explanation for its actions, which some see as a disadvantage for work with narrative. Statistical systems have been used to generate poetry (e.g., Jon Trowbridge’s free system Gnoetry) and in some work involving narrative and cognition (e.g., the computational neuroscience project Shruti at Berkeley). Whether GOFAI will continue to be favoured for creative narrative work or whether more recent AI techniques will be used in these endeavours as well remains to be seen.
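    The statistical approach, at its very simplest, can be illustrated with a word-bigram Markov generator: count which words follow which in a source text, then sample a chain from those counts. This shows only the basic idea, not Gnoetry’s actual method, and the training text is a stand-in:

        import random
        from collections import defaultdict

        # A minimal word-bigram Markov generator, the simplest kind of
        # statistical text generation. The corpus is a stand-in; this is
        # not Gnoetry's actual method, only the underlying idea.
        def train(text):
            chain = defaultdict(list)
            words = text.split()
            for a, b in zip(words, words[1:]):
                chain[a].append(b)  # record each observed successor
            return chain

        def generate(chain, start, length=10):
            word, out = start, [start]
            for _ in range(length - 1):
                followers = chain.get(word)
                if not followers:
                    break
                word = random.choice(followers)  # sample in proportion to counts
                out.append(word)
            return ' '.join(out)

        corpus = ('the knight rode to the castle and the knight slept '
                  'and the castle was quiet')
        print(generate(train(corpus), 'the'))
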

References and Further Reading

Bringsjord, Selmer and David A. Ferrucci (2000) Artificial Intelligence and Literary Creativity: Inside the Mind of BRUTUS, a Storytelling Machine. Hillsdale, NJ: Lawrence Erlbaum.

Mateas, Michael (2002) ‘Interactive Drama, Art, and Artificial Intelligence.’ Ph.D. thesis. Technical Report CMU-CS-02-206, School of Computer Science, Carnegie Mellon University.

Meehan, James (1976) ‘The Metanovel: Writing Stories by Computer.’ Ph.D. thesis, Yale University.

Murray, Janet (1997) Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: Free Press.

Russell, Stuart, and Peter Norvig (2003) Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

Ryan, Marie-Laure (1991) Possible Worlds, Artificial Intelligence, and Narrative Theory. Bloomington: Indiana University Press.

Schank, Roger C. (1990) Tell Me a Story: A New Look at Real and Artificial Memory. New York: Charles Scribner.

Turner, Scott R. (1994) The Creative Process: A Computer Model of Storytelling and Creativity. Hillsdale, NJ: Lawrence Erlbaum.

Weizenbaum, Joseph (1966) ‘ELIZA — A Computer Program for the Study of Natural Language Communication between Man and Machine,’ Communications of the ACM, 9:36-45.

Winograd, Terry (1972) Understanding Natural Language. New York: Academic Press.

9 Responses to “AI and Narrative”


  1. William Says:

    There’s progress in the use of Bayesian engines and connectionist modelling for human interactivity at Berkeley’s ICSI. Srini Narayanan is working with logical inference engines that can understand metaphor. See the Shruti project and other work being done under the auspices of the Neural Theory of Language group.

  2. Jill Says:

    Ooh, AI gets more words than weblogs do! :)

    I like what you’ve written: you give a lot of examples, run through a history, and even point out different techniques. It seems good for something on narrative, and for someone like me who knows bits of this but isn’t a specialist it’s a useful summary; I’d probably send my students to read it too. Perhaps move up the reference to Weizenbaum (in brackets) so it’s after the first mention of ELIZA, and also make the sentences in the second paragraph flow more? The first two sentences in the paragraph are fine, but after that they read as too staccato to me: OK separately, but they don’t quite fit together. I had the same problem in my blog definition: it’s really hard to compress lots of information like this. The rest of the text flows well, I reckon. There’s an “are” that is probably meant to be an “as” towards the end, but as you can tell, once I start picking at nits that small I’ve no larger bones to pick. Or something.

    Except is there no way you can include a cross reference to the blogs entry?

    ;)

  3. Dennis G. Jerz Says:

    Looks good, Nick. I copy-edited your draft to get it down to about 930 words, so you have room now to expand (and perhaps put back in some of the details I chopped). Maybe you could include a brief reference to Brenda Laurel’s criteria for meaningful interactivity — I’m assuming there isn’t a separate entry for interactive drama. I tried to make explicit some of the connections that I saw, and perhaps I distorted your meaning in a few places in the process. But now you have a little more room. (I could send you a MS Word file with all the changes marked, if you like.)

    The Works Cited list has Eliza as 1996 not 1966.

    Here’s the body:

    This field attempts to understand intelligence via systems that reason, choose, and learn. While computer science has had a central role in artificial intelligence (AI), philosophy, linguistics, and psychology are all involved. *Narrative has featured strongly in AI almost from the beginning, because the production and understanding of narrative are complex examples of intelligent behaviour, and because organizing stories into narrative may also be essential to human cognition (Schank 1990). Conversational characters, typified by ELIZA (Weizenbaum 1966), are one intersection of AI and narrative. AI can also generate narratives by simulating (at varying degrees of resolution, modularity and flexibility) an environment and/or characters. [I wouldn’t mention interactive drama until after mentioning games.]

    Although the term “artificial intelligence” dates from a 1955 proposal [Citation?], in 19XX Arthur Samuel’s checkers program learned to play at the championship level; *computer games have remained important in the field. In the mathematical domain, one early and perhaps archetypal AI system was the theorem-proving General Problem Solver (GPS) developed by Allen Newell and Herbert Simon in the late 1950s. The standard introductory text on AI (Russell and Norvig 2003) provides good historical notes and the mathematical and computational essentials.

    ELIZA (Weizenbaum 1966) was the first conversational character. Although it simply matched patterns in the user’s typed input and printed out responses emulating those of a Rogerian psychotherapist, the system created the sense of a character and often elicited narratives from users; some found it compelling for psychotherapeutic, literary, or dramatic reasons (Murray 1997:214-247). Its successors included the 1971 simulated schizophrenic PARRY, an academic project by Ken Colby, and the 1984 commercial poetry-generating program Racter by William Chamberlain. Racter was used to create the first book attributed to a computer author, The Policeman’s Beard is Half Constructed.

    SHRDLU (Winograd 1972) used text and graphics to simulate a robot that could rearrange blocks. The system could answer questions about what had happened and could narrate what actions had resulted in the current configuration; it was an important ancestor of interactive fiction such as the 1975-76 Adventure and the 1977-78 Zork. Several later systems had the generation and narration of stories as their main goal. The first of these was TALE-SPIN (Meehan 1976), which used planning to generate fables about animals with simple drives and goals. The system’s amusing and memorable errors illustrate the difficulty of automatically generating interesting stories. Michael Lebowitz’s 1984 UNIVERSE refined the approach and enhanced the representation of characters (embellishing certain stereotypes) in a soap-opera scenario. MINSTREL (Turner 1994) was a similar attempt to generate Arthurian tales, with the innovative ability to get “bored” and move on to other topics. A recent automatic storyteller is BRUTUS (Bringsjord and Ferrucci 2000), a system that uses a formal model of betrayal and sophisticated abilities as a narrator to generate short stories. None of these story-generating systems respond to user input as they run, but they are examples of AI deeply involved with *digital narrative.

    Important work on interactive systems that use AI techniques in narrative took place in Carnegie Mellon University’s Oz Project in the early 1990s. Graphical and all-text projects examined parsing natural language, generating surface texts, and representing the emotional states of characters in a narrative or dramatic framework. One thread of this project continued in the work of Joe Bates’s company Zoesis, while Michael Mateas continued the Oz Project’s work at CMU, completing a graphical, Aristotelian, interactive drama, Facade, in collaboration with Andrew Stern (Mateas 2002). Facade does not have an all-powerful “director” or completely autonomous characters; it allows the characters in the drama to cooperate (even when they are fighting) to attain common goals (e.g., “portray a fight and make the user uncomfortable”).

    The systems above almost all represent story elements and ways of narrating explicitly, using techniques associated with rule-based or symbolic AI, also called “good old-fashioned AI” (GOFAI). Beginning in the 1990s much work in AI has been done in a different framework, successfully employing statistical methods and connectionist principles. Such approaches, although effective, do not include an explicit, human-understandable representation of the system’s knowledge, which some see as a disadvantage for work with narrative. Statistical systems have been used to generate poetry; Jon Trowbridge’s Gnoetry is one free system for doing this. Whether GOFAI will continue to be favoured for narrative work or whether more recent AI techniques will be used in these endeavours as well remains to be seen.

  4. andrew Says:

    Looks good to me… you’ve managed to pack a lot of information into a small space.

    A follow-up reaction appears in a new top-level post.

  5. nick Says:

    Thanks to all for the comments. I did add a mention of Shruti and a note that connectionist AI work involving narrative is happening; I fixed the several things Jill mentioned and also traced through Dennis’s version making edits in my draft. Dennis caught a good bit of “legacy text” such as my mention of how “four researchers” coined the term AI. I originally had named the four, then took that out as too wordy for a short entry like this, but for some reason left that useless bit of information in there.

    Some of the suggestions remind me of editorial situations that I found myself in as a journalist. I originally wrote:

    “This field attempts to understand intelligence and to implement computer systems that can learn, reason, and make intelligent decisions.”

    The suggested edit:

    “This field attempts to understand intelligence via systems that reason, choose, and learn.”

    The edited statement excludes pretty much every researcher at my university from the field of artificial intelligence. It asserts that you are only in the field of AI if you build your systems for the purpose of understanding intelligence. The original statement allows both attempts to understand intelligence (e.g., Minsky’s society of mind theory) and attempts to build systems that act intelligently (e.g., classifiers, theorem-provers, parsers), whether or not these systems are supposed to shed any light on the nature of intelligence.

    In this case the suggestion was part of a free (and much-appreciated) peer review, but I mention it since this is the type of innocent-looking edit I’ve often seen inflicted on my articles when I have no chance to see or approve the change before publication. [*] When it’s something like the definition of a whole field of research, subtleties can matter a great deal.

    Hopefully my article is now thoroughly Britished, with proper punctuation and spelling. I didn’t change the structure of the opening paragraph due to specific instructions about how it should outline all the main points of the article, and other instructions guided me in other places.

    Amusingly, the one change I didn’t make was to correct 1996 to 1966 for the Eliza article. I’ll send off an email with that correction.

    I’ll have to wait to see if this was actually the take on the topic that the volume editors wanted! I hope so…

    [*] Of course, my editors made my articles better at times, too. But what I remember are the times they maimed my writing.

  6. Dennis G. Jerz Says:

    I’ll be sure to use this example in my journalism class! :-)

    I did that edit late at night, and now I can see where I distorted your meaning. But I hope my word-whacking did leave room for some other good stuff — beyond just repairing the damage I did!

    My 200-level journalism class will read “It Ain’t Necessarily So,” a good book of case-studies that show how the media either gulp down and spit back biased press releases, or hype tiny details in otherwise unremarkable studies. (The recent scare about yet another big space rock hitting the Earth is a case in point.)

  7. nick Says:

    Lots of us write and edit late at night — I didn’t mean to look a gift editor in the mouth, and I did appreciate the comments and copy-editing, which helped me fix a bunch of unwieldiness and tighten up the article.

    Sheesh, I just noticed that I misspelled ‘Aristotelian,’ also …

  8. Dennis G. Jerz Says:

    No problem. Happy to help.

  9. michael Says:

    Nick says:

    The suggested edit:

    “This field attempts to understand intelligence via systems that reason, choose, and learn.”

    The edited statement excludes pretty much every researcher at my university from the field of artificial intelligence. It asserts that you are only in the field of AI if you build your systems for the purpose of understanding intelligence. The original statement allows both attempts to understand intelligence (e.g., Minsky’s society of mind theory) and attempts to build systems that act intelligently (e.g., classifiers, theorem-provers, parsers), whether or not these systems are supposed to shed any light on the nature of intelligence.

    Yes, AI from the beginning has had these two camps: those who build systems to understand human intelligence (cognitive science – Allen Newell and Herb Simon are paradigmatic examples) and those who build systems to do things that, if done by a human, would require intelligence (that is, artificial intelligences that may or may not have anything to do with human intelligence). Interestingly, I would place Marvin Minsky in the latter camp. In Semantic Information Processing (MIT Press, 1968) Minsky defines AI as “…the science of making machines do things that would require intelligence if done by men.” The Society of Mind is not a theory of human cognition (like Soar or Act* are), but a sketch of a decentralized, heterogeneous architecture that exhibits behavior that would require intelligence if done by people. For Minsky, human brains are one element of the collection of physical systems that exhibit intelligence.
