December 15, 2003

Interactive Storytelling Exam

by Nick Montfort · 1:35 pm

Although I should have been studying for my preliminary exam on Wednesday, today in the department I heard an unusual WPE-II presentation: “Interactive Storytelling: Coupling the Emotional Range of Drama with the Engagement of Interactivity.” (The WPE-II is the paper-and-talk that is the last of the preliminary exams here in the Department of Computer and Information Science at Penn, and will hopefully be my next stop after Wednesday.) The topic of Michael Johns’s talk (which was possibly closer to interactive drama than interactive storytelling) was hardly alien to me, but it was a bit different from the graph-theoretic or expectation-maximizing algorithmic goodness that we usually get around here.

Johns’s talk covered Façade by our very own Mateas and Stern, a system by Cavazza, Charles, and Mead at the University of Teesside, and Actor Conference by the Liquid Narrative Group at NC State. Although relatively few aspects of Façade were discussed, and there was little mention of the ability of actors to coordinate in Façade (discussed as the main feature of Actor Conference), I (and I think others) still got the impression that Façade was the real state-of-the-art system here. Since I know less about “the competition,” it was interesting to see Façade in an interactive drama shootout.

I was left uncertain about how the second system discussed differed from Tale-Spin; it used the simple idea of autonomous characters pursuing conflicting goals. As best I could tell, the differences are the Unreal interface, the use of characters from Friends, the ability of the user to tinker with the world during execution of the program, and a new planning system. But it seemed basically like $talespin =~ s/bear/Ross/; $talespin =~ s/honey/box of chocolates/.

In some ways, I must say that the conversation reminded me of some less productive humanistic/interdisciplinary discussion of these topics five or ten years ago: how do we make it look like things are less constrained than they really are? Pull the user through the experience along a particular path? Keep the user from doing things that mess up our great story? I suppose we had a discussion about this discussion not too long ago here. The conclusion at this talk was, I think, pretty much what I’ve concluded: that if you’re asking lots of these sorts of questions all the time, you’ve probably framed the issue in an unproductive way. I just wish we were through with that conversation, I guess.

On the conclusions slide was the comment: “It has been notoriously difficult to incorporate the desires of professional writers into games.” Leading me to wonder: How about the desires of professional jugglers? (Or, to make less of a joke, architects?) I don’t think the difficulty of incorporating professional desires of any sort is a problem in and of itself — perhaps the issue is that traditional writing tends to help us much less than we expected in creating interactive experiences. What’s wrong in this case? It could be that we don’t know how to apply traditional writing techniques properly — or it could be that traditional writing doesn’t directly map onto this new form in any useful way.

I would suggest that the ideas of potential narrative (or potential drama) and of the simulated world provide much more effective ways of looking at these sorts of new media creations. “Game” and “story” map onto things like Façade much less directly, if at all. Brenda Laurel and others have been applying ideas from drama to interactive experience with care for decades, yielding some good results, but that’s because they made the implicit realization that an interactive computer program isn’t itself a drama; it’s part of a system that can provide an interactive and dramatic experience.

Anyway, I’m pleased that there’s interest in these topics in the CIS department. I hope that some computer science insights can aid in addressing problems faced in many disciplines when considering these sorts of systems, and I hope that students in CIS will be willing to look to narratology, potential literature, and other types of study which could prove as productive as Brenda Laurel’s interjection of a dramatic perspective into computing.

6 Responses to “Interactive Storytelling Exam”


  1. andrew Says:

    That sounded like an interesting presentation. Surfing around, I believe Johns is collaborating with Barry Silverman’s group, which was briefly discussed in a previous blog post. Cool to see that the overall effort at Penn is even bigger than I realized.

    I think the square-peg-in-a-round-hole approaches you mentioned come up sometimes because researchers seem to want to apply off-the-shelf AI techniques to the interactive story problem relatively quickly and painlessly, when actually it will take quite a bit of evolving and innovating from previous techniques, languages, and architectures to create new ones that are effective in this domain.

  2. Barry Silverman Says:

    I think Michael Johns was trying to point out that the AI enterprise, particularly that of planning systems, has not led us out of the woods in terms of providing NPCs with ways to autonomously generate story paths and related dialog that is engaging in an IF. Indeed, they are tackling difficult problems having to do with story failure and reconstruction, but these approaches inherently seem to be variants of fixed story/dialog graph approaches. To some of us thinking about dialog-based games and rhetorical conflicts, the solution seems not to lie in that direction.

    I am quite intrigued by the notion of autonomous agents capable of affective reasoning that can operate in action settings, but which can also operate on the rhetorical plane. Please allow me a digression into architecture for a moment. Our agents at present have a small physiology (stomach, energy depletion, fatigue, stress, etc.) and a reasonably full implementation of the OCC model for emotional construals of world events. We have embellished this model with a Bayesian-weighted set of Goal, Standard, and Preference (GSP) trees, which effectively encapsulate an agent’s personality, culture/backstory, and current motivations/aspirations. All this is reservoir-based, so that GSP-related emotions activate and decay as events occur in the world and based on the agent’s relationships to others in the world. This is very much in the spirit of OZ. A departure we take is to also use the emotions as utility, as a somatic marker (gut instinct), in order to drive agent decisions about their next actions and utterances in the world. Of course we constrain agent decision-making according to stress as well. For now, in order to manage perception, we mark up the world (agents and objects) in terms of what they afford to the agent looking at them. How we handle affordances has been the subject of several papers on my website, as has the rest of the agent model I just discussed.
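
    To make that slightly more concrete, here is a toy sketch in Python (emphatically not our production code; every class, name, and number below is invented for illustration) of the general pattern: emotion reservoirs fill on GSP-weighted construals of world events, decay over time, and are then reused as utility when the agent picks its next action.

        import math

        class EmotionReservoir:
            """One emotion whose level fills on relevant events and decays each tick."""
            def __init__(self, name, decay_rate=0.1):
                self.name = name
                self.level = 0.0
                self.decay_rate = decay_rate

            def activate(self, intensity):
                # An OCC-style construal of a world event adds to the reservoir.
                self.level = min(1.0, self.level + intensity)

            def tick(self, dt=1.0):
                # Emotions decay between events.
                self.level *= math.exp(-self.decay_rate * dt)

        class ToyAgent:
            def __init__(self, gsp_weights):
                # gsp_weights: importance of each Goal/Standard/Preference node, 0..1.
                self.gsp_weights = gsp_weights
                self.emotions = {e: EmotionReservoir(e) for e in ("joy", "distress")}

            def construe(self, gsp_node, impact):
                # Events that help or hurt a weighted GSP node activate emotion
                # in proportion to that node's weight.
                w = self.gsp_weights.get(gsp_node, 0.0)
                emotion = "joy" if impact > 0 else "distress"
                self.emotions[emotion].activate(abs(impact) * w)

            def choose_action(self, candidates):
                # Somatic-marker style choice: score each candidate action by the
                # GSP-weighted payoff it is expected to produce, pick the best.
                def expected_utility(action):
                    return sum(self.gsp_weights.get(node, 0.0) * delta
                               for node, delta in action["expected_impacts"].items())
                return max(candidates, key=expected_utility)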

    For his dissertation, which is about to start, I have been urging Mike to try to explore verbal conflict (an essential element of good dramas, as are interesting characters) and persuasion situations. Heart Sense Game is a non-linear IF we created (about 3 hours of play if you played it repeatedly) that requires the player to help the hero solve a crime, make career decisions, and possibly win a romance. Along the way, the various allies who might give you some useful info succumb to heart attacks and need your help getting to the ER before they can help you. As it turns out, a major cause of morbidity and mortality related to a first heart attack is delay, not just from symptom-recognition issues but also from attitude, social norms, and efficacy issues. Thus, there is substantial resistance and dialog from the victims — they are pretty dysfunctional with respect to their own health. This is not to say that HSG is a great game, just that it is a dialog and persuasion game. Given the target age group, I did not want a lot of sophisticated gameplay mechanics; something in the IF realm seemed most appropriate. Mike Johns created the dialog graph tool that allowed us to input the nonlinear storylines and that could be marked up with choreographic, audio, and other media instructions. The Flash-based characters in HSG are all pre-canned, as is their dialog — there is no use of our autonomous agents here — and the nonlinearity arises from user selections from roughly 1,100 dialog choices (the various edges of the dialog graph) for the avatar. All this was just summarized for the recent ICVS in France, which is reported on elsewhere on this blog.
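
    For anyone who has not seen a tool like this, a stripped-down, hypothetical sketch of the underlying structure (in Python; this is not Mike’s actual tool, and the lines and markup below are invented) might look like the following: each node holds an NPC line plus media markup, and the player’s choices are the outgoing edges.

        from dataclasses import dataclass, field

        @dataclass
        class DialogNode:
            npc_line: str
            markup: dict = field(default_factory=dict)   # e.g. audio cue, gesture
            choices: list = field(default_factory=list)  # (player_line, next_node_id)

        graph = {
            "start": DialogNode(
                "It's just a little chest pain. Don't fuss over me.",
                markup={"audio": "wince.wav", "gesture": "clutch_chest"},
                choices=[("That sounds serious. Let's get to the ER.", "resist"),
                         ("Okay, if you say so.", "delay")]),
            "resist": DialogNode("Hospitals are for sick people. I have work to do.",
                                 choices=[("Your family needs you healthy.", "end")]),
            "delay": DialogNode("See? I told you it was nothing."),
            "end": DialogNode("...Fine. But you're driving."),
        }

        def play(graph, node_id="start"):
            # Walk the graph, letting the player pick an outgoing edge at each node.
            while graph[node_id].choices:
                node = graph[node_id]
                print(node.npc_line)
                for i, (line, _) in enumerate(node.choices):
                    print(f"  {i}. {line}")
                node_id = node.choices[int(input("> "))][1]
            print(graph[node_id].npc_line)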

    Also, elsewhere on this blog site, I noticed that Chris Crawford suggested that, for a decent interactive story, players will need a minimum of 1000 verbs, ideally 2000-5000. Andy Stern mentioned that in Façade the goal is to give the player a non-trivial new set of things to do, such as socializing, taking sides in arguments, flirting, saying provocative things, all performable through a natural language & gesture interface. (Internally, everything you say or do gets turned into one or more of ~40 parameterized “discourse acts”.)
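
    Just to fix the idea for readers (this is not Façade’s actual NLP pipeline; the act names, patterns, and parameters below are purely illustrative), funneling free-form player input into a small set of parameterized discourse acts might look roughly like this:

        import re
        from dataclasses import dataclass

        @dataclass
        class DiscourseAct:
            act_type: str        # e.g. "agree", "disagree", "flirt"
            target: str = ""     # which character or topic it is aimed at
            strength: float = 1.0

        PATTERNS = [
            (r"\b(i agree|you're right|exactly)\b", "agree"),
            (r"\b(no way|that's wrong|i disagree)\b", "disagree"),
            (r"\b(you look|lovely|charming)\b", "flirt"),
        ]

        def map_to_acts(utterance, addressed_to):
            # Turn one surface utterance into one or more discourse acts; even
            # unrecognized input yields a weak generic act for the drama manager.
            text = utterance.lower()
            acts = [DiscourseAct(act, target=addressed_to)
                    for pattern, act in PATTERNS if re.search(pattern, text)]
            return acts or [DiscourseAct("uninterpreted", addressed_to, 0.3)]

        # map_to_acts("No way, Grace, that's wrong", "Grace")
        #   -> [DiscourseAct(act_type='disagree', target='Grace', strength=1.0)]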

    These ideas are quite significant, and Mike and I have been exploring ways to leave the safety of the fixed (though nonlinear and bushy) dialog graph approach and step into the autonomous-agent speech act world. There we use the idea that each agent (and the human) has up to a few dozen actions they can take at any given time: influence the situation, persuade others, protest, observe, engage in conflict or flee, mill about, flock, and other physical-world actions. Currently we are adding utterance formulation for the emotions and the goals, standards, and preferences that are in an agent’s utility formulation and choice process. A struggle is to figure out how to allow this potentially wide range of utterances to be noticed, in the affordances sense, by the perception module that our agents currently incorporate. Some other interesting research questions pertain to hearing and noise, focusing and filtering, ascribing intentionality, and first- vs. second-level intentionality of the response processor. A lot of sitcom plots seem to revolve around just these types of misunderstandings. Add this to rhetorical conflict settings and we are back in the thick of it with persuasion and dialog. Well, it is very late at night and tomorrow at 10 AM I have to meet Mike about this topic — actually he will be showing me some prototype he whipped up.
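
    One way the perception question might be framed (this is only a sketch of the problem, not a description of our implementation, and all names and numbers are illustrative assumptions) is to post each utterance to the world annotated with what it affords a listener, and have each agent’s perception module filter it by audibility and attention before the decision cycle ever sees it:

        import random
        from dataclasses import dataclass, field

        @dataclass
        class Utterance:
            speaker: str
            text: str
            affords: dict = field(default_factory=dict)  # e.g. {"persuade": 0.7}
            loudness: float = 1.0

        @dataclass
        class Listener:
            name: str
            attention: dict = field(default_factory=dict)  # per-speaker attention, 0..1

            def perceive(self, utt, distance, noise):
                # Audibility falls off with distance and ambient noise.
                audibility = utt.loudness / (1.0 + distance) - noise
                if audibility <= 0:
                    return None  # never heard it at all
                attn = self.attention.get(utt.speaker, 0.5)
                # Half-attended utterances lose some of their affordances, which
                # is one way misunderstandings (and sitcom plots) get started.
                noticed = {k: v for k, v in utt.affords.items()
                           if random.random() < audibility * attn}
                return noticed or None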

    I am new to this blog, and I guess I had a lot to say all at once. Forgive me for rambling on so. In point of fact, I probably have more to learn from reading everyone’s comments than I have novel ideas to add to this discussion.

  3. nick Says:

    Barry, thanks much for your post – you are certainly welcome to ramble here, that’s what we all do! I enjoyed reading your and Michael Johns’s paper on AESOP and wish I had read more of the backstory in your earlier articles. (But if I ever get more time, I’m supposed to spend it reading papers in my research area!) Creating an authoring system that can actually work to build diverse systems is an extremely difficult goal. Façade is certainly not going to pop out of an authoring system anytime soon, although parts of it, like ABL, can be generalized nicely. That is a defect from one perspective. But it’s important that research be done on improving both interactive drama systems themselves and the ways of creating them.

    I have had a bit of exposure to educationally motivated systems for creating interactive storytellers. I developed a Web version of Marina Umaschi Bers’ SAGE, called WISE; amusingly, the first character we developed as an example was Aesop. I also talked some with Henry Jenkins and some others in the early stages of the Games to Teach Project, which has expanded into The Education Arcade. Educational systems present interesting challenges, since they are trying to achieve something other than interactive/dramatic/gaming/narrative excellence. On the other hand, there are more established ways to evaluate how well they function.

    The idea that authors should be insulated from the workings of the authoring system and the complexity of the game engine is deeply rooted in principles important to computer scientists, namely encapsulation and abstraction. But I suspect (and you guys are welcome to correct me) that Michael, Andrew, and Noah all agree that, at least at the current time, such hiding of implementation details in the authoring system hasn’t allowed people to make interesting creative works. (I don’t know what Scott’s opinion on the matter is, but he can chime in, too, of course.) In interactive fiction, there are systems like ADRIFT that are touted as easy to use; these insulate the authors from the IF engine very well. And there are full-scale, general-purpose programming languages that are particularly suited for interactive fiction, such as Inform and TADS. Now, ADRIFT is a recent development, but there were earlier systems like it. As it turns out, just about all the interactive fiction that people like tends to have been programmed in one of the general-purpose languages, and pretty much none of the good stuff was written in the simpler systems.

    The situation may be different if education is your goal, of course, and it may be different for different sorts of interactive systems. I guess there are alternate explanations that could be advanced, too: Some people are willing to put in the large investment in time that is required to code up a game in the more general system; that group correlates very well with the group of people who are going to pay attention to all the other aspects of interactive fiction. Still, it seems that the details that are hidden have, at least in text-based IF, ended up being the ones that people need access to in order to create great games and great simulated worlds. This is one reason I think the problem of authoring systems is a hard one – it seems they have to be very general to work well.

    I definitely think that autonomous characters in a story-generating system should be aware both at the level of story and of discourse; this is a very good point. The simulation/story overlay that Michael Johns made was quite interesting, too, and I wish I’d written about that some more. It’s an interesting way to look at the functioning of different systems. I tend to think in terms of potential story or potential narrative, myself; the two ideas of simulation and story are interoperating, rather than overlaid, in this view.

    Anyway, I hope to hear more (in the department, now, as well as on the blog) about how these sorts of issues play out in the context of the development of real systems.

  4. Mike Johns Says:

    From my perspective, authoring problems seem to arise as much from human factors as from technical ones. There really aren’t that many people out there who are as comfortable writing stories as they are writing code. Early in the development of Heart Sense we realized we were in over our heads as far as the creation of the story was concerned, and it seemed natural enough to reach out to people experienced in creative writing for help. We figured that if the underlying system was easy enough to understand, we could build an authoring tool that they would be able to use to explore their ideas.

    It turned out that they were just about as uncomfortable doing that as we were trying to come up with the basics of a decent story, so what we ended up with was a weird division of responsibilities that involved them writing rather complicated Word documents. They were more comfortable finding a way to incorporate branching into a word processor (without using hypertext) than they were using our tool, which supported it natively(!). On one hand that was probably an interface design failure, but I suspect that it will always be an uphill battle to uproot people from the tools they’re most comfortable with while the task appears somewhat similar to what they normally do.

    The other major component of the disconnect was that we were asking people experienced in creating non-interactive stories to create something interactive, and as has been discussed here quite a bit, a large part of an author’s toolkit seems to be rendered ineffective when making that switch. It’s probably no accident that games based on movies or books tend to be so bland, and that movies based on games usually lack many of the most fundamental components that make a movie interesting.

    I guess the lesson to be learned here is that this type of thing seems to require people willing to challenge a large part of the mindset that has gotten them to where they are, and that it’s ok (and probably necessary) to feel uncomfortable and even incompetent while trying to evolve.

  5. Marc Cavazza Says:

    … I was left a bit puzzled by the comparison between our “Friends” system and Tale-Spin … which seemed to me to reflect certain misconceptions sometimes encountered when comparing Interactive Narrative with early NLP systems dealing with stories.

    Tale-Spin uses/used planning for story generation within the then-dominant paradigm of plans as models for (textual) story understanding. It can also be noted that planning has been a model for text generation in the general case (see work by Appelt).

    Real-time planning for embodied actors is quite a different problem. You mention the following apparent differences from Tale-Spin: “… as best I could tell, the differences are the Unreal interface, the use of characters from Friends, the ability of the user to tinker with the world during execution of the program, and a new planning system.”

    Wow! Just the ability to interfere dynamically with the story world (and characters) is actually the essence of what the Interactive Storytelling community is trying to achieve.

    This is why most real-time 3D Interactive Systems (see those presented at ICVS 2003 or TIDSE) are actually based on planning, whether at plot level (e.g. Michael Young’s system) or character level (in our case).

    I still agree with you that we have to acknowledge the contribution of foundational work, but in that case we should probably look at the work of Norman Badler on plan-based animation of virtual humans. And I am sure that being at UPenn you would be familiar with that work.

    Marc

  6. nick Says:

    I am sure that being at UPenn you would be familiar with that work.

    I’ve heard of the work, but actually, I am not really familiar with it – neither my computer science research nor my new media research deals with planning or computer graphics at all, so to learn about such things I count on what I read in places like this and on what I hear in presentations around the department.
