May 26, 2004

Breaking Up, Broken Down

by Andrew Stern · 7:16 pm

Continuing the theme of AI systems that use language: here’s a new paper by Rob Zubek at Northwestern, who has been thinking hard about how to make robust, richly interactive conversational characters. His PhD research is focused on building an architecture that structures conversations as vast collections of reactions to player input, arranged in hierarchies, that compete to understand and respond to the player. Multiple possible threads of conversation are all listening simultaneously to whatever the player says, and each updates its local probability of where it believes it is in the conversation. Assuming enough content is authored, this gives the conversation a variety of believable responses at any time, at varying levels of coherence; the system can fail gracefully, and perhaps still move the conversation forward, when it has trouble understanding the player or doesn’t have a good response.

Within this architecture, Rob is building The Breakup Conversation, in which the player is given the goal of successfully dumping their significant other, played by the system. Sample dialogue from The Breakup Conversation is included in the paper.

An interesting feature of such an approach is that “where you are” in the overall conversation can’t be pinpointed to a single place; instead, the conversational state at any moment is the collection of potential directions the conversation could move in next.
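To make that concrete, here is a minimal sketch of the general idea (not Rob’s actual architecture; the class names, method names, and scoring are my own invention), in which several conversation threads listen to the same player input, each updates a local confidence score, the most confident thread responds, and a generic fallback keeps things moving when nothing scores well:

```python
import re

class Thread:
    """One potential line of conversation, e.g. 'why are you doing this?'"""
    def __init__(self, name, cues, responses):
        self.name = name
        self.cues = cues            # keywords suggesting this thread is active
        self.responses = responses  # what to say if this thread wins the turn
        self.belief = 0.1           # local estimate of "are we in this thread?"
        self.turn = 0

    def listen(self, utterance):
        """Update belief from how well the utterance matches this thread's cues."""
        words = set(re.findall(r"\w+", utterance.lower()))
        hits = len(words & self.cues)
        evidence = hits / (len(self.cues) or 1)
        # crude smoothing; a real system would use proper probabilistic updates
        self.belief = 0.5 * self.belief + 0.5 * evidence

    def respond(self):
        reply = self.responses[min(self.turn, len(self.responses) - 1)]
        self.turn += 1
        return reply

class Conversation:
    def __init__(self, threads, fallback):
        self.threads = threads
        self.fallback = fallback    # graceful-failure lines

    def handle(self, utterance):
        for t in self.threads:
            t.listen(utterance)
        best = max(self.threads, key=lambda t: t.belief)
        if best.belief < 0.2:       # nobody understood; fail gracefully
            return self.fallback.pop(0) if self.fallback else "I don't know what to say."
        return best.respond()

# toy content in the spirit of The Breakup Conversation
threads = [
    Thread("its-over", {"over", "done", "breaking", "up", "leave"},
           ["You can't mean that.", "But... why?"]),
    Thread("still-love", {"love", "feelings", "care", "happy"},
           ["I still love you, that's why!", "Don't you remember how happy we were?"]),
]
convo = Conversation(threads, ["I... need a minute.", "Can we talk about something else?"])

print(convo.handle("I think we should break up, it's over"))
print(convo.handle("what is your favorite color"))   # no thread understands this one
```

The point of the toy is only that the conversational “state” lives in the per-thread beliefs rather than in any single pointer into a script, which is the property described above.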

Breaking up may be hard to do, but at least it’s becoming computationally inexpensive.

5 Responses to “Breaking Up, Broken Down”


  1. Dirk Scheuring Says:

    Although his paper doesn’t say so, I think that what Rob Zubek does with “The Breakup Conversation” is to frame a dialogue between human and computer as a goal-oriented story, sporting a “coherent temporal structure” – a plot – and two characters who, in terms of their dramatic “weight”, are essentially equal, thus conceptually overcoming the standard (and, IMHO, very limiting) PC/NPC dichotomy. I think that this is a very fruitful direction to take.

    What I find particularly clever is how he associates the story goal with the character played by the human, and thus reverses the drama dynamics of Turing’s original “Imitation Game”: Instead of the computer having to convince the human (that a third party is lying), the human has to convince the computer (that the relationship is over). This makes it much easier for the computer to motivate its own (possibly repetitive) behavior and answer “why”-questions: “Because I still love you, that’s why!” The one weak spot in this concept is probably encountered by players exhibiting the “harassment behavior” often found among chatbot clients – unless the computer character is crafted to act in extraordinarily masochistic ways, it might be difficult to use a “But I still do love you!” strategy and achieve suspension of disbelief beyond the third major insult.

    Anyway, I think that this work definitely moves things forward, and demonstrates some of the opportunities of a “story-based” (or “coherent temporal structure”-based) approach to human-computer dialogue. Handing the conversational goal to the human – and thereby making the computer into the Change Character of the story – is (from the POV of the dramatist) not a universally applicable strategy, but it definitely has its uses, especially, I think, in a larger (game) context, and in using it, Rob is cutting some corners in an elegant way. I had already gained a lot from reading his earlier publications – which I also recommended when the subject of NPCs came up on the Alicebot list a while ago – but this time, he’s more inspiring than ever.

  2. Rob Says:

    Thanks for the great comments! :) Especially about the description of the explicit change in player-character dynamics. I suppose it happened more from an intuitive dissatisfaction with existing approaches than from any kind of conscious decision. :) I was trying to avoid the common problem where the player has no investment in the interaction, no reason to work with what the character produces. Putting the player at the wheel and in medias res is a great way of setting the player’s mindset (a lesson learned from Facade :) – and then the system must try to support it all the way through.

    But a game that could actually involve the player with the characters ends up being quite different from games-as-we-know-them-right-now. And it results in a player experience that’s vastly different from what most game players expect – the actions are less clear, the world state space is deeper and murkier, and players ideally need to put themselves in the frame of mind of a genuine participant in order to really understand what’s going on. I hope this won’t pose too much of a challenge in presenting these kinds of works to traditional gaming audiences…

    By the way, Andrew – I love your final sentence. Can I steal it? ;)

  3. Dirk Scheuring Says:

    Rob, your doubts about game designs that require the player to more or less consciously enter a specific “frame of mind” before she plays are very understandable. I also think it won’t work that way. I think that the extra mental step has to be taken by the game designer, not the audience. And I think that the extra mental step is to design for the lack of the “right frame of mind” on the part of the player.

    To me, it simply is part of the dramatic conflict: There’s one character who won’t play along, and another character who has to make him change his mind. I think that writers of interactive drama have to learn to ask the same questions that ‘traditional’ dramatic writers ask themselves: Whose story is it? What are the goals – those of the characters as individuals, the goal of their conflict, and the overall goal of the story? Who is the character that has to change in order for the overall story goal to be reached? This allows for some conclusions that I believe are quite novel in the context of interaction design: it might turn out that it’s helpful for some scenarios to regard the character enacted by the computer as the main character of the story/interaction. And the (initial) goal of the character enacted by the human might be to just break the game. Then the goal of the conflict would be to get that character to change, in order for all the characters to be able to reach the overall story goal. And that’s what the design should be doing.

    As an example, let’s twist your “Breakup” scenario one more time. Let’s assume the goal of the story/game is for the player to come into an inheritance, and one condition for reaching the goal is that she’s married to a certain computer character. Now the computer character declares the relationship to be over, and the human has to win it back… immediately you place the player in a position where insults won’t help at all. In short, the dramatics should be stacked so that destructive behavior results in loss for the player – and not for arbitrary reasons, but as an integral, logical part of the story’s argument.

    It has rightfully – I think – been argued that artists should be programmers. But that coin does have a flipside: While I was educating myself about computer programming over the last few years, I often had the impression that some programmers trying to design interactive characters assume that they already know everything there is to know about the development of (fictional) characters, about dialogue writing, character motivation, conflict – in fact, they seem to think that many things ‘traditional’ writers usually spend a lot of time and effort getting good at somehow come naturally to programmers. Well, this is not the case. Just as an artist who wants to get into interactive drama needs to learn basic stuff such as that one man’s constant is another man’s variable (and why this matters in terms of program design), programmers who want to get into it need to learn basic dramatist stuff, like the fact that the ‘protagonist’ of a story isn’t necessarily its ‘main character’ (and why this matters in terms of story design). If true collaboration between artists and programmers is a desirable goal, then I’m pretty much convinced that the learning should be mutual.

  4. Grand Text Auto » Game Slash AI Says:

    […] n excellent dissertation at Northwestern (more on that in a future post; see an older post here) and has just joined Maxis. Paul was an AI developer for Metroid Prime, […]

  5. Grand Text Auto » Ian Horswill Blogging! Says:

    […] inment lab whose members included Robin Hunicke (now MySims design lead) and Rob Zubek (of Breakup Conversation fame), and administrator of a progressive Animate Arts c […]
