December 16, 2007

Façade, Petz, and The Expressivator

by Noah Wardrip-Fruin · 6:35 pm

While researching my forthcoming book (about which more news soon) I’ve posted selections from correspondence about a number of influential digital fiction systems, including James Meehan’s Tale-Spin (1 2), Scott Turner’s Minstrel (1 2), and Michael Lebowitz’s Universe (1). Now I’m pleased to continue the series with some information from GTxA’s own Andrew and Michael. I emailed them to learn more about the relationship between Façade and two earlier efforts: PF Magic’s “Petz” series (on which Andrew worked) and Phoebe Sengers’s The Expressivator (created at CMU while Michael was there).

Andrew’s reply appears first, with sections of my original email as blockquotes. Michael’s is next. While the tone of all this is, obviously, informal, I think there’s valuable information in their replies that I want to make available to others interested in these topics.


Andrew Stern’s reply

Anyway, it looks to me like ABL’s “reflection” extensions are similar to Phoebe’s meta-level controls. Is that roughly right? Whereas Phoebe’s concern about observable, motivated transitions between behaviors was something that you treated more on an authoring level (with beat goals that involve transitioning in, for example) rather than on an architectural level?

It’s been a while since I read Phoebe’s thesis, so I don’t exactly remember her meta-level controls; but I do remember her discussion of transitions, and you’re right, in Facade we achieved that with transition-in and transition-out beat goals, as well as some specialized global mix-ins, such as “metacommentary”, e.g. a sarcastic reading of “Well, we’re all friends here…”, by the less-favored character after a particularly tense beat, before the next beat begins. Also there were lots of transitions between beat goals in the Therapy game (second half of the drama), when Grace or Trip would say things like, “No, let’s talk about Grace now”.
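To make the shape of this concrete, here is a rough sketch in Python (not Facade’s actual ABL code; the beat and goal names are invented for illustration):

    # Illustrative sketch only; Facade's beats are authored in ABL, not Python,
    # and these goal names are invented.
    class Beat:
        def __init__(self, name, transition_in, body_goals, transition_out):
            self.name = name
            self.goals = [transition_in, *body_goals, transition_out]

        def run(self, perform):
            for goal in self.goals:
                perform(goal)  # each goal expands into dialogue and animation

    def metacommentary_mixin(tension, perform):
        # A global mix-in that can fire between beats, e.g. a sarcastic
        # "Well, we're all friends here..." from the less-favored character
        # after a particularly tense beat.
        if tension > 0.8:
            perform("sarcastic_friends_line")

    decorating_beat = Beat(
        "ArgueOverDecorating",
        transition_in="raise_decorating_topic",
        body_goals=["trip_defends_purchase", "grace_objects", "address_player"],
        transition_out="wrap_up_and_segue",
    )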

Would you say, also, that setting the non-interactive early part of a behavior was in part to avoid the dithering that is part of Phoebe’s schizophrenia critique of the observed actions of behavior-based agents?

Our goal (not fully achieved) was to never have non-interactive parts of behaviors. But only some of the time (perhaps 25% of the time) are behaviors fully interruptible. Other times (perhaps 75% of the time), we force the first few words of a beat goal to be spoken, even if the player tries to interrupt them. This can result in a 2-3 second delay in responding to the player, if the player speaks at the beginning of that line. The primary reason for this was to force the drama to progress forward, at least a little bit, on a regular basis.
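A minimal sketch of that interruption policy, assuming a per-word speech loop (my illustration, not Facade code; the word count and timing are made up):

    import time

    def speak_line(words, interruptible, poll_player_input, say_word):
        # Some beat goals are fully interruptible; for the rest, the first few
        # words are forced out before player input is handled, which can delay
        # the response by a couple of seconds.
        for i, word in enumerate(words):
            if (interruptible or i >= 3) and poll_player_input():
                return "interrupted"
            say_word(word)
            time.sleep(0.3)  # rough per-word pacing
        return "completed"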

I’m not sure it really helps solve the schizophrenia problem; the characters can still be directed by the player to talk about different things quite often.

Rather, I think we alleviated the schizophrenia problem by making the current pool of responses at any one time all work towards similar narrative goals, in various ways. For example, if Grace and Trip are fighting over her decorating (which at a more abstract level is about the tension of maintaining a facade), and you bring up the topic of sex, they’ll perform some global mix-in (that further reveals tension between them). They’ve switched topics, but it all goes towards the same narrative goal: perform and reveal the tension between them.
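As a toy illustration of that pooling idea (my own sketch; the topic and mix-in names are invented, not Facade’s actual content):

    # Whatever topic the player raises, the response is drawn from a pool of
    # mix-ins that are all authored to serve the beat's current narrative goal,
    # here "reveal the tension between Grace and Trip."
    GLOBAL_MIXINS = {
        "sex": "tense_deflection_about_sex",
        "marriage": "barbed_exchange_about_marriage",
        "drinks": "trip_pushes_drinks_grace_bristles",
    }

    def respond_to_topic(topic, current_beat_goal="reveal_tension"):
        # A topic switch still advances the same narrative goal, so it does
        # not read as schizophrenic behavior.
        mixin = GLOBAL_MIXINS.get(topic)
        return mixin if mixin else "continue_" + current_beat_goal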

Meanwhile, it sounds like the design goals for Petz and Babyz were all about observable, motivated behavior – but without the “needs meters” of The Sims or the architectural support for transitions of the Expressivator. Andrew, do you have some pithy comment about this somewhere that I could quote? Also, is there anything architectural from the Petz or Babyz projects that ended up in Facade, or is it more the design attitude?

Well, the concept of transitions between behaviors is an inherent requirement for fluidity and lifelikeness for any character, I think. There were Petz/Babyz versions of those transitions — little behaviors like “looking around for what to do next” or “looking to the player for reassurance, for a pet/tickle” after finishing one behavior (e.g. eating) and going on to another behavior (e.g. playing). There were little ad hoc behaviors like that scattered throughout Petz and Babyz. My “idea” to make such transition behaviors wasn’t motivated from a theoretical standpoint, but simply from the perspective of what obviously seems natural and required for lifelike behavior, discovered by design iteration and my own playtesting.

Regarding architectural similarities between Petz/Babyz and Facade, in 1994-1995 I had read the Oz Project papers, shortly before starting work on the Catz AI in early 1996, which evolved into the Petz II AI in 1997, Petz III in 1998 and Babyz in 1999 (Ben Resner did the original Dogz 1 AI, in 1995, which was a basic state machine). I was influenced by the goal-and-plan architecture of Bryan’s Hap, and Scott’s Em emotion system, and implemented my own version of those. Hence my excitement to meet Bryan at the 1997 Socially Intelligent Agents symposium.

By Petz III, my custom goal-and-plan architecture was relatively sophisticated; as needed, I coded what could be called meta-behaviors — behaviors that activated or deactivated other behaviors. But these were relatively simple and ad hoc, compared to ABL’s elegant implementation of it. By Petz III the architecture was starting to get messy, since I was implementing goals and plans in C++, without the benefit of the cleaner syntax and organization of Hap.
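As a toy rendering of that idea of a meta-behavior (a behavior whose only job is to switch other behaviors on and off), something like the following; the original was hand-rolled C++, and these pet-behavior names are invented:

    class BehaviorSet:
        def __init__(self):
            self.active = {}

        def activate(self, name, behavior):
            self.active[name] = behavior

        def deactivate(self, name):
            self.active.pop(name, None)

    def sleepiness_meta_behavior(pet_state, behaviors):
        # A meta-behavior: when the pet gets tired, suppress play behaviors
        # and enable napping, rather than performing any action itself.
        if pet_state["energy"] < 0.2:
            behaviors.deactivate("chase_ball")
            behaviors.activate("find_nap_spot", lambda: print("curling up..."))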

My 1999 Babyz paper describes or alludes to some of these mechanisms.

While I was doing simple and ad hoc versions of what we’d eventually be doing in Facade, these were of course all in the domain of language-free characters (animals and babies), whose actions were more abstract than Grace and Trip’s, so the cause-and-effect chains could be much looser. So while I was able to apply my experience building Petz and Babyz directly to architecting and authoring Facade with Michael, our ABL idioms for Facade were significantly more complex.


Michael Mateas’s reply

Regarding the reflection support in ABL, we were directly influenced by her work in the Expressivator. In her work, she implemented meta-level controls in an ad hoc way; it wasn’t built into the language. The meta-behavior support in ABL sought to extend the work she’d done in meta-level controls, and build it directly into ABL. The most powerful idiom for meta-behaviors that we came up with is the factoring of beat logic into canonical beat-goal sequences and handlers (where the handlers are all meta-behaviors). This particular idiom for meta-controls is unique to Facade; the only conversational Oz work that was done was Scott Neal Reilly’s work in The Office and The Playground, but he didn’t use meta-behaviors, and thus didn’t factor his conversational interaction this way. But handlers certainly share some overlap with Phoebe’s work, in that she used meta-behaviors to sequence transition behaviors, and part of what handlers do is sequence transitions.
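Schematically, that idiom looks something like this (a sketch of the shape of it, not ABL or Facade code):

    # A canonical sequence of beat goals, plus handlers (meta-behaviors) that
    # watch for player input, splice in a reaction, and let the sequence resume.
    def run_beat(beat_goals, handlers, get_player_event, perform):
        i = 0
        while i < len(beat_goals):
            event = get_player_event()  # None if the player is quiet
            handled = False
            if event is not None:
                for handler in handlers:
                    if handler.matches(event):
                        handler.react(perform)  # e.g. fire a mix-in
                        handled = True
                        break
            if not handled:
                perform(beat_goals[i])
                i += 1
            # if handled, the current beat goal is re-attempted on the next pass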

As Andrew says, we use transition behaviors in our beat structure. I’m sure our thinking on this was influenced by us both having read Phoebe’s dissertation.

Joint behaviors help alleviate schizophrenia by making it easy to provide much tighter behavior coupling across agents. This tighter coupling makes it easy to add “signaling” behaviors that communicate intentions between agents. So, while joint behaviors don’t in themselves automatically address the “schizophrenia problem” in multi-agent interactions, they enable idioms that allow you to easily address it.
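A rough sketch of what that coupling buys you (the behavior and line here are invented for illustration; this is not ABL):

    # A "joint" behavior: both agents commit to the goal together, which makes
    # it easy to author signaling actions that telegraph their coordination.
    def joint_confront_player(grace, trip, perform):
        for agent in (grace, trip):
            agent.commit("confront_player")  # both enter the behavior together
        # Signaling: Trip glances at Grace before speaking, so the shift reads
        # as coordinated rather than arbitrary.
        perform(trip, "glance_at", grace)
        perform(trip, "say", "Okay, we need to talk about this.")
        perform(grace, "nod")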

The issue of the player being able to make the characters talk about lots of topics in a short period of time isn’t schizophrenic by Phoebe’s definition. By “schizophrenia”, she’s referring to agents switching between behaviors in a way that has an internal logic for the agent, but is completely inscrutable to an outside observer. When the player makes the characters in Facade switch topics, it makes sense to the player that the characters are switching topics (because the player is the person who brought the topic up); schizophrenia would happen if the characters then abruptly switched back to their original topic, or abruptly switched between topics, with no transition (leaving the player going “huh?”, even though, in the internal state of the characters, they are popping a conversation stack or something). The combination of the handlers and the beat goal behaviors (the beat goal behaviors have retry logic built into them) addresses this, and thus hopefully signals the characters’ inner life to the player appropriately.
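A minimal sketch of that retry idea (the re-entry notion below is my own illustration, not an actual Facade mechanism or line):

    # If a beat goal's line is interrupted and a handler has dealt with the
    # interruption, the goal is re-attempted, ideally with a short re-entry
    # phrase so the resumption stays legible to the player.
    def perform_beat_goal(line, speak, max_retries=2):
        attempts = 0
        while attempts <= max_retries:
            result = speak(line, reentry=(attempts > 0))  # e.g. "Anyway, as I was saying..."
            if result == "completed":
                return True
            attempts += 1  # a handler has already responded to the player
        return False  # give up and let the beat move on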

It’s interesting that Phoebe was driven to focus on how to signal the inner life of characters through a critical-theoretic analysis of agent architectures. I find this to be interesting work in its own right, because I like critical reads of AI technology, but this particular problem is bread-and-butter to designers and artists. As Andrew points out, he added these signaling behaviors to the Petz because, of course, as a designer, it’s all about making your interactions readable. And this is the crux of Expressive AI – that by taking design and art seriously, all kinds of new requirements and research directions fall out of it, like making sure behavior communicates the intentions and inner life of a character, making sure that multiple characters coordinate on the accomplishment of author-level goals (distinct from character-level goals), etc.
