August 28, 2005

Emotions for believable agents

by Michael Mateas, 5:01 pm

The call for papers is up for ACE 2006 – Agent Construction and Emotions:
Modeling the Cognitive Antecedents and Consequences of Emotion. This is the latest in a series of workshops on modeling emotions in autonomous agents.

While this topic is obviously relevant to anyone building autonomous characters, there’s an interesting tension between functionalist models of emotion, which are concerned with how emotion serves as an internal resource for guiding decision making (emotions make us more rational), and computational models of emotion for believable characters, which must respond to and convey emotion. For autonomous characters, AI research tends to take the stance that if we build computational models that capture the way human beings do something, then building a believable, autonomous character is merely a matter of “applying” the model to the character. I’m deeply skeptical of this position. The whole believable agents turn, as first articulated by Joe Bates and the Oz Project, was about recognizing that the field of believable agents has its own first-class research agenda: characters are not realistic human beings but artistic abstractions, and the technical research agenda then becomes focused on AI architectures that support making these artistic abstractions (characters) autonomous.

The tension can be seen clearly in this call: the main text focuses entirely on cognitive science questions in the modeling of emotion, while expression is buried in a single topic bullet, “The use of computational models or methods to evoke emotion in human subjects”. This is not to say that cognitive scientists and Expressive AI researchers shouldn’t talk to each other, just that the research goals of Expressive AI shouldn’t be reduced to an application of the research results of cognitive science.

While we’re on the topic of emotion modeling, it’s interesting that all of the emotion models I’m familiar with ultimately represent emotion as a vector of floating point numbers, where each element of the vector corresponds to a distinct emotion and its value corresponds to how strongly the agent is feeling that emotion (e.g. the agent is angry with degree 4.2 and sad with degree 3.1). While there are obviously many details regarding how you update these values, how they decay over time, and how actions taken by the agent are conditioned on them, emotion is still treated as a collection of simple, local internal state used to conditionalize decision making. I’m interested in architectural conceptions of emotion that take a much more organic, holistic view of emotional state, not reducing it to a state vector isolated within a special “emotion” box. Aaron Sloman has written extensively on architectural models of emotion and has argued for just such an organic notion of affect, though how you would actually implement such an architecture remains unclear.
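
To make the contrast concrete, here is a minimal Python sketch of the state-vector style of model described above: a flat collection of floating point intensities, a simple linear decay rule, and decision making conditioned on threshold tests over those values. All of the names (EmotionVector, appraise, choose_behavior), the emotion set, and the decay rate are hypothetical, invented for illustration rather than taken from any particular system.

    import time

    class EmotionVector:
        """Emotion as a flat vector of per-emotion intensities (hypothetical sketch)."""
        EMOTIONS = ("anger", "sadness", "joy", "fear")  # assumed emotion set
        DECAY_RATE = 0.1                                # assumed intensity lost per second

        def __init__(self):
            self.intensity = {e: 0.0 for e in self.EMOTIONS}
            self._last_update = time.time()

        def appraise(self, emotion, delta):
            # Bump (or dampen) one emotion's intensity when an event is appraised.
            self.decay()
            self.intensity[emotion] = max(0.0, self.intensity[emotion] + delta)

        def decay(self):
            # Linear decay toward zero as time passes.
            now = time.time()
            elapsed = now - self._last_update
            self._last_update = now
            for e in self.EMOTIONS:
                self.intensity[e] = max(0.0, self.intensity[e] - self.DECAY_RATE * elapsed)

    def choose_behavior(emotions):
        # Decision making conditioned on the local emotion state via simple thresholds.
        emotions.decay()
        if emotions.intensity["anger"] > 4.0:
            return "storm_off"
        if emotions.intensity["sadness"] > 3.0:
            return "sulk"
        return "idle_chatter"

    # Example from the text: the agent is angry with degree 4.2 and sad with degree 3.1.
    agent = EmotionVector()
    agent.appraise("anger", 4.2)
    agent.appraise("sadness", 3.1)
    print(choose_behavior(agent))  # "storm_off" (decay is negligible over these few calls)

The point of the sketch is that everything the agent “feels” lives in that one dictionary of floats, and the rest of the architecture only ever consults it through threshold tests; that is exactly the isolated “emotion box” that a more organic, architectural account of affect would try to dissolve.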