June 7, 2005

Thoughts on AIIDE

by Michael Mateas · 5:09 pm

Andrew did a great job posting his talk notes for AIIDE. In this post I’ll describe some of my reactions to the talks and to the conversations I had at AIIDE.

Chris Crawford

Andrew and I are certainly in agreement with Chris about the need to increase verb counts in order to achieve interactive story. But Chris strongly wants to avoid natural language and instead move to a custom logographic language. Further, he wants to use parser technology to provide constraints as the player writes sentences in the custom language; I imagine something like pop-up menus. I understand the impulse to avoid natural language (it seems like an impossible, AI-complete problem) and to prevent the player from forming nonsensical sentences, but I worry that:
1) logographic languages will feel unnatural, and
2) a pre-parse interface that constrains which symbols you can use, based on the symbols you’ve used so far, will prevent players from speaking in their own style (a toy sketch of such an interface follows below).
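To make the worry concrete, here is a minimal sketch of such a pre-parse interface, with an invented vocabulary and sentence shape. This is my own illustration, not Chris’s actual Siboot design:

```python
# A toy "pre-parse" interface for an invented logographic language: at each
# step the player may only choose from the symbols the grammar allows next,
# so nonsensical sentences are impossible to form. The vocabulary and
# sentence shape are made up for illustration.

GRAMMAR = {
    "NEED_SUBJECT": ["I"],
    "NEED_VERB":    ["greet", "flirt-with", "criticize", "threaten"],
    "NEED_OBJECT":  ["Trip", "Grace"],
}
NEXT = {"NEED_SUBJECT": "NEED_VERB",
        "NEED_VERB":    "NEED_OBJECT",
        "NEED_OBJECT":  "DONE"}

def legal_next(state):
    """What the pop-up menu would offer at this point in the sentence."""
    return GRAMMAR.get(state, [])

def build_sentence(choices):
    state, sentence = "NEED_SUBJECT", []
    for choice in choices:
        if choice not in legal_next(state):
            raise ValueError(f"{choice!r} is not a legal symbol here")
        sentence.append(choice)
        state = NEXT[state]
    return " ".join(sentence)

print(legal_next("NEED_VERB"))                       # the menu after "I"
print(build_sentence(["I", "flirt-with", "Grace"]))  # 'I flirt-with Grace'
```

Note that in this scheme every flirtation comes out as exactly the same token string, which is precisely the style worry in point 2 above.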

Chris says that, based on his own experiences with Siboot, players were able to pick up such languages right away; it remains an empirical question how natural or unnatural people will find them, especially as the vocabulary size grows.

As I’ve argued before, part of the allure of natural language interfaces is that people can say things in their own words, increasing the agency they experience. If a logographic language provided only one way to express each discrete verb (e.g. “flirt with Trip”, “criticize Grace”), this sense of speaking in your own style would be lost.
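To make the contrast concrete, here is a minimal sketch of the kind of mapping a natural language interface can perform, where many surface phrasings collapse onto one discrete act. The patterns and act names are invented for illustration, not taken from Facade:

```python
import re

# Many different surface phrasings map onto one discrete act, so players
# keep their own voice while the system still sees a small verb set.
# The patterns and act names here are invented for illustration.

RULES = [
    (re.compile(r"\b(cute|lovely|gorgeous|charming)\b", re.I), "FLIRT"),
    (re.compile(r"\b(liar|selfish|your fault)\b", re.I),       "CRITICIZE"),
    (re.compile(r"\b(calm down|i understand|it will be ok)\b", re.I),
                                                               "COMFORT"),
]

def discourse_act(utterance: str) -> str:
    for pattern, act in RULES:
        if pattern.search(utterance):
            return act
    return "UNRECOGNIZED"   # a real system needs a graceful fallback here

print(discourse_act("Wow Grace, you look gorgeous tonight"))  # FLIRT
print(discourse_act("Trip, you're such a liar"))              # CRITICIZE
```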

An advantage of logographic languages is that, internally, the characters can speak to each other in exactly the same formalism that the player uses to speak to them. This symmetry allows the AI characters to interact and coordinate with each other in exactly the same way they do with the player. On the other hand, the player is different from the rest of the characters; the whole world is organized around creating an experience for the player. So perhaps the AI characters should be able to coordinate more tightly with each other, read each other’s minds, and so on, in order to create an experience for the player. We certainly found this useful in Facade.
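To make that asymmetry concrete, here is a toy sketch, my own illustration rather than Facade’s actual architecture: the player and the characters exchange the same act formalism, but the characters also share a privileged coordinator that can read their private state:

```python
from dataclasses import dataclass

# Everyone exchanges the same Act formalism, but the AI characters also
# share a director that can read their internal state and coordinate them
# toward a dramatic goal. An illustrative design, not Facade's actual one.

@dataclass
class Act:
    speaker: str
    verb: str
    target: str

@dataclass
class Character:
    name: str
    mood: float = 0.0          # private state the player can never read

    def receive(self, act: Act):
        if act.verb == "criticize" and act.target == self.name:
            self.mood -= 1.0

class Director:
    """Privileged coordinator: unlike the player, it can read minds."""
    def __init__(self, cast):
        self.cast = cast

    def coordinate(self):
        # e.g. if anyone's mood has soured, cue everyone to escalate
        if any(c.mood < 0 for c in self.cast.values()):
            return [Act(c.name, "escalate-argument", "player")
                    for c in self.cast.values()]
        return []

cast = {"Trip": Character("Trip"), "Grace": Character("Grace")}
director = Director(cast)

player_act = Act("player", "criticize", "Grace")  # same formalism as NPCs
cast["Grace"].receive(player_act)
print([a.verb for a in director.coordinate()])    # ['escalate-argument', ...]
```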

As a design experiment I’m very curious to play an interactive story that uses a logographic language for communication. But the jury is definitely still out as to whether natural language or constrained artificial languages are the way to go.

Will Wright

Will points out that AI isn’t just a single technology or design, but rather a bag of tricks. This is the prevalent view within academia as well. The problem with this definition, however, is that it fails to describe what AI is a bag of tricks for. There are lots of tricks and techniques one can use while writing programs; only a subset of these tricks and techniques are AI. AI is about creating behavior that can be read intentionally, that is, behavior that can be interpreted by a viewer (player) as if it were being produced by an intelligent entity. Often the best way to do this is to build a simple simulation of the intelligent entity; that way there are fairly obvious connections between chunks of code (e.g. a chunk of code called a “plan”) and the perception of intelligence in the viewer (e.g. “that monster looks like it has a plan”). Without those connections, it’s difficult for an AI artist to craft the code in such a way as to produce the interpretations she wants to produce in the player’s head.
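As a concrete illustration of that connection, here is a minimal, entirely invented sketch of a chunk of code literally called a plan; a player watching the resulting behavior can read the intention straight back off the code:

```python
from collections import deque

# The monster's behavior is generated by an explicit Plan structure, so
# when a player says "that monster looks like it has a plan," there is a
# direct mapping from that interpretation to a piece of code.
# The names and world representation are illustrative.

class Plan:
    """An explicit, inspectable chunk of code called a 'plan'."""
    def __init__(self, goal, steps):
        self.goal = goal
        self.steps = deque(steps)  # each step: (name, is_done predicate)

    def current_step(self, world):
        while self.steps and self.steps[0][1](world):
            self.steps.popleft()   # drop completed steps
        return self.steps[0][0] if self.steps else "idle"

# The monster's ambush plan, readable straight off the code:
ambush = Plan(
    goal="ambush the player",
    steps=[
        ("move to doorway", lambda w: w["monster_at_door"]),
        ("wait in shadows", lambda w: w["player_nearby"]),
        ("attack",          lambda w: False),
    ],
)

world = {"monster_at_door": False, "player_nearby": False}
print(ambush.current_step(world))   # 'move to doorway'
world["monster_at_door"] = True
print(ambush.current_step(world))   # 'wait in shadows'
```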

He specifically mentioned interactive drama as a difficult design problem. Drama consists of complex causal chains; one break in the chain can blow your cover. But creating and maintaining those complex causal chains is crucial for giving the player global agency, a sense that what they do now has complex and evolving ramifications in the future. Interactive drama is hard precisely because it takes the modeling and simulation of open-world games, applies it to character psychology instead of character physics, and requires that the overall experience satisfy complex global constraints.
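One way to see what those global constraints mean in code: locally each event may be fine, but the sequence as a whole must also satisfy global properties, such as an unbroken causal chain and rising tension. The event format and properties below are invented for illustration:

```python
# Each event: (name, cause, tension), where `cause` names an earlier event.
# The story content and scoring are invented for illustration.
STORY = [
    ("accusation", None,         2),
    ("denial",     "accusation", 3),
    ("revelation", "denial",     5),
    ("breakup",    "revelation", 8),
]

def causal_chain_intact(events):
    seen = set()
    for name, cause, _ in events:
        if cause is not None and cause not in seen:
            return False        # one broken link blows your cover
        seen.add(name)
    return True

def tension_rises(events):
    tensions = [t for _, _, t in events]
    return all(a < b for a, b in zip(tensions, tensions[1:]))

print(causal_chain_intact(STORY) and tension_rises(STORY))  # True
```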

During one of the breaks I had a nice conversation with Will about prototyping. Will’s design approach is to have his team write hundreds of throw-away prototypes, each of which abstracts some aspect of the game. The prototypes themselves might just look like squares and circles with numbers next to them, plus a few sliders or menu items to select. Having seen a number of these prototypes, I noticed that they were all about either gameplay or procedural graphics. A gameplay prototype explores some small verb space by modeling how the verbs directly impact abstract, score-like state, ignoring all the intermediate detail that would exist between the verbs and that state (a toy sketch of this kind of prototype appears below). A procedural graphics prototype explores how you might automatically generate curvy roads or character flow behavior in a city, without worrying about all the work it takes to make the graphics pretty or to hook the inputs of the model to the rest of the game state.

But none of the prototypes I’ve seen are what I’d call AI prototypes: something that abstracts away a bunch of code detail while still letting you explore an AI approach (for example, a particular architecture for character AI). I hypothesized that prototyping doesn’t work for AI: all the code detail is the AI; there isn’t a useful level of abstraction that would give you real design feedback. In Facade, Andrew and I didn’t use any prototypes; our design work was all done in the context of building the complete system.

So I posed this hypothesis to Will to see what he thought. He completely disagreed, politely saying that if you can’t build a prototype, then you don’t understand what you’re trying to accomplish. I briefly described a pet project I’m just starting in automatic game generation (for simple games) and asked how you could prototype that. He suggested building a prototype that spits out games as abstract features (not a playable experience) and exploring the design space at that level first.

After talking for a while, I figured out that one of the keys to making prototypes work is imagination: you have to be able to look at the very abstract output of your prototype and confidently imagine what that output means in the fully-fleshed-out, completed experience. For AI prototypes this means developing a good architectural imagination, that is, being able to see the implications for the full architecture from abstract little pieces of it, and being able to imagine what that architecture would be like to author within, and what kinds of player experiences it affords. Will has a ton of experience designing games, and so can perform this act of imagination with high confidence. In my own work I’m going to start using rapid AI prototypes and see if I can develop this skill.
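To make Will’s gameplay-prototype idea concrete, here is a toy sketch of one: a tiny verb space acting directly on abstract, score-like state, with everything between the verbs and the state abstracted away. The verbs and numbers are invented:

```python
import random

# A gameplay prototype in the sense described above: a small verb space
# acting directly on abstract, score-like state, with all intermediate
# detail (graphics, animation, pathfinding) abstracted away.

STATE = {"wealth": 10, "reputation": 5}

VERBS = {
    "trade":   lambda s: s.update(wealth=s["wealth"] + 2),
    "donate":  lambda s: s.update(wealth=s["wealth"] - 3,
                                  reputation=s["reputation"] + 2),
    "swindle": lambda s: s.update(wealth=s["wealth"] + 5,
                                  reputation=s["reputation"] - 4),
}

# A few random playthroughs answer balance questions in minutes:
# does any verb dominate? can reputation recover from early swindling?
for trial in range(3):
    state = dict(STATE)
    for _ in range(10):
        VERBS[random.choice(list(VERBS))](state)
    print(trial, state)
```

The whole point is how cheap exploration becomes at this level of abstraction, which is also why the prototype can be thrown away afterward.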

I’ll stop here. I’ll post more on AIIDE later.