July 16, 2004
TIDSE 2004 (part 2)
As promised, here is the long-delayed second half of my TIDSE 2004 trip report.
The second day opened with an invited talk from Ron Baecker, a computer graphics pioneer. He described a new initiative at the University of Toronto: Knowledge Media Design (KMD). Knowledge media are computational media that systematically embody knowledge in a way that encompasses data and process, as well as task space and interpersonal space (the social component of tasks and media). The focus is on systems that support human creativity and control, rather than on systems that autonomously generate media artifacts. The best part of the talk was some of the videos he showed of his early work. The first video showed the Genesys animation system that he built in 1966 at MIT’s Lincoln Laboratory. Genesys allowed users to construct animations by tracing animation paths on a screen. Interestingly, given the discussions on virtual humans at the conference, he did some work in the mid-60s on a system that supported the animation of stick figures. It turned out to be difficult to maintain the constraints between the various parts of the stick figure, so he “moved on to easier problems.” Some of his early projects involved looking at program code as a form of human communication, something I heartily agree with, given all the writing I’ve done on GrandTextAuto about programming as an expressive medium, why artists should program, and so forth. One of these projects explored the idea of a program book: if software is to truly have a long life, the code should be published as a designed book, with the full source code printed in such a way as to facilitate reading the principles, design decisions, issues, and so forth that are expressed in the code itself. He showed some pictures of The Eliza Book, one of the program books that they made. I’d love to leaf through this book!
In general his work has been about supporting human creativity rather than building systems that exhibit machine creativity. This distinction made me think about my own work in Expressive AI, where the AI architecture is not viewed as a substitute for human creativity, but rather as a medium in which authors can write creative works, which then function autonomously (exhibit machine creativity) for the audience that interacts with them. AI-based art and entertainment doesn’t have to force a binary decision between human authorship and autonomous generation.
After the invited talk, the first session focused on systems that support the authoring of interactive narratives.
Stéphane Donikian, Jean-Noël Portugal
Stéphane described DraMachina, a GUI authoring environment for authoring the logic of interactive narratives. DraMachina was developed in collaboration with Daesign, a game company that has been developing an interactive story-based game over the last four years (the game, sadly, has been canceled). The system allows you to specify characters in terms of traits and relationships, author dramatic units (story units) with preconditions and effects, and author dialogs using dialog graphs. The system includes a notion of character interaction via positive and negative strokes, where the idea of stroke comes from transactional analysis. This idea is similar to some of the discourse acts we use in Façade (e.g. ally, oppose, praise, criticize). The system outputs the authored structures in XML, which can then be interpreted by some real-time, runtime system. From the screen shots he showed, the system reminds me most of the Erasmatron, not in terms of the specific authorable entities, but in terms of having a large number of different, but interrelated graphical editors to support authoring the many different pieces of an interactive drama. An interesting non-programming approach to story authoring.
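Dramatic units with preconditions and effects are essentially STRIPS-style operators. As a minimal sketch of how such authored units might be selected and applied at runtime (all unit names and facts here are hypothetical, not DraMachina’s actual format):

```python
from dataclasses import dataclass

@dataclass
class StoryUnit:
    """A hypothetical dramatic unit: playable when its preconditions
    hold in the story state, and its effects update that state once it runs."""
    name: str
    preconditions: frozenset   # facts that must hold
    effects: dict              # fact -> True (add) / False (remove)

def applicable(units, state):
    """Return the units whose preconditions are all satisfied by the state."""
    return [u for u in units if u.preconditions <= state]

def apply_unit(unit, state):
    """Apply a unit's effects, producing the next story state."""
    new_state = set(state)
    for fact, add in unit.effects.items():
        (new_state.add if add else new_state.discard)(fact)
    return new_state

units = [
    StoryUnit("confrontation",
              frozenset({"rivalry_established"}),
              {"tension_high": True}),
    StoryUnit("reconciliation",
              frozenset({"tension_high"}),
              {"tension_high": False, "alliance_formed": True}),
]

state = {"rivalry_established"}
ready = applicable(units, state)     # only "confrontation" applies yet
state = apply_unit(ready[0], state)  # now tension_high holds
```

An authoring tool like this would presumably serialize such structures to XML for the runtime system, with the drama manager choosing among the applicable units at each point.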
Daniel F. Abawi, Silvan Reinhold, Ralf Dörner
Silvan described a toolkit for authoring non-linear, mixed reality story environments. The toolkit turned out to be a component-based framework supporting software engineering design patterns for building mixed-reality environments. “Story” was used in a fairly broad, undefined way.
Richard Wages, Benno Grützmacher and Stefan Conra
They described an authoring environment for story-based VR experiences. The narrative authoring is done by specifying story graphs that maintain additional state for further constraining transitions. The full suite of tools (from story authoring to supporting post-production editing of the VR experience) is motivated by an analysis of production processes in the movie industry. I first met Richard at Level Up last year, where he showed me a demo of the system. He gave me a copy of the story authoring tool to try at Georgia Tech. I’m curious to see how some of the students with non-programming backgrounds like it as an interactive narrative authoring environment. I’ll try it the next time I teach Interactive Narrative.
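A story graph whose transitions are constrained by additional state can be sketched in a few lines — a toy illustration under my own assumptions (scene names and guard flags are made up, not taken from their tool):

```python
# Hypothetical story graph: each edge carries an optional guard on extra
# state, so the same node can branch differently on different playthroughs.
GRAPH = {
    "intro":  [("met_guide", "tour"), (None, "wander")],
    "tour":   [(None, "finale")],
    "wander": [("found_key", "finale"), (None, "wander")],
    "finale": [],
}

def next_scene(current, state):
    """Follow the first outgoing edge whose guard is satisfied (None = always)."""
    for guard, target in GRAPH[current]:
        if guard is None or guard in state:
            return target
    return None

next_scene("intro", {"met_guide"})  # -> "tour"
next_scene("intro", set())          # -> "wander"
```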
Catherine Vaucelle, Glorianna Davenport
Catherine presented Textable Movie, a video database system that supports performative story-telling. The user types descriptions of actions, situations, locations etc. in a text-chat-like way; the system pulls up and plays video clips relevant (via simple keyword-based lookup) to what the user is typing. The clips change as the user continues typing, backspaces over already typed text, and so forth. It’s very responsive, and thus effective in a real-time performative context. They’ve done a series of international workshops where children create and annotate their own video databases and then play with other people’s databases.
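The simple keyword-based lookup can be approximated in a few lines — a sketch assuming a hand-annotated clip table (the clip names and keywords are invented, not from Textable Movie itself):

```python
# Hypothetical annotated clip database: clip file -> keyword annotations.
CLIPS = {
    "beach.mov":  {"sand", "ocean", "waves"},
    "market.mov": {"crowd", "vendors", "fruit"},
    "night.mov":  {"dark", "stars", "quiet"},
}

def matching_clips(typed_text):
    """Score each clip by how many of its keywords appear in the text
    typed so far; return the matching clips ranked best-first."""
    words = set(typed_text.lower().split())
    scored = [(len(kw & words), clip) for clip, kw in CLIPS.items()]
    return [clip for score, clip in sorted(scored, reverse=True) if score > 0]

# Rerun on every keystroke, so the playing clip tracks the evolving text:
matching_clips("we walked along the ocean at night")  # -> ["beach.mov"]
```

Rerunning the match on every keystroke (including backspaces) is what gives the real-time, performative feel described in the talk.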
The next session described mobile (e.g. PDA or cell phone-based) approaches to interactive narrative and games.
Valentina Nisi, Alison Wood, Glorianna Davenport, Ian Oakley
They presented Hopstory, a physically distributed interactive story about an accident at the Guinness factory. Visitors wander around a physical environment where they can “pick up” bits of the story by pressing an iButton (a hand-held interactive gizmo) against cat sculptures (corresponding to a cynical brewery cat) distributed throughout the environment. As the viewers exit the environment, they can view the story (as video) that they have constructed by their particular journey through the environment; the particular order and identity of cat sculptures they’ve interacted with affects the video they see. During the discussion several people brought up the problematic separation between the physical exploration in the environment and experiencing the constructed story.
Christian Geiger, Volker Paelke, Christian Reimann
Christian described some technology frameworks and prototypes developed for supporting mobile entertainment computing. Squipped is a Dragon’s Lair-like video adventure, intended to explore the use of high-quality video imagery on PDAs while minimizing the interaction complexity (he noted that some users complain that PDAs are physically awkward to interact with). MobEE is an authoring framework for adventure games that supports exchangeable representations of story content (to support multiple portable devices with different rendering capabilities) and context refresh, where the player, when beginning a new session, is reminded of what they’ve done in the game so far. The latter capability is intended to support the fact that players tend to play mobile games in short bursts (e.g. for 5 or 10 minutes while waiting for a bus), and so have a greater need to be reminded of the play context when they next pick up the game. Their final prototype makes use of the cameras found on many PDAs and cellphones to play simple augmented reality games. Their first game is AR-soccer; the player looks down at her feet using the mobile camera, and kicks around a virtual soccer ball superimposed on the real image.
The final session focused on educational applications of interactive narrative.
Leonie Schäfer, Agnes Stauber, Bozana Bokan
Leonie presented StoryNet, an educational game for teaching social skills. The player is presented with a work situation in which she must make social tradeoffs among three coworkers. The game makes use of a simple social model to represent the relationships between the three characters.
Peter Stephenson, Keiko Satoh, Audray Klos, Diane Kinloch, Emily Taylor, Cindy Chambers
Peter presented the educational game Inner Earth, which was developed for a science museum. The game teaches children about earth geology by providing them with a magic elevator to explore different strata where they perform various activities. The most interesting part of the talk was the discussion of game design patterns. Peter explicitly referenced various game design patterns during the discussion of his game design. For example, one of the activities in his game is an instance of the “Tamagotchi” pattern.
Oliver Schneider, Stefan Göbel, Christian Meyer zu Ermgassen
Oliver revisited the Geist project, this time describing a VR (as opposed to AR) version of Geist that makes use of some novel matte techniques.
In the evening of the second day, there was a party at the Cybernarium, a museum spin-off of the Center for Computer Graphics (where the conference was held) that showcases demos that have been developed over the years, allowing the general public to interact with them. The entertainment that night was a German Elvis impersonator. As someone who grew up in Nevada, where Elvis impersonators regularly perform in casinos, and in fact where you can be married by one, it was very funny to see an Elvis impersonator performing so far away from the kitschy American roots of Elvis fandom (for those interested, he was a young sexy Elvis, not an old fat Elvis).
The last day opened with two theory papers that generated much interest and discussion.
Craig A. Lindley
Craig, whom some of you may know from his Gamasutra article, as well as the work that he and his student Mirjam presented at DIGRA last November, presented a temporal model of games and narrative. The point of his model is to argue that, while the temporal structures of games and narrative are different, hybrids are possible, and thus narrative games are possible. The technical path towards achieving this (with which I entirely agree!) is to move towards more generative narrative: “This explains the perceived tension between narrative and game play and suggests strategies for overcoming this tension by developing game play mechanics that are fundamentally dramatic, in that their consequences do affect the higher level narrative patterns of the game … This suggests a strategy for achieving narrative game play by moving away from high level predefined narrative pathways at the model and generative levels towards a more detailed integration of principles for narrative generation with the mechanics of player interaction.”
Ulrike Spierling
Ulrike, one of the organizers of the conference, described her current work in using interactive digital storytelling in educational environments. She’s interested in providing tools to support educators in building real-time, 3D, narrative learning environments. Her theoretical model describes four levels of authoring and participant interaction: Story, Scene and Action, Character Conversation, and Actor and Avatar. She describes a spectrum on each level from predefined to autonomous, and begins to describe how choices on these four different spectra influence various aspects of participant agency and authorial control.
I missed the next paper session, instead having hallway conversations with folks.
The last session focused on technologies and design techniques for games.
Stefano Ferretti, Marco Roccetti, Stefano Cacciaguerra
They presented an event synchronization framework for multiplayer worlds where world state is distributed across multiple machines. They have developed techniques for determining when it is safe to drop event update messages without incurring inconsistent state across servers, thus minimizing network traffic.
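Their paper works out specific correctness conditions for this; as a toy illustration of the general principle only (an update can be dropped when a later update to the same state variable supersedes it, so the receiver’s final state is unchanged), with made-up event tuples:

```python
def prune_obsolete(queue):
    """Keep only the newest queued update per state variable: earlier
    updates to the same variable are superseded, so dropping them cannot
    leave the receiving server in a different final state."""
    latest = {}
    for seq, var, value in queue:          # (sequence number, variable, value)
        if var not in latest or seq > latest[var][0]:
            latest[var] = (seq, var, value)
    return sorted(latest.values())

queue = [
    (1, "avatar_pos", (0, 0)),
    (2, "score", 10),
    (3, "avatar_pos", (5, 2)),   # supersedes update 1
]
prune_obsolete(queue)  # updates 2 and 3 survive; update 1 is dropped
```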
Laurent Cozic, Stephen Boyd Davis, Huw Jones
Laurent described a camera control technique that borrows from the rhetoric of cinematography, particularly the use of cinematographic techniques to create suspense. He showed some nice clips of gameplay in a survival horror game demonstrating the effects produced by their suspense camera.
Kyle Kilbourn, Larisa Sitorus, Ken Zupan, Johnny Hey, Aurimas Gauziskas, Marcelle Stienstra, Martin Andresen
They presented the results of a student design project, the task being to design some kind of guide or directions for computer-based playground equipment (blocks that you can arrange in different patterns, hop around on, and that light up in different ways depending on both how you arrange them and how you hop around on them). Their solution was a popup book that, through the story, introduces the different games you can play with the blocks.
In addition to the paper sessions, several demos/artworks were shown throughout the conference. My favorites were Beyond Manzanar by Tamiko Thiel, and BirthData by Marlena Corcoran.
While I’ve seen Beyond Manzanar before, I enjoyed having an opportunity to hear Tamiko talk about it. Tamiko developed Beyond Manzanar to explore parallels between the imprisonment of Japanese Americans at the Manzanar Internment Camp in Eastern California during World War II, and Iranian Americans threatened with a similar fate during the 1979-’80 hostage crisis. She explores parallel issues of assimilation and distrust experienced by both Japanese and Iranian immigrants. The viewer moves through a virtual recreation of Manzanar, entering different spaces that combine Japanese and Iranian culture, and, through the journey, experiences the dual status of immigrant and enemy. Beyond Manzanar was finished before 9/11, but Tamiko commented that curators are expressing a renewed interest in the work because of its contemporary relevance…
Marlena showed documentation of the multimedia performance BirthData. Much of the text of the performance is a poem written in the form of a phone menu. She commented that, while there have been masterpieces composed for the violin, none have been written for the much-hated phone menu. One of the motivations for this work was to explore what it means to take the phone menu seriously as a form. Of course, in the performance, the phone menu is non-interactive; it is used as a poetic form. But the effect was quite compelling, leaving me wondering how well it would work as an interactive form. One of the students in my Interactive Narrative class this Spring wrote a phone menu story that is intended to entertain callers while they wait on hold when calling a customer service number. In his case he was not attempting poetry, but rather an action-thriller narrative. It would be interesting to attempt an interactive literary phone menu piece.