December 19, 2005
Nicolas Szilas, friend of GTxA and fellow interactive drama researcher, has written up a summary of last month’s Int’l Conference on Virtual Storytelling in Strasbourg, France. (Also see his summary from 2003.) Thanks again, Nicolas!
In Nicolas’ summary I’ve inserted a link to Ernest Adams’ ICVS keynote presentation, “Letting the Audience onto the Stage”. Ernest tells us he’s lately been questioning some of his long-held assumptions about agency and interactive story, which is evident in his slides.
This was the third edition of ICVS, after Avignon in 2001 and Toulouse in 2003, a conference focused on digital/virtual storytelling. ICVS is a computer-oriented conference with a flavour of the Humanities and Art.
Regarding the core issue of Interactive Drama — no dramatic change! The topic was discussed, but I was expecting more concrete solutions. Ernest Adams reminded us that Narrative and Interactivity are hard to combine (pdf), and did not omit discussion of Façade as one of the most advanced approaches. Ken Perlin advocated for a procedural approach to Interactive Narrative, but did not go beyond the stage of general advice and intuitive narratology. Sandy Louchart and Ruth Aylett presented an interesting comparison between Reality TV and emergent narrative, but how this will be effectively exploited in a computer system remains to be seen.
The graph-based model is still largely in use, often (but not always) inspired by the Propp Model: the Scenejo project (Erfurt University), the INSCAPE project (I am referring here to the Stefan Gobel et al. paper), the Knut Hartmann paper from the University of Magdeburg, the screenwriting approach from H.G. Struck… should we call this the German School of Interactive Narrative?!
Paradoxically, I found two theoretical papers more useful (from the Mediterranean school…). From Portugal, Nelson Zagalo (and colleagues) experimentally compared the emotions expressed in games with the emotions expressed in movies. From Italy, the paper from R. Damiano and colleagues describes a formal theory of drama; while the authors are well aware of research in Interactive Drama, they refrain from claiming that their model is a conceptual model of Interactive Drama that could be implemented. It is primarily a formal model of drama that might be useful for Interactive Drama. I would have been happy to find such a model when starting the IDtension project…
Speaking of IDtension, I presented there the integration of my narrative engine with the Embodied Conversational Agents developed at the LINC, Univ. of Paris 8. I presented a demo of it on stage, and offstage I could demo the latest version of IDtension, in text mode, with the new GUI.
Apart from this research on narrative mechanisms, the trend of the last edition of ICVS was confirmed: more and more projects and discussions concern physical interfaces that differ from the classical keyboard-mouse-and-screen interface, or from “classical” VR systems. Tangible interfaces were not only the topic of Ana Paiva’s invited talk (a rag doll for emotion expression, a large letter box for influencing the story) but were also mentioned in a number of talks: wall stones in a public exhibition context (Michitaka Hirose), a small doll on a table (Youngho Lee and colleagues), a simple square marker to control a video (Norman Lin and colleagues), a digital mirror (A.C. Andes and A. Opalach), etc.
Janet Murray discussed the relation between immersion and tangible interfaces: full immersion puts our body in another world, but at the same time it makes it obvious that we are not there. Tangible interfaces, however, are threshold objects and create both a distance from and a link to the imaginary world.
An original paper also presented work on “real 3D”, that is, small hemispheric globes which display volumetric objects (K. Langhans and colleagues).
As usual, I haven’t mentioned many other talks, but I was impressed by two innovative applications. The first was developed at IRCAM and exhibited in the G. Pompidou museum in Paris (Beaubourg). It is a sound and image environment that the user interacts with through a haptic device (R. Cahen and colleagues, incl. X. Rodet, who presented the system). The second consisted of automatically transforming a written story into a multimedia animation for children. Children proved to understand the transformed story better than the original one.