December 4, 2004

Hard to Believe

by Andrew Stern · 3:23 am

Robin Hunicke attended last week’s Game Tech industry seminar, I assume circumventing the $2450 registration fee :-). The gathering consisted of a Creating Believable Characters Seminar and a Game Tech Leadership Summit. She wrote up a great three-part summary of the event. (Update: Make that five!)

Robin reports that the believable characters seminar was pretty much limited to (impressive) animation techniques; the presentations barely touched on AI and behavior, because there’s little tangible work to talk about there.

…about AI and believability, it’s clear that they tried to find a good speaker or two – and just couldn’t. It’s not that people aren’t trying some simple things… or even that they aren’t attending the conference. For example – Checker (at Maxis) and Jay (at Valve) had a long debate during a break on Day Three about whether the industry is ‘doomed’ because for all our realism, characters are still empty husks. So clearly, it’s being discussed. But results are limited, work is slow, and not a lot of people are stepping up to say what they think will take us in the right direction. That worries me.

Um, hello… (pdf, GDC04 powerpoint, GDC04 video)

But, okay, generally speaking, Robin is right — no group has yet built a working demonstration, let alone an entertainment experience, with a broadly capable, non-shallow believable interactive animated character.

By broad I mean characters who have a full repertoire of the basic actions and reactions you’d expect a good actor to have, from walking and sitting and running and using objects, to eating and sleeping and playing, to emoting and gesturing and talking and acting dramatically. And, doing all of those things with personality, varying emotion, charisma and character. It’s not enough to have an animation loop or two for each of these; we’re talking hundreds of reasonably complex, intermixing behaviors and procedural animations. (And to keep things simpler I’m not including natural language understanding or generation here, or fancy ragdoll physics.)

With our ABL language technology, which I just linked to, we’ve built somewhat broad, non-shallow conversational characters (not yet publicly released — we’re almost done!), but even so, they have only limited, shallow physical-action behaviors. The good news is, as our paper and talk suggest, ABL is very much capable of supporting rich physical action; we just haven’t done the hard work to build those behaviors yet. And it’s a lot of hard work, even with ABL in hand. (And it can eventually be in your hands, once ABL gets publicly released.)

Bryan Loyall, who created the original Hap believable agent language (the language ABL is based on) as part of the CMU Oz Project (scroll down to his thesis), has continued his work at Zoesis. Zoesis has released a few believable agent demos over the years, but frankly they don’t show off the capability of their technology well enough yet.

Some games have impressive custom character behaviors; Ico, Half Life 2, and some of the better sports games come to mind. But they are limited to just the few behaviors the game needs; these characters aren’t broadly capable.

Petz (I was one of the developers) are somewhat broadly capable, not-too-shallow dog and cat characters, but they are only dogs and cats, and they aren’t really fleshed-out detailed characters. The Sims are surely the most broadly capable interactive characters built to date, but their behavior is pretty shallow, sometimes ant-farm like.

There’s a bunch of research labs working on the technology needed for deeper believable characters, and some under-the-radar companies and groups working on various pieces. But we’re still waiting for a fully-realized demo to emerge from all this work.

A thought experiment: what is it really going to take to create broad, non-shallow believable interactive characters?

If someone said to me, Andrew, tell me what you need to put together a team to do this — a Manhattan project for believable characters — here is what I’d say I’d need, to create just one really, really good believable character:

This would probably cost about 3 million dollars.

Once the first character has been created, it would probably only take 25% of the original effort to make an additional character. And so on.

This would be difficult, relatively risky R&D.

Add in basic natural language understanding and generation, and we’re talking 24-36 months of 3-4 talented, experienced engineers and writers, starting with the best of today’s NL technology, adding about another two mil to the budget. (The $5 million virtual man… we can build him… we have the technology…)

Well… I guess Robin has reason to be worried.

(hey, are there any wealthy frustrated gamer philanthropists out there reading this? :-)

17 Responses to “Hard to Believe”


  1. Ian Wilson Says:

    I am sure it is a constant source of frustration to many of us that the areas we work in get great press and are paid great lip service, but to date have had little in the way of serious resources put behind them. Everyone wants to come and see the “emotion in games” track, or whatever it happens to be called at the particular conference, but then goes back to work on the latest particle effect for making blood splatter in a more gruesome fashion ;)

    There are many reasons for this: as Andrew points out, it takes a good deal of cash, it needs a proven market (a catch-22), and it needs technical expertise in the required fields. But I think the three most important elements that are currently missing are a Platform, Tools and Training.

    There also needs to be some form of architecture or road map in place that inherently recognizes which technologies are still immature (Natural Language Understanding and robust goal planning, for example) and suggests placeholders or alternative strategies for these areas.

    This is where technology transfer from Academia to Industry often falls over. The demonstration applications or designs produced as research often will not work in a general-purpose environment and rely on custom integration, scripting, etc. This in itself is not so much a drawback as long as those limitations are known and understood and solutions are in place to address them.

    Many of the parts are already out there; perhaps you could add those commercial enterprises to your resources? Not a shameless plug, but I am always on the lookout for companies enabling the missing pieces of this larger jigsaw, especially those that are creating the pieces of a believable character (I will list my current contacts below).

    Sorry, hope you are not bored yet….

    So with my $5M I would set out the following plan:

    1. Gather and analyse all of the players, large and (mainly) small, who have the disparate parts of this jigsaw (a rough sketch of how these parts might plug together follows this numbered plan). The parts that are needed are:
    IK based biped character with a range of ideally procedural movement
    Muscle model for the biped, especially facial muscles
    Generated facial features/skin for the model
    Generated facial and body gestures that are consistent with age/gender, mood and personality (that would be my department)
    High quality Text To Speech synthesis (including emotional pronunciations)
    Accurate lip synching
    Robust Natural Language understanding
    Content Creation Toolkit for writers, not engineers
    3D Environment and interaction / physics engine

    2. Define a strategy to create a standard platform for creating [insert your label of choice here / emotionally engaging interactive entertainment]. While the games business is busy building one-off, pre-packaged applications, we should be focused on the character above all else. This means we can reuse those engines already created and available for the platform. Pushing the graphical envelope should not be a priority. An example could be the Half Life 2 engine that I know the Machinima people are using. I am looking at the code now to see what is possible, but something like this would be an excellent platform for something like Facade Life 2 ;) where the developer is concentrating on the characters and story rather than on building a game engine.

    3. To go hand in hand with the platform, we need a “Drama/Story/Scene (again, insert your own label) Toolkit”. This is to allow writers to create the content for the experience as opposed to scripting, and to create scenes with dialogue without the aid of engineers.

    4. The final part is the work in progress here: when you have the Tools and the platform, you need training or a manual to get you started on how to create interactive drama. Maybe you have that already?
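
    To make point 1 a little more concrete, here is a purely hypothetical interface sketch, in plain Python rather than any real engine API, of how those parts might plug together. None of the class or method names below correspond to an actual product; they are placeholders:

    # Hypothetical interface sketch of the parts listed in point 1; none of
    # these classes correspond to a real product or API.

    class Body:                     # IK biped with (ideally procedural) movement
        def walk_to(self, target): ...
        def gesture(self, name, intensity): ...

    class Face:                     # muscle/feature model driving expressions
        def express(self, emotion, intensity): ...
        def lip_sync(self, phonemes): ...

    class Voice:                    # TTS with emotional prosody
        def say(self, text, emotion): ...   # would also feed phonemes to lip sync

    class Language:                 # NLU, or a simpler placeholder while NLU matures
        def interpret(self, utterance): ...

    class Character:
        """The glue layer that a writer-facing content toolkit would target."""
        def __init__(self, body, face, voice, language):
            self.body, self.face, self.voice = body, face, voice
            self.language = language

        def react_to(self, player_utterance):
            meaning = self.language.interpret(player_utterance)  # discourse act, topic, tone (unused in this stub)
            self.face.express("interest", 0.5)
            self.voice.say("Go on...", "warm")
            self.body.gesture("nod", 0.3)

    The point of sketching it this way is that the character layer, not the renderer, is the platform; the engine from point 2 sits underneath Body and Face.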

    With a small team of similar makeup to Andrew’s, but more focused on platform integration than content creation, I think you could start building some compelling applications and a platform for others. Ideally you would have a road map of what is possible at each stage that would allow you to showcase your work and generate revenue (for example, a cell phone application that would allow you to constrain the experience to within the boundaries of your tools).

    I mention Half Life 2 as, again, they have raised the bar. There are still problems with what they have done (e.g. eye focus), but that single game could open a lot of industry eyes to what is possible, and we should leverage that (not least by perhaps using that engine to showcase our own technologies/abilities).

    Anyway, I am waffling now; here are some links to look at:

    Platform:
    http://www.emotionfx.com/protomgd2004/index.html

    IK Characters:
    http://www.naturalmotion.com/
    http://www.sovoz.com

    Muscle models:
    http://www.cgcharacter.com/cghuman.html
    http://www.di-o-matic.com/products/Plugins/FacialStudio/index.html
    http://www.di-o-matic.com/products/Plugins/Hercules/index.html
    http://www.lifemi.com/

    Skinning:
    http://www.genemation.com/index.cfm
    http://www.facegen.com/

    Others:
    http://www.bdi.com/content/sec.php?section=diguy
    http://www.visagetechnologies.com/index.html

    Toolkits:
    http://www.inago.com
    http://www.eclipse.org (a toolkit framework)

  2. Dirk Scheuring Says:

    I like the way you break it down, Ian. As for a platform, I’ll definitely stick with AIML.

    This might strike people as an odd choice/faith, but the reason behind it is that AIML is so simple that one can use it as an interface to/glue for everything.

    For example, Kino Coursey, who offers several free tools to connect large AI systems to real-world applications, produced CyN, which connects the massive Cyc AI (or its open-sourced brother, OpenCyc) to Gary Dubuque‘s AIMLpad interpreter/editor (latest source code is here). AIMLpad can also use WordNet and ConceptNet.

    As for visuals, designers, writers, and other moderately technical people can integrate AIML with Oddcast’s V-host (as is done here), or use the Flash toolkit. For the more l337 haxxors, it can be integrated with the open-sourced vrspace (a platform independent virtual reality client and server), as Josip Almasi does it.

    The folks at Cycorp acknowledge the fruitful connection by hosting Kino’s CyN description on their site, showing that AIML already generates some network effects. Who knows – maybe one day, it’ll be possible to access ABL through AIML.

    Speaking of which, I feel that Michael and Andrew have given AIML some unnecessarily bad press in their recent comparison between AIML and ABL. In Section 7 of that paper – “Relationship To Chatterbots” – they list 5 differences between their approach and AIML. What I see in this list, though, are not differences but a misunderstanding of what AIML can do:

    1. Using a 2-phased approach to natural language processing instead of a stimulus-response approach is no problem at all in AIML. In fact, using an n-phased approach is no problem – that’s what the “srai” operator in connection with the “get” and “set” methods is for (e.g. I use 4 phases; there’s a rough sketch of the idea right after this list).

    2. Their concept of “positional facts” is something that is routinely being done by AIML authors via simple string manipulations/substitutions/insertions.

    3. I’ve been author-declaring my rule salience without a hitch for some time now, using 2.

    4. Retraction is easily done in AIML, using “srai” and “_”.

    5. As seen above, AIML can be connected to WordNet, too.
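
    To illustrate point 1 without pasting AIML markup into the comments, here is roughly what an n-phased, srai-style reduction looks like sketched in plain Python. The phases and rules are made up; in real AIML each of them would be a category, with “srai” doing the re-dispatching:

    # Sketch of n-phased, srai-style symbolic reduction; not AIML, just the idea.

    NORMALIZE = [("what's", "what is"), ("i'm", "i am")]            # phase 1
    REDUCE    = [("what is your name", "ASK NAME"),                 # phase 2
                 ("who are you",        "ASK NAME")]
    RESPOND   = {"ASK NAME": "You can call me Alice."}              # phase 3

    def reply(user_input):
        text = user_input.lower().strip("?!. ")
        for old, new in NORMALIZE:            # phase 1: normalize contractions
            text = text.replace(old, new)
        for pattern, symbol in REDUCE:        # phase 2: reduce to a canonical form,
            if text == pattern:               # the job srai does via re-dispatch
                text = symbol
                break
        return RESPOND.get(text, "Tell me more.")   # phase 3: choose the response

    print(reply("What's your name?"))         # -> You can call me Alice.

    Stacking more phases (spelling correction, pronoun swapping, setting topics via “get”/“set”) is just more passes of the same shape.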

  3. andrew Says:

    Ian and Dirk, thanks for your comments. It’s clear we all care a lot about this, which is a good thing.

    Ian, thanks for the additional resource links. Along with those, the links in my post to under-the-radar groups (which includes an AI middleware link) and the two links to lists of research groups also serve to cover some of that.

    Here are a few comments about your analysis: first, I agree with many of the items on your list, but I’m not sure it’s necessary to go to hyperrealistic animation (facial muscles, skin, etc.) to create believable characters; in fact it may be a bad idea. We may be more successful creating a more abstract believable character, such as an animated graphic-novel character or an excellent cartoon character like Bugs Bunny or Charlie Brown, versus a very realistic virtual person. The more we move towards realism, the more likely we are to fall into the Uncanny Valley and get really bogged down, or fail. But even abstract characters will require lots of technology support to be expressive, so there’s plenty of need for sophisticated tech here.

    Just as important, I’m really skeptical that a “Content Creation Toolkit” or a “Drama/Story/Scene Toolkit” “for writers, not engineers” is going to be feasible. To be reactive and offer agency, I believe that dialog will need to be built into behaviors, which requires programming. This requirement becomes more and more obvious when you try to do more than branching dialog/event trees or brute-force matrices of dialog, which may be close to the limit of what you can do without programming. We’re going to need to find programmers that have creative writing talent, and train them to be better writers; and we should find writers that can think procedurally, and train them to be programmers.
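
    To illustrate the difference I have in mind, here’s a toy sketch in plain Python (everything in it is invented; it is not ABL or any real toolkit format). The branching tree is pure data a writer can fill in, while dialog folded into a behavior has to read and change dramatic state, which is where programming creeps in:

    # (a) Branching dialog tree: pure data, writable without programming.
    dialog_tree = {
        "greet":     {"line": "Good to see you!",
                      "choices": {"compliment": "flattered", "insult": "hurt"}},
        "flattered": {"line": "You're too kind.", "choices": {}},
        "hurt":      {"line": "That was uncalled for.", "choices": {}},
    }

    # (b) Dialog built into a behavior: the line depends on, and changes,
    #     the dramatic state, so it is code rather than data.
    def greet_behavior(world):
        if world["tension"] > 0.7:
            world["tension"] += 0.1          # delivery reflects and shifts the situation
            return "Oh. It's you."
        world["affinity"] += 0.1
        return "Good to see you!"

    print(greet_behavior({"tension": 0.9, "affinity": 0.2}))   # -> Oh. It's you.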

    Dirk, in our comparison of ABL+NLP to AIML, Michael and I didn’t intend to say AIML was strictly incapable of the kinds of functionality you’ve listed; as you rightly point out there is a lot of overlap between ABL+NLP and AIML. (I say ABL+NLP because we added on two secondary NLP languages to ABL in order to make it capable of conversation+action, as the paper describes.) And it’s really useful for you to point out AIML’s capabilities, and ways to implement larger features. However, our point in making the comparison is that it can be overly cumbersome in AIML to achieve many of the capabilities we found important when building our conversational characters. The comparison is really about the affordances of the two approaches — that is, a concern about what is natural to express in each language, what may be easier to express in each, which requires fewer work-arounds, etc., that may give authors a leg up towards creating more and more complex conversations.

    As you say, one of AIML’s strengths is how easy it is to hook up to other technologies, which is important. That said, I would want to understand how cumbersome or not such integrations end up being when authoring entertainment experiences in them.

    Anyhow, with all of this, in the final analysis, there are many approaches that should be tried. My post suggests a tack I would take, and I’m learning a lot from reading yours, and hopefully we’ll hear more from others too.

  4. Ian Wilson Says:

    I fully agree with the point about more “abstract” characters; I guess I get a little blinkered with muscle models ;) I would prefer to work on more cartoon-like characters myself if that were possible in the short term. However, one very important, perhaps crucial, point to bear in mind is what the audience (consumers) want. Just as in non-interactive entertainment, the majority of characters will ideally be human-like. So the “uncanny valley” will have to be crossed at some point, and with understanding of the systems involved in behavior it can be.

    I would tend to disagree about the Toolkit, however. “Writers” of interactive narrative/entertainment, etc. (and I guess they are evolving as we speak and are not “writers” exactly in the traditional sense) are not likely to be engineers. Even if they are, the roles are very different and their tools should reflect that. This toolkit, like the systems it is helping build, will evolve over time, but it should be the way forward, rather than having “writers” programming.

    Stepping a little out of my depth here, but would interactive writing really be procedural when the field is more mature? I would say it would be declarative and event based, not procedural. Procedural is how games are currently written (e.g. Half Life), and while this is sufficient for its purpose, I don’t believe this is where interactive narrative would be headed. Procedural is fine for “action” entertainment, as you have the action to keep your arousal levels high, but I don’t think that would work for a non-action scenario; or am I misinterpreting your definition of “procedural”?

    BTW, GtxA guys, how about a FAQ about what you do and what this site is about? That would be useful to read for novices like me.

  5. andrew Says:

    Hi Ian, I’ll reply more to your comment when I get a chance, but to answer your last question — it’s probably not apparent enough, but if you mouse-over our names at the top of the page, you get little bios about us; clicking on the names goes to our personal pages; and clicking on “A group blog” gets you a slightly out-of-date index of all our posts.

  6. Dirk Scheuring Says:

    Ian wrote:

    Stepping a little out of my depth here, but would interactive writing really be procedural when the field is more mature? I would say it would be declarative and event based, not procedural.

    I’m sure that it has to be declarative and procedural at the same time. “Declarative” relates to character, “procedural” relates to plot, and a basic truth for writers is “Without plot, there is no character – without character, there is no plot”.

    This is the core problem that we have to solve – how can this be done? How can we write for an interaction? Because we’re writers, we just know that we’re gonna need both plot and character – both dramatic and non-fictional writing that describes people and their actions crucially relies on this basic duality. The as-yet unsolved problem is how to implement a system that can autonomously decide when to deliver a character point and when to deliver a plot point, and is able to generate satisfying stories that way.

    My guess is that writers that go into coding are as good candidates for solving this as coders that go into writing are. But the place where it’s gonna be solved will definitely be relatively low-level code, and the main tool has to be a classical editor; no GUI will be of help at this stage, because nobody knows what the layout and features will turn out to be. This is what makes the simple and powerful approach of AIML so attractive to me. People also used assembly to build C.

  7. Dirk Scheuring Says:

    andrew wrote:

    As you say, one of AIML’s strengths is how easy it is to hook up to other technologies, which is important. That said, I would want to understand how cumbersome or not such integrations end up being when authoring entertainment experiences in them.

    Depends on where we can take it. There have been several attempts so far to build a dedicated AIML editor to make it easy for non-programmers to write bots. Probably the most successful of those following the “conventional” approach (“conventional” in that it supplies a GUI) is the one at Pandorabots, an AIML bot hosting company that allows botmasters to edit their bot through a web interface.

    What’s good: it’s real easy – as of July 17 this year, Pandorabots reports 44526 registered botmasters who run 52022 bots. Most of those botmasters are not programmers.

    What’s not so good: the GUI they offer allows for writing basic bot functionality, but doesn’t support any serious coding in AIML, because the number of graphics-supported tag combinations they let you use is rather limited, and if you have to hand-craft all of your interesting code anyway, the GUI can only get in your way.

    Several independent attempts at building a full-fledged GUI-based AIML editor have been made and given up during the past four years, teaching us that the immense flexibility that the AIML tagset affords is hard to capture in a conventional card/field-based GUI. Gary Dubuque’s AIMLpad takes a radically different approach: it lets you script the interpreter/editor so that you can generate AIML code by just talking to the bot. Sure, you need somebody who can script special AIML categories that can generate other AIML categories, but once you have those, writers with no programming knowledge at all can create AIML code, name files, and store those in arbitrary locations on the network, just by inputting content, in a direct interaction with the bot they teach. It works, and it’s available; on the downside, you have to invest in developing a class system for your AIML categories as part of your creative venture, adding cost.
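
    To show the shape of that idea without quoting AIMLpad itself, here is a toy Python sketch of a bot that grows its own rule set from what it is told. The “teach:” syntax is invented; in AIMLpad the equivalent job is done by categories that generate other categories:

    # Toy sketch of "teaching by talking": one meta-rule that generates
    # ordinary rules at runtime. Not AIMLpad; the syntax is invented.

    rules = {"hello": "Hi there."}

    def respond(user_input):
        text = user_input.strip()
        if text.lower().startswith("teach:"):
            # meta-rule: "teach: <pattern> -> <reply>" creates a new ordinary rule
            pattern, reply = text[len("teach:"):].split("->", 1)
            rules[pattern.strip().lower()] = reply.strip()
            return "Learned it."
        return rules.get(text.lower(), "I don't know that one yet.")

    print(respond("teach: who made the woggles -> The Oz project folks."))
    print(respond("who made the woggles"))    # -> The Oz project folks.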

    I also know of a couple more current projects approaching AIML generation, visualisation, and editing. I won’t put pressure on the developers now by blowing their cover, but I’m sure we haven’t seen the last of it.

  8. andrew Says:

    I asked Robin in an email, “regarding believable characters, what are the developers at Game Tech saying they are looking for? Or, perhaps people there are saying, ‘what do we even need here? we don’t know?’ ” Robin replied,

    I think people are asking a lot of questions… like: ‘We know what we’d like (believable actors who emote, who generate/support empathy, etc.) but… how does that dovetail with “gameplay”, exactly? How do we make strong characters and still come away with “play” and not “entertainment”? And even if we did know for sure – how would we get the time/mandate to build it? Would it help sell games if we could do this only *halfway*? A quarter of the way? What’s the low-hanging fruit? What’s the middle ground?’

    I also suggested to Robin, similar to what I wrote earlier, that ABL and related technology “is useful beyond story-oriented characters — but, none of us have built anything to demonstrate that yet”. Robin replied with a question,

    Just as a thought experiment – how do you envision it being used outside of the story character context?

    I’ll answer that last question here — which also replies to Jurie’s post at Intelligent Artifice, where he’s seeing this discussion as being about interactive story. (I’ll try to respond to Robin’s first questions about “dovetailing with gameplay” in a later comment.)

    The idea of believable characters is more general than characters for interactive story. This is certainly how the Oz project folk, who I think were the first to use the term “believable” in the context of virtual worlds, intended to define the term. For example, a typical action-oriented game with characters would almost surely benefit from characters with stronger personality, emotion, self-motivation, change, the ability to form social relationships, and who exhibit a strong illusion of life — the bullet points found in the link above. Those are the kinds of things that would be developed in a “Manhattan project” for believable characters that I imagined above, and where the ABL language or something like it could be of great utility, when combined with procedural animation, etc. (ABL stands for A Behavior Language, btw.) But, besides our posts on GTxA, neither Michael nor I nor the Oz folk at Zoesis have yet done much proselytizing of these languages for use beyond story-oriented applications; so far, we’ve hinted at it some in our GDC talk (linked to above in the post), and Michael and his students are currently hooking it up to Unreal.

    A quick explanation of this approach. In a nutshell, a behavior-based language like ABL allows for complex mixing of hierarchies of behaviors. Behaviors are like functions, with several additional features: the steps of a behavior can be more than just sequential; they can also run in parallel with each other. The language gives you a lot of support in managing this parallelism. It’s precisely this control-flow support that allows a programmer to relatively quickly and easily, in relatively few lines of code, write “mind-like” behaviors and activity, i.e., creating and keeping track of motivations, emotions, personality — and “illusion-of-life” type behaviors, i.e., doing many things at the same time in complex ways. For example, Scott Reilly’s emotion algorithms (Em) can be implemented in this kind of language, integrated with motivation and action. To quote from pgs. 8-9 of this paper,

    The paradigm of combining sequential and parallel behaviors, and propagating their success and failure through the [active behavior tree], are the foundation of the power of ABL as a language for authoring believable agents, versus purely sequential languages such as C++ or Java where parallelism has to be managed manually. ABL is effectively a multi-threaded programming language, making it very easy to author behavior mixing – a powerful feature, but one that can get out of control quickly. In this way, ABL is challenging to program in, even for experienced coders. An important feature of ABL (the primary feature that distinguishes it from Hap) is support for synchronized joint behaviors, which helps the author harness the power of multi-threaded programming.

    The point is, really any application of characters (games, interactive stories, animated web agents, etc.) that needs more than what a finite-state machine can do could find this type of approach useful. Game developers are of course already implementing parallel behavior in their characters, but presumably in imperative languages such as C++, or in lighter-weight scripting languages; these approaches limit their complexity, or make it too cumbersome to get much more complex. To get to “believable actors who emote, who generate/support empathy, etc.”, it becomes necessary to have your programming language do more of the work for you, to help manage the complexity.
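
    To make that less abstract, here’s a toy sketch in plain Python (it is not ABL, and all the behavior names are invented) of sequential and parallel behaviors whose success and failure propagate up a small behavior tree. The hand-rolled bookkeeping below is roughly what a language like ABL builds into the language itself, along with things this sketch doesn’t attempt, like joint behaviors synchronized across characters:

    # Toy behavior-tree sketch -- not ABL, just an illustration of sequential
    # vs. parallel behaviors whose success/failure propagates upward.

    class Behavior:
        def tick(self):           # returns "running", "success", or "failure"
            raise NotImplementedError

    class Act(Behavior):
        """A leaf behavior: one primitive action on the body."""
        def __init__(self, name, duration):
            self.name, self.remaining = name, duration
        def tick(self):
            self.remaining -= 1
            print("acting:", self.name)
            return "success" if self.remaining <= 0 else "running"

    class Sequential(Behavior):
        """Run children one after another; fail if any child fails."""
        def __init__(self, *children):
            self.children, self.index = list(children), 0
        def tick(self):
            if self.index >= len(self.children):
                return "success"
            status = self.children[self.index].tick()
            if status == "success":
                self.index += 1
                return "running" if self.index < len(self.children) else "success"
            return status

    class Parallel(Behavior):
        """Tick all children every frame; succeed when all have succeeded."""
        def __init__(self, *children):
            self.pending = list(children)
        def tick(self):
            still_running = []
            for child in self.pending:
                status = child.tick()
                if status == "failure":
                    return "failure"          # failure propagates up the tree
                if status == "running":
                    still_running.append(child)
            self.pending = still_running
            return "success" if not self.pending else "running"

    # A character who walks to the couch while talking and gesturing, then sits.
    greet_guest = Sequential(
        Parallel(Act("walk_to_couch", 3), Act("say_hello", 2), Act("wave", 1)),
        Act("sit_down", 1),
    )
    while greet_guest.tick() == "running":
        pass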

    At the risk of sounding like an infomercial — before working with ABL, I was skeptical of its usefulness; I had programmed goals, plans, emotions, etc. for Petz in C++, and although cumbersome, it worked pretty well. But now, after a few years with ABL, I’m a believe-er.

  9. mark Says:

    (I think I may be agreeing with you here, especially with the cartoon analogy.)

    One of the things that’s always bugged me about the “believable actors” field is the insistence on copying humans. IMO, it’s complex behavior that is somehow rooted in what the player does that’s necessary, not specifically complex behavior that closely mimics what a human in the situation would’ve done. (The “situated embodied agents” community is perhaps the best example of the approach I don’t much like in that respect.) I think people are good at projecting personality onto things, so what we need to do is give them something they can project onto and interpret, not necessarily something that acts exactly like people.

    The biggest illusion-breaker isn’t, I don’t think, when a character gestures in a way that’d be weird (or physically impossible) for a human, but when a character seems to be ignoring the player entirely, and just carrying out either pre-scripted or randomly generated behaviors. If an entity is consistent, complex, and convincingly responsive, then I think whether they’re accurate copies of humans or some invented alien species isn’t that important; people will interact with them and find them believable either way.

    As an aside, it might be because I grew up with computers, but I actually find avatars of any kind outside games a little irritating. I don’t want educational and/or tutorial software that makes facial expressions and gestures at me. I’d much rather interact with it as a computer than as if it were a person. I’ve never used a mid-1980s science-education program and thought “what this really needs is Clippy explaining the periodic table to me rather than this boring computer interface”, and I don’t think anyone I knew did either. What we did think at the time was, “what this really needs is some smarter software”. A disembodied-brain science program would be just fine if that brain were reasonably sophisticated.

    (But that’s a different issue from games.)

  10. nick Says:

    I never thought that “believable actors” had much to do, directly, with copying humans or other animals. You can have cartoon people, as Andrew and Michael do in Façade, or you can have things that are even less humanlike (such as Woggles), and you can still work to make them believable – to make them compelling to interact with and worth suspending your disbelief to play with.

    That goal can be distinguished from one of building “life-like actors,” which I think is closer to the vein of work that you don’t like, Mark. With a goal of creating something life-like, you might code up behaviors that are strongly based on the observed behaviors of living creatures – I would say Bruce Blumberg’s work is more along these lines and the Zoesis work is more aiming at the “believable.”

    Andrew and Michael, I’m not making up this distinction, am I? I feel it’s a fairly classical one, but couldn’t turn up anything about it on the Web in a quick search.

    If you’re focused strongly on one of the two different goals (you don’t have to be at one pole or the other, of course), you might not have the same artistic, scientific, or other overall purposes in mind. Those doing life-like work might want to make a statement about what we think real human or animal behaviors are like and how we really interact with humans or animals (in a way that is tightly tied to their observed behaviors), while those pursuing believable work might be driving at a richer authored interactive experience that explores human nature in different ways.

    Even the life-like work doesn’t have to be extremely realistic, simulating musculature and skin albedo and such, in order to achieve some of its goals, so I also think that the concern with level of detail or fidelity is a bit of a different issue, although it does seem more allied with the life-like than the believable.

  11. Dirk Scheuring Says:

    Yes, andrew. Now, where can we get it? How is it licensed? What else can it connect to, besides WordNet? You got a “My First Behavior” tutorial?

  12. andrew Says:

    Dirk, Michael and his students are working on a “my first behavior” tutorial for ABL as we speak, and an example hook-up of it to Unreal. It’s coming soon; I apologize for pumping a technology that is not released yet. Bear with us, thanks! (You could potentially ask Michael for a pre-release version if you want to see it right now; he may be able to do that.) I would have waited to start really promoting ABL until it’s available, but Robin’s Game Tech post sort of egged me on, you see… :-)

    Nick, Mark, there definitely is a distinction between believable and realistic. It can get a bit muddy with the term “life-like”, because the term “illusion of life”, part of the Oz project’s definition of believability, a term originating with Disney animators (see book), seems not too far from the term “life-like”. In fact Michael’s 1997 Oz-overview paper says “A believable character is one who seems lifelike…”

    But your point about the abstract Woggles endowed with believable behavior gets at the essence of what believability was meant to be defined as.

    Getting back to one of Ian’s earlier points,
    I would prefer to work on more cartoon-like characters myself if that were possible in the short term. However, one very important, perhaps crucial, point to bear in mind is what the audience (consumers) want. Just as in non-interactive entertainment, the majority of characters will ideally be human-like. So the “uncanny valley” will have to be crossed at some point, and with understanding of the systems involved in behavior it can be.

    I’ve been trying to use the word “abstract” and not the word “cartoon”, since cartoon or comic is sometimes interpreted to mean a colorful, goofy, juvenile appearance and design. (Comic book artists deal with this characterization all the time.) Abstract-looking humans should still be able to satisfy grown-up audiences, I hope; they can be human-like, yet avoid the uncanny valley problems and all the extra work required for realism, while retaining (or gaining) believability. Call them “illustrated” if you like.

    btw, perusing the new Dec 2004 issue of Game Developer magazine a few minutes ago, I came across Programming Believable Characters For Computer Games, by Penny Baillie-de Byl, Charles River, May 2004; and an article on the Uncanny Valley by Steve Theodore — “Are Your Characters Lovable or Creepy?” To quote from it a bit,

    Mori [the researcher who coined the term “uncanny valley”]… suggested that the best tool for creating empathetic, likeable robots was simplification. Rather than pushing directly toward photorealistic creations, he advocated deliberately crude designs that stuck to the lower slopes of the realism curve.

    Theodore goes on to say,

    Contemporary graphics technology has brought us to the pass that looks down into the Uncanny Valley. We’re only just getting good enough that being terrible is a real possibility. … we’ll be dealing with ever subtler and ever more subliminal problems and facing ever more thankless criticism for our efforts. Games like Half-Life 2 have begun to turn gamers attention to the nuances of character and physical presence, but this work is only in its infancy. It’s going to be an exciting and infuriating time.

  13. andrew Says:

    I’ve branched off and expanded some of this discussion in a new thread.

  14. ErikC Says:

    Hello, I’m a little confused by the notions of procedural and declarative here; are there online definitions of these as used in interactive fiction/storymaking, etc.?

    Ian wrote:
    Stepping a little out of my depth here, but would interactive writing really be procedural when the field is more mature? I would say it would be declarative and event based, not procedural.

  15. andrew Says:

    Erik, we expanded a bit on declarative vs. procedural in the new thread linked to in comment #13, and in my latest comment there I’ve responded to your question.

  16. andrew Says:

    Here’s a detailed writeup of the believable characters seminar at Game Tech last December.

  17. andrew Says:

    A new Game Brains article Closer than Ever (Yet Still Far Away) is worth a read.
