July 25, 2003
Here, there, middleware
Eric Dybsand, a long-time contributor to the AI scene at the Game Developers Conference, just published an informative series of Gamasutra articles about new AI middleware (i.e., code libraries and toolsets) for games. It seems that in the last year or two, several companies have released packages that bundle common AI algorithms and techniques together, intended to be easily integrated into a game. The articles take a detailed look at AI.implant, DirectIA, Renderware AI, and SimBionic, and a concluding article summarizes it all. (By the way, whatever happened to Motion Factory?)
Eric concludes with the following:
“After some 16 years of designing and developing AI solutions for many different genres of computer games, I was admittedly skeptical that commercial AI middleware could help the game development process in any significant way. After all, I thought, the AI for every game is unique, and thus requires a craftsman such as myself to design and develop good AI solutions. While that might be true to some degree, this week’s survey of AI middleware revealed to me that these tools offer viable game development solutions.”
These packages may offer viable solutions, but they seem tailored to the status quo of game character behavior: action-oriented games in which characters have insect-level intelligence (fight, flee, run around obstacles, etc.). Essentially, each of these systems offers a particular finite-state-machine-based idiom for authoring character behavior. (One system, DirectIA, adds a decision-making layer in which events can generate emotions, which in turn motivate behaviors.)
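To make the finite-state-machine idiom concrete, here is a minimal sketch of the kind of insect-level behavior these systems are organized around. It's my own illustration in plain C++, not code from any of these packages, and the states and thresholds are made up:

    #include <iostream>

    // Hypothetical insect-level states: the kind of behavior these FSM
    // idioms are organized around. The thresholds below are made up.
    enum class State { Idle, Chase, Attack, Flee };

    const char* stateName(State s) {
        switch (s) {
        case State::Idle:   return "idle";
        case State::Chase:  return "chase";
        case State::Attack: return "attack";
        case State::Flee:   return "flee";
        }
        return "?";
    }

    struct Creature {
        State state = State::Idle;
        float health = 10.0f;
        float enemyDistance = 100.0f;

        // One update per frame: choose the next state from simple conditions.
        void update() {
            switch (state) {
            case State::Idle:
                if (enemyDistance < 50.0f) state = State::Chase;
                break;
            case State::Chase:
                if (health < 3.0f)              state = State::Flee;
                else if (enemyDistance < 5.0f)  state = State::Attack;
                break;
            case State::Attack:
                if (health < 3.0f)              state = State::Flee;
                else if (enemyDistance > 5.0f)  state = State::Chase;
                break;
            case State::Flee:
                if (enemyDistance > 80.0f)      state = State::Idle;
                break;
            }
        }
    };

    int main() {
        Creature bug;
        // Simulate an enemy closing in while the creature takes damage.
        const float distances[] = {60.0f, 40.0f, 10.0f, 4.0f, 3.0f, 2.0f};
        for (float d : distances) {
            bug.enemyDistance = d;
            bug.health -= 1.5f;
            bug.update();
            std::cout << "distance " << d << " -> " << stateName(bug.state) << "\n";
        }
    }

In practice the middleware wraps this kind of switch statement in a graphical editor or a data file, but the underlying idiom (a handful of states with hand-tuned transition conditions) is the same.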
My feeling is that these systems are probably most useful as low-level libraries of AI algorithms and techniques that game developers can use as they wish, to avoid reinventing the wheel for certain common AI techniques (pathfinding, state machines). Yet systems that require a developer to conform to a particular overall idiom will probably feel too constraining, making it cumbersome to innovate outside the idiom. For example, the bulk of the original Sims (2000) could not be authored in any of these packages, nor could Petz or Babyz (1996-8). (BTW, here's an article about the upcoming Sims 2.)
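For what it's worth, here's the kind of wheel I mean: a bare-bones breadth-first search on a grid, the simplest cousin of the A* pathfinding these libraries bundle. Again, this is my own illustrative sketch, not code from any of the packages:

    #include <iostream>
    #include <queue>
    #include <string>
    #include <utility>
    #include <vector>

    // Bare-bones breadth-first search on a small grid, the simplest cousin of
    // the A* pathfinding that AI middleware typically bundles. Returns the
    // number of steps from start to goal, or -1 if the goal is unreachable.
    int pathLength(const std::vector<std::string>& grid,
                   int sr, int sc, int gr, int gc) {
        int rows = grid.size(), cols = grid[0].size();
        std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
        std::queue<std::pair<int, int>> frontier;
        dist[sr][sc] = 0;
        frontier.push({sr, sc});
        const int dr[] = {1, -1, 0, 0};
        const int dc[] = {0, 0, 1, -1};
        while (!frontier.empty()) {
            auto [r, c] = frontier.front();
            frontier.pop();
            if (r == gr && c == gc) return dist[r][c];
            for (int i = 0; i < 4; ++i) {
                int nr = r + dr[i], nc = c + dc[i];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
                    grid[nr][nc] != '#' && dist[nr][nc] == -1) {
                    dist[nr][nc] = dist[r][c] + 1;
                    frontier.push({nr, nc});
                }
            }
        }
        return -1;
    }

    int main() {
        // '#' marks an obstacle the character must route around.
        std::vector<std::string> grid = {
            ".....",
            ".###.",
            ".....",
        };
        std::cout << pathLength(grid, 0, 0, 2, 4) << " steps\n";  // expect 6
    }

This is exactly the sort of thing a developer is happy to pull off the shelf, while keeping full control of the code that decides where the character wants to go in the first place.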
As Michael has educated me, this gets to the idea of affordances of software architectures. That is, even among architectures that are “open-ended” (theoretically, all of the above AI middleware packages are extensible enough that you could implement some very complicated behavior), some make it much easier to do so than others. Michael delves into the nature of affordances in his thesis:
“The authorial affordances of an AI architecture are the ‘hooks’ that an architecture provides for an artist to inscribe their authorial intention in the machine. Different architectures provide different relationships between authorial control and the combinatorial possibilities offered by computation.” (p. 125)
This also gets back to the earlier discussion about artist programmers. Several of these middleware packages offer GUI-based interfaces, potentially allowing non-programmers to develop behaviors. However, from what I can tell about these packages, the farther you move away from actual code toward more abstract, simplified representations of behavior, the fewer affordances you have, and the less expressive you can be as an author.
I think the closer AI middleware gets to a robust and expressive language, the better… a language with all of the expressive power of traditional programming languages, plus new features that support reactive character behavior, intelligent decision making, etc.… plus sample code describing different methods (idioms) for using that language, to get you started…
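To gesture at what I mean by reactive character behavior in an ordinary language, here's a toy sketch: a prioritized list of behaviors with preconditions, re-evaluated every tick so that an event can interrupt whatever the character was doing. It's plain C++ of my own, not any actual behavior language, and the behavior names are hypothetical; a real behavior language would offer this kind of reactivity as a first-class construct rather than a hand-rolled loop:

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Toy world state driving a character's decisions.
    struct World {
        bool sawEnemy = false;
        float hunger = 0.0f;
    };

    // A behavior pairs a precondition with an action; higher-priority
    // behaviors listed first win. Re-evaluating the whole list every tick
    // is what makes the character "reactive": an event (sawEnemy) can
    // interrupt whatever it was doing.
    struct Behavior {
        std::string name;
        std::function<bool(const World&)> precondition;
        std::function<void()> action;
    };

    int main() {
        World world;
        std::vector<Behavior> behaviors = {
            {"flee",   [](const World& w) { return w.sawEnemy; },
                       [] { std::cout << "running away!\n"; }},
            {"eat",    [](const World& w) { return w.hunger > 5.0f; },
                       [] { std::cout << "eating\n"; }},
            {"wander", [](const World&)   { return true; },
                       [] { std::cout << "wandering\n"; }},
        };

        for (int tick = 0; tick < 4; ++tick) {
            world.hunger += 3.0f;                  // hunger builds over time
            if (tick == 2) world.sawEnemy = true;  // an event arrives mid-behavior
            for (const auto& b : behaviors) {
                if (b.precondition(world)) { b.action(); break; }
            }
        }
    }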