December 17, 2007
Via AiGameDev.com I’m excited to see the Game/AI blog come back to life with an active thread on game industry and academic AI research collaborations. It’s pretty clear that finding common ground for this kind of collaboration is a challenge — though one that people are trying to address through conferences like AIIDE.
Off the cuff, I think part of the problem here may be that game AI and game graphics don’t have similar relationships to their academic counterparts. Many people in academic circles worked for years on real-time graphics (funded by, say, government flight simulation projects) without any thought that these same techniques might be relevant for games. But, as far as I know, there has been a much smaller body of academic work on AI for things like character behavior (say, the Oz Project, some work at MIT’s Media Lab, some work at Northwestern, etc.) — and it wasn’t necessarily aimed at the same problems that game AI developers are trying to solve. Maybe the closest fit is academic AI work toward “the ability to deal with intentional threats from other agents.”
Personally, rather than improvements in combat, I’d be more excited if academic/industry collaborations could get us beyond dialogue trees — and toward a somewhat more robust model of what characters say (and when). I find it very frustrating in Mass Effect when I accidentally trigger long stretches of NPC monologue that I’ve already heard. Why are you saying that again? Or, say, when my squad finishes killing a bunch of Husks (technozombies) made out of a group of researchers, then we roll into their abandoned camp and I hear my squadmate say, “Where is everybody?” We just killed them!