At the recent 2010 Computational Intelligence and Games (CIG) conference in Copenhagen, Denmark, there were competitions for building race car controllers, human-like FPS bots, and Ms. Pac-Man players, among others. The competition that drew my interest, however, was the Mario level design competition, which challenged entrants to create procedural level generators that could produce fun and interesting levels based on information about a particular player's style.

The restrictions on entries (each level could use only fixed numbers of gaps, coin blocks, and Koopas) meant that the winner would have to be cunning: it wouldn't be enough to estimate player skill and tune difficulty by adding or removing obstacles; an entry would instead have to find placements that made the same fixed set of obstacles harder or easier. And because entries would be played by players of varying skill, a generator would have to adjust its output across a broad range of difficulties to score well. Finally, because scoring was based on the audience's relative ratings of enjoyment between pairs of levels, there was no way to game the system by optimizing some fixed metric without making genuinely enjoyable levels.

Given these constraints, the winning entry should have been a demonstration of the power of procedural content generation to adapt to players of different skill levels, which is one of several reasons that PCG is useful in games. Unfortunately, the competition design may have been a bit too clever.
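To make that fixed-budget constraint concrete, here is a minimal, hypothetical sketch of difficulty-through-placement. It is not any entrant's actual code; the function name, parameters, and heuristics are all invented for illustration. The idea is that with a fixed number of gaps, a generator can still scale difficulty by varying gap width and spacing based on an estimated skill value:

```python
import random

def place_gaps(level_length, gap_budget, skill):
    """Place a fixed number of gaps so that the *same* obstacle count
    yields different difficulty depending on estimated player skill.

    level_length -- width of the level in tiles
    gap_budget   -- the fixed number of gaps the rules allow
    skill        -- estimated player skill in [0, 1]
    Returns a list of (start_tile, width) pairs.
    """
    # Heuristic (invented for this sketch): skilled players get wider,
    # more tightly clustered gaps; novices get narrow gaps spread far
    # apart so each jump can be taken in isolation.
    max_width = 2 + round(3 * skill)              # 2..5 tiles wide
    min_spacing = 3 + round(12 * (1.0 - skill))   # 3..15 tiles apart

    gaps = []
    cursor = 5  # leave a safe start zone
    for _ in range(gap_budget):
        width = random.randint(2, max_width)
        start = cursor + min_spacing + random.randint(0, 4)
        if start + width > level_length - 5:  # keep a safe end zone
            break  # out of room; a real entry would have to fit the full budget
        gaps.append((start, width))
        cursor = start + width
    return gaps

# Same budget, very different levels:
print(place_gaps(level_length=200, gap_budget=6, skill=0.1))
print(place_gaps(level_length=200, gap_budget=6, skill=0.9))
```

Note that the obstacle count never changes between the two calls; only the geometry does, which is exactly the lever the competition rules left open.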