January 11, 2007

Introducing Process Intensity

by Noah Wardrip-Fruin · 10:41 am

I’m currently working on the first chapter of a book manuscript, trying to find the right way to introduce a number of concepts that will be key for understanding the chapters that follow. Recently I’ve been trying to find a concise way to introduce Chris Crawford’s 1980s concept of “process intensity” — while also arguing for a view of the concept updated for our current circumstances. My current draft is below.

We might think of Pong and many other early computer games (e.g., Tetris) as being authored almost entirely in terms of processes, rather than data. An “e-book,” on the other hand, might be just the opposite — a digital media artifact authored almost completely by the arrangement of pre-created text and image data. In an influential 1987 article, game designer and digital media theorist Chris Crawford coined the phrase “process intensity” to describe a work’s balance between process and data (what he called its “crunch per bit ratio”).

Crawford points out that, in early discussions of personal computers, certain genres of software failed despite widespread belief that they would be attractive — specifically, he cites checkbook balancing software and kitchen recipe software. He argues that these genres failed for the same reason that the 1980s computer game hit Dragon’s Lair (which played sequences of canned animation, rather than dynamically drawing graphics to the screen) was a dead end, rather than the first example of a new game genre. In all these cases, the software is designed with low process intensity. In fact, Crawford goes so far as to argue that process intensity “provides us with a useful criterion for evaluating the value of any piece of software.”

In Crawford’s article, games other than Dragon’s Lair come out quite positively. He writes, “games in general boast the highest crunch per bit ratios in the computing world.” But Crawford wrote in 1987. Almost two decades later, game designer and theorist Greg Costikyan gave a keynote address at the 2006 ACM SIGGRAPH Sandbox Symposium titled “Designing Games for Process Intensity” — reaching a rather different conclusion. As Costikyan writes in a blog post from the same year:

Today, 80+% of the man-hours (and cost) for a game is in the creation of art assets.

In other words, we’ve spent the last three decades focusing on data intensity instead of process intensity.

In fact, the shift has been so profound as to call for a rethinking of the very concept of process intensity. The games cited by Crawford — such as Flight Simulator and Crawford’s own game of political struggle, Balance of Power — devote much of their processing to the game’s novel behavior. However, in the time between Crawford’s and Costikyan’s statements, the graphics-led data-intensive shift in computer games has not only increased the amount of effort put into creating static art assets. It has also driven an increasing share of processing toward greatly improved visuals for remarkably stagnant behavior. While this represents an increase in processing, it’s the same increase that could be achieved by taking a kitchen recipe program and adding live 3D extrusion of the typeface and freeform spinning of simulated recipe cards with the latest lighting effects. This would send the recipe program’s process intensity through the roof … while running completely counter to Crawford’s ideas.

This kind of distinction — between processing used for graphics and processing used for behavior — is not only of interest to game developers. It is also a distinction well understood by players. For example, it is not uncommon for players of PC games to choose a lower level of graphical rendering (e.g., in order to increase the responsiveness of the interface or reduce the visual weight of elements not important to the gameplay). Players who choose lower levels of graphical processing are not considered to be playing significantly differently from players who choose higher levels. On the other hand, some games also allow players to vary the level of artificial intelligence processing employed by the system. This changes the game’s behavior by, for example, making computer-controlled opponents easier to defeat (e.g., in computer chess or a first-person shooter). Players view this type of change, a change in behavior-oriented processing, as a much more significant change to gameplay.

Players have also “voted with their feet” in favor of behavioral processing. While many games pursue increasingly photorealistic graphical rendering, Will Wright and his team at Maxis designed The Sims around low-end graphics and complex behavioral systems. The game’s publisher, Electronic Arts, at first resisted the title — in part because its process-intensive design had created innovative, unproven gameplay focused on managing the lives of simulated suburban characters. But when The Sims was released it became the best-selling PC game of all time. It accomplished this in part by reaching a significantly wider audience than the “hard core” (stereotypically, young males) to whom most computer games seem to cater. However, despite the success of The Sims and the fact that game company executives regularly express the desire to reach wider demographics, innovation at the level of behavior-oriented processes is still largely resisted within the game industries, viewed as a risky alternative to the tried-and-true approach of combining flashier graphics with the same gameplay behaviors as previous data-intensive hits.

This book’s focus is on what systems do — what they enact, how they behave — rather than what the surface output looks like. This could be characterized as an interest in “behavioral process intensity” of the sort practiced by digital media designers like Wright (which is probably what Crawford meant from the outset). As is probably already apparent, this will bring a significant amount of “artificial intelligence” into the discussion.

From here the chapter transitions into a discussion of concepts such as Michael’s Expressive AI (on which, perhaps, more later). If there’s interest, I may share more material as I keep writing.

12 Responses to “Introducing Process Intensity”


  1. warpedvisions.org » Blog Archive » Plot versus content in games Says:

    […] lot versus content in games Here’s an excerpt from an upcoming book that describes process intensity, which is the amount of processing power (and effort […]

  2. Aaron Reed Says:

    There may be a connection between the increased focus on graphical intensity and the decreased attention span of a gaming culture continually flooded with more and more product to choose from. Besides three state-of-the-art console systems and a PC, there is a snowballing number of handhelds, cell phones, and even older consoles for which new games continue to be produced. At the same time, services like Gametap and XBox Live make even more titles available, while encouraging people to spend less and less time with each individual game. Graphics and other “surface” processing power do a great job of making the increasingly important first impression, which may be why emphasis has shifted to these areas. By contrast, the “behavioral process intensity” of a system often takes time to notice and becomes apparent only after multiple play sessions. “Dragon’s Lair” was spectacular on its first play-through: it was only as you started seeing those same clips over and over again that it lost its luster. Conversely, “Civilization” only reveals its charms after long hours spent delving into the underlying processes.

    Making simple choices is also something the public seems to understand more readily than interacting with a complex system. Whenever I try to explain IF to people, they almost invariably compare it to “Choose-Your-Own-Adventures,” but even Zork is more complicated than that, simulating states and tracking variables. In my own work, I try to use lots of under-the-hood process intensity to increase the player’s ability to affect the story and environment, but many people just assume they’re reading a static block of text, unless they take the time to delve deeper. I gather with Facade there were similar problems with people’s perceptions of the facial animation system.

    Interesting excerpt, Noah. As you continue work I’d love to see more bits and pieces posted.

  3. Patrick Says:

    This sounds like a good book. Not surprisingly, I’m making my money these days working on data-intensive games, and spending money researching process-intensive gameplay modes.

  4. Luke Munn Says:

    A couple of quick questions that might help you condense and clarify things. At the beginning of your article, you make a clear distinction between the two ‘intensities’. Process seems to equate to gameplay, interactivity, significant choices made by the player. Data equals art and graphics, decoration, interface. Your sentence about the 3D spinning recipe cards confused me. Surely this is more ‘refined’ data, rather than an increase in process intensity?

    Secondly, you might want to say a little more about the lo-res graphics options that hardcore gamers use. Some readers might not be familiar with the extremely low level of detail that Counter-Strike, Unreal, etc. players are willing to live with for the extra framerate that gives them an edge in-game.

    The original GTA was designed by developers who deliberately chose dated, top-down 2D graphics over the awkward 3D translations then saturating the market. Again, like The Sims, this graphical tradeoff allowed for a more fully formed world that responded intelligently to the player. Nintendo is another example of a company that has deliberately forsaken spending developer-years crafting artwork and cramming consoles with graphical power in favour of a focus on gameplay: Wii Sports, WarioWare, the Mario series, etc. Although for your book I’m assuming you’ll want to delve more into the deeper AI strategies used in games like The Sims, Civ, etc.

  5. Matt Kirschenbaum Says:

    Noah,

    This is good stuff and I (for one) want to see more! My quick take, from this brief excerpt, is that it would be useful to balance the introduction of process intensity by also giving graphics their due, via some of the more successful theoretical accounts of images in various strains of visual studies. I’m thinking here of Johanna Drucker’s work, for example. I totally get what you’re after here, but graphics can’t just be a foil for behavior.

  6. noah Says:

    I’m glad people found this excerpt interesting, and I’ll definitely post more.

    Aaron, I agree that graphics are much more marketing-friendly than processes. We can evaluate graphical improvements by looking at a magazine or television ad — we don’t need a copy of the game — whereas any touted process improvement is likely to be treated with some skepticism (or even confusion as to its potential impact) until players experience it themselves.

    Luke, I think you’re right that the spinning recipe cards may be the wrong example. I’m meaning to point out that processes can be used for a variety of things, from AI behavior to real-time lighting effects. But being able to freely spin the recipe cards is actually a new type of behavior. So I’ll rework that passage. Also, I agree it might be good to include an illustration that shows how far people are willing to turn down their graphics in return for increased system responsiveness.

    Matt, you’re of course correct that graphics can be complex and interesting. So can text. But I think the thing I’ve failed to make clear here is that this book is about processing. From this viewpoint, graphics and text can be interesting output from processes, and they can be the (structured or unstructured) data on which processes operate, but they’re not central to the discussion. Still, you’re probably right that I will want to make it clear, somewhere in the discussion, that I have a somewhat more nuanced view of data than “that pile of stuff over there.”

    Overall, my sense from the discussion so far is that the best excerpt to present next might be the one that precedes this — where I introduce my focus on processes and establish my version of the process/data vocabulary. I’ll plan to post that in the near future.

  7. Matt Kirschenbaum Says:

    In Mechanisms I gloss a “mechanism” as both a product and a process, but as you know my emphasis in the book is more on the product: new media as historically locatable artifacts. Makes me wish I could insert one big, fat hyperlink to your manuscript under “process.”

  8. Gilbert Bernstein Says:

    This seems very readable from the perspective of a gamer, especially since the “graphics aren’t everything” issue comes up even if you’ve done only the most casual reading of gaming journalism. I can’t really say anything about non-gamers, though I bet they’d have some really useful comments.

    This seems like more of a crystallized idea and explanation of process intensity than when I posted on here before about it. I’m really excited to see your explanation of the data/process distinction, since this seemed the weakest point of the process intensity idea before.

    I largely agree with the graphics bashing, because truly insignificant graphics are often over-emphasized. However, I’m not so hot on creating a divide between graphics processing and behavioral processing. I think it’s really useful to try to cull out superfluous processing, but I don’t think graphics = superfluous. As examples, both low-data high-process graphics and vice-versa have been essential to the experience of playing some games. In particular I’d think of Mario 64 as an example of a high-process low-data game (although “low-data” is debatable, it’s arguably low-data in comparison to many 3D games now) and Resident Evil as an example of a low-process high-data one. As I see it, Mario 64 was huge in that it introduced conventions and vocabulary for 3D platformers, subsequently providing a base for many other 3D game vocabularies and new forms of game interaction. Resident Evil, however, was successful at creating the experience of horror in players, to which the graphics choice of fixed perspective and detailed static renderings (data or pre-process?) contributed enormously. That you couldn’t always see where the zombies were coming from and that the environments were detailed enough to convey the atmosphere considerably aided the horror aim. If Resident Evil should be considered a low process intensity game, then it could be an example of where low process intensity dictated conditions that (perhaps inadvertently) furthered the game’s aims. In both cases the graphics might be such that they fall under categories that are considered superfluous and ornamental, but I don’t think that would be accurate for the graphics in these games.

    Anyway, it looks really good. I hope bringing up contentions isn’t too distracting at this point in your writing process.

    Looking forward to more.

  9. noah Says:

    Matt, thanks! I’m looking forward to the release of Mechanisms.

    Gilbert, I agree entirely. It doesn’t make sense to think that we can judge the value of a work based on its process intensity. Resident Evil is a nice graphics-centric example. Also, an e-book fiction, email narrative, or blog novel with strong text would have very low process intensity — but it could make for a great, innovative audience experience (just like we can still have with print books). In fact, this is probably easier than making a successful audience experience via high process intensity. We’re just starting to learn how to author media processes, whereas we have millennia of experience authoring media data. But authoring media processes is also, from my perspective, the great promise of digital media. So that’s why it’s the focus of the book.

    As for separating graphics processing from behavior processing, you’re right that they’re not totally separable and that graphics processing is not by definition superfluous. I don’t mean to say graphics processing is bad — only that my book is more about things like rules for character behavior than things like ray tracing (or complex database operations, or…).

    All that said, don’t worry about bringing up difficult questions at this point in the writing. If I’m leading people astray conceptually it’s definitely better to know now than later.

  10. Jimmy Maher Says:

    Just to prove everything is not black or white:

    If we go back a bit, we can actually find examples of process being used to generate content. I am thinking of the original Elite in particular. It had a universe of several thousand planets for the player to explore, but said universe was not stored as static assets. Rather, it was generated on the fly through mathematical algorithms that were always seeded with the same constants, thus leading to the same environment always being calculated, transparently to the player. This was the only method that could allow such a huge field of exploration in a game that ran in 32K of RAM.
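
    In modern terms the trick looks something like this (a toy Python sketch, not Elite’s actual algorithm; the seed constant and planet attributes are made up for illustration):

        import random

        GALAXY_SEED = 0x5A4A   # a fixed constant baked into the game (made-up value)

        def planet(index):
            # Re-derive planet `index` from scratch each time; nothing about it is ever stored.
            rng = random.Random(GALAXY_SEED * 100_003 + index)   # same seed -> same planet, every run
            return {
                "name": "".join(rng.choice("bcdfgklmnrstvz" if i % 2 == 0 else "aeiou")
                                for i in range(rng.randint(4, 8))).title(),
                "tech_level": rng.randint(1, 14),
                "government": rng.choice(["anarchy", "feudal", "dictatorship", "corporate", "democracy"]),
                "population": rng.randint(1, 70) * 100_000_000,
            }

        print(planet(42))
        print(planet(42))   # identical output: the "universe" is a process, not stored data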

    Crazy stuff those 8-bit pioneers got up to…

  11. noah Says:

    Jimmy, good point. The general idea of process and data having a certain interchangeability does need to be acknowledged. Crawford’s essay provides his take on it:

    Experienced programmers know that data can often be substituted for process. Many algorithms can be replaced by tables of data. This is a common trick for expending RAM to speed up processing. Because of this, many programmers see process and data as interchangeable. This misconception arises from applying low-level considerations to the higher levels of software design. Sure, you can cook up a table of sine values with little trouble — but can you imagine a table specifying every possible behavioral result in a complex game such as Balance of Power?
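
    To make the low-level end of that concrete, the sine-table trick Crawford mentions looks roughly like this (a toy sketch, not anyone’s production code): compute the values once, store them as data, and replace later calls to the sine routine with lookups.

        import math

        # The "data": sine precomputed once for whole-degree angles.
        SINE_TABLE = [math.sin(math.radians(deg)) for deg in range(360)]

        def fast_sin(degrees):
            # The "process" is now just an index into the table.
            return SINE_TABLE[int(degrees) % 360]

        print(fast_sin(30), math.sin(math.radians(30)))   # both roughly 0.5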

    Of course, what you’re talking about is really more like procedural content, rather than saving a table of numbers as a shortcut for calculations. Personally, I think of procedural content as a good example of process intensity. Costikyan makes a similar point, in discussing Spore, in his post on the subject.

    But this doesn’t entirely put the matter to rest. For example, we might ask the same question about a system that uses n-grams. The raw data could be analyzed for its n-grams each time that the information is needed, or each time that the system starts up, or by a different system beforehand (and then stored as data). This could result in very different levels of “process intensity” for the system, but in each case the system would be operating based on the results of running the same n-gram process on the same body of data. We’re really just talking about programming efficiency here, and it doesn’t seem to me that the different system designs really have different behavioral process intensity.
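
    In code, the three designs might differ in little more than when one analysis step runs (a rough sketch; the corpus file and storage format here are just placeholders):

        import json
        from collections import Counter

        def bigram_counts(text):
            # Count adjacent word pairs (n-grams with n = 2).
            words = text.lower().split()
            return Counter(" ".join(pair) for pair in zip(words, words[1:]))

        text = open("corpus.txt").read()              # placeholder corpus

        # Design A: crunch on demand, every time the counts are needed.
        counts_on_demand = bigram_counts(text)

        # Design B: crunch once at startup, then reuse the result.
        counts_at_startup = bigram_counts(text)

        # Design C: crunch beforehand in a separate tool and store the result as data ...
        with open("bigrams.json", "w") as f:
            json.dump(bigram_counts(text), f)
        # ... which the running system then simply loads.
        counts_from_file = json.load(open("bigrams.json"))

        # All three yield the same counts; only when (and where) the crunching happens differs.
        assert counts_on_demand == counts_at_startup == counts_from_file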

    As you can probably tell, I think it might be interesting to write an essay some time about these sorts of “edge cases” for process intensity. But my hope is that, for this book, just laying out the broad outlines of the idea will suffice.

  12. Grand Text Auto » Media Machines Says:

    […] -process manuscript — currently titled Expressive Processing — on the topic of process intensity. Interesting discussion ensued, I decided to post fur […]
