September 27, 2006

Gaming’s Rapidly Refreshing Theory

by Nick Montfort · 2:08 pm

A Review of Gaming: Essays on Algorithmic Culture
Alexander R. Galloway
University of Minnesota Press
2006
168 pp.
$17.95 paper / $54.00 cloth

The five essays that make up Galloway’s book Gaming are conversant and compelling, offering valuable perspectives on gaming and culture. They are appropriately concise and well-written, and they show Galloway’s sure command of theory and his solid understanding of games and how they are played.

To be sure, the essays take a high-level view of gaming and its place in culture; although Galloway cites and considers numerous titles, his book will be less useful for close critical encounters with particular games and more useful for understanding the shape and topology of gaming overall. There is another strange twist: the essays fail to inform one another on important points and perspectives, limiting the reach and success of the discussion. But this book does work very well in opening up new ways of thinking about gaming – for instance, in showing how new connections to film and art can be usefully drawn – and supplies good food for thought for scholars and students.

I’ll briefly mention some of the most intriguing things about the five essays in Gaming in order:

1. Gamic Action, Four Moments

Here, Galloway characterizes games as actions and presents two orthogonal axes along which games can be placed: one that varies between diegetic and non-diegetic, and one that varies between operator control and machine control. “Diegesis,” although known to me as a term from narratology, is applied here with an understanding of the differences between narratives and games and with good discussion of theories of game experience. These axes can be used (and are used) to classify entire games as being characteristic of a particular point. But these moments are better suited to classifying particular states and time-slices within games. A full-motion video sequence featuring characters in the game is diegetic and machine-controlled; my selecting whether or not the score will be displayed is non-diegetic and operator-controlled.

While this two-axis scheme is extremely useful, I find that adding a third axis clarifies the understanding of games considerably without causing too much of an explosion in dimensionality. Along this axis, the variation is between computation and playback. Your computer opponent’s move in Advance Wars would be seen in the original scheme as being on the diegetic and machine ends, and so would Shenmue’s introductory full-motion video sequence. The two-axis approach is good at making many important distinctions, but it results in an unfortunate conflation here. The “action” of the game is clearly different in the two cases because a more computationally intensive process is present in one of them, and this fundamentally affects the experience of the game. The distinction makes sense whether what is happening is diegetic or not; the computer could be either generating or playing back non-diegetic decorations for parts of the screen, for instance. And it even makes sense when the operator is mostly in control: is the operator responding to something that is being played back (Dragon’s Lair), to something that is being generated or simulated on the fly (Geometry Wars), or to something in between, something that mostly follows a standard, pre-scripted behavior but which might have a random or generated element of some sort (Space Invaders with the flying saucer)?
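
To make the three-axis idea concrete, here is a rough sketch in Python (my own shorthand, not Galloway’s, and treating each axis as a simple binary even though in-between points clearly exist) of how some of the moments mentioned above might be encoded:

    from dataclasses import dataclass

    @dataclass
    class GamicMoment:
        diegetic: bool             # inside the game world or not
        operator_controlled: bool  # driven by the player rather than the machine
        computed: bool             # generated/simulated on the fly rather than played back

    # Illustrative classifications of moments discussed above
    moments = {
        "Shenmue intro FMV":       GamicMoment(diegetic=True, operator_controlled=False, computed=False),
        "Advance Wars enemy turn": GamicMoment(diegetic=True, operator_controlled=False, computed=True),
        "Dragon's Lair response":  GamicMoment(diegetic=True, operator_controlled=True,  computed=False),
        "Geometry Wars play":      GamicMoment(diegetic=True, operator_controlled=True,  computed=True),
    }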

2. Origins of the First-Person Shooter

This essay draws some very nice connections between the subjective shot in film and the first-person shooter, and is persuasively illustrated. Galloway distinguishes the subjective camera as a special case of a more general “first person” view, showing how the photography in this case does more to enact emotion, perception, and mental state. Numerous antecedents of first-person shooters in film are found and discussed.

The odd thing here is that this essay almost completely overlooks an important conclusion of the previous one: that video games are actions rather than motion pictures. By looking only at the visual predecessors of first-person shooters, it uncovers little from the past that informs how these games work. Where are the target ranges, carnival shooting galleries, and clay pigeons that explicitly inspire several games – Duck Hunt, Hogan’s Alley, and, less directly, stand-and-shoot games with gun controllers such as the Time Crisis and House of the Dead series? I would guess that the shooting gallery and other game antecedents must also leave their mark on first-person shooters in terms of how these games function and how they are actions, and that they are part of the origin of these games and relate to the filmic ancestors discussed here.

3. Social Realism

Here Galloway makes an important distinction between the quest for verisimilitude and what is thought of in film and literature as realism. I found that the discussion of America’s Army lacked surprises and fell short when compared with the rest of the essay and with the rest of the book. Still, the essay is quite valuable for engaging the issue of realism and social realism in gaming, and it adds an important dimension to the discussion of abstraction versus representation found, for instance, in Mark J. Wolf’s article in the Video Game Theory Reader.

4. Allegories of Control

In this essay, Galloway treats power, control, and ideology as embodied by games, particularly Civilization III, and finds the more explicitly represented ideology of such games to be trappings or decoys when compared to the deeper function of the games. This essay is notable for having a strong focus on a particular game as well as theoretical heft; it would be a good selection for those teaching Civ III.

5. Countergaming

Here’s the really fun stuff: a look at subversive work by artists that challenges the game concept in various ways. The relationship of commercial gaming to artist-made mods and games (including work by Jodi and Tom Betts) is considered in terms of Peter Wollen’s seven theses on counter-cinema, which oppose Godard’s work to “old cinema.” While quite explicitly cinematically driven, this essay adapts cinematic ideas in a way that is sensitive to the nature of games.


Overall, Galloway makes valuable theoretical contributions in Gaming, drawing on earlier aesthetic and critical approaches rather than bludgeoning games with them. The book also covers a nice array of games from interesting perspectives. Game studies scholars, and those looking into games from other arts, will want to read this book. The main thing I would have liked would have been for Galloway to continue – there are plenty of ways to further develop the ideas here and to put the conclusions of one line of thinking into practice in another.

15 Responses to “Gaming’s Rapidly Refreshing Theory”


  1. Gilbert Bernstein Says:

    a comment (without having read the book)

    In your comment on essay 1, you suggest the addition of a third dimension, computation vs. playback. Being very tech nerdy about it, there’s quite a bit of computation involved in playback, and there are many things that might be seen as playback but are actually done with quite a bit of computation, such as (for a somewhat trivial example) procedural texture generation. We might classify this as non-diegetic, computational, and machine controlled, but we would also classify tactical behavior in Counterstrike bots in the same way. The problem is that from the perspective of the average player these are different; the procedural textures are just being played back. Maybe it would be better to clarify computation as being context-sensitive, by which I mean the computer is reacting to whatever has been classified as external stimuli (i.e., other computer agents, the physics, players, etc.).
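
    For instance, a toy sketch in Python of the sort of texture I mean (purely illustrative): a formula does all the per-pixel work, yet from the player’s seat the result is indistinguishable from a stored bitmap.

        import math

        def procedural_texture(width=64, height=64):
            """Generate a grayscale pattern from a formula rather than from stored image data."""
            texture = []
            for y in range(height):
                row = []
                for x in range(width):
                    # plenty of arithmetic per pixel, yet the output is a fixed image
                    value = math.sin(x * 0.3) * math.cos(y * 0.2) + math.sin((x + y) * 0.1)
                    row.append(int((value + 2.0) / 4.0 * 255))  # map [-2, 2] to [0, 255]
                texture.append(row)
            return texture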

  2. noah Says:

    I have the book, and it’s high on my agenda, but this summer didn’t allow much time for reading. Given I haven’t read the book in question yet, the most appropriate things for me to comment on here are Nick’s ideas.

    It sounds to me like the dimension of “computation vs. playback” is meant to express (somewhat as Gilbert suggests above) the amount of potential computational variability. It may take some computation to decompress some FMV and shove it on screen, but there’s no potential for variability there. On the other hand, AI-driven NPC behavior has much potential for variability.

    Of course, this still leaves us with things like procedural textures. They have potential for variation (just tweak the parameters) but usually this potential can’t be realized via player actions. I’m guessing this puts them somewhere in the middle of the computation/playback axis, but I’d be curious to hear Nick’s thoughts on this…

  3. nick Says:

    I’m just talking about what Chris Crawford calls process intensity, as Michael has discussed here before.

  4. noah Says:

    Well, that’s one of the problems with Crawford’s concept. The “crunch per bit” ratio doesn’t really tell you what that crunching is for, and it might just be, say, to project Doom II in stereo onto the walls of the Cave. It’s a lot more computationally intensive, and it’s pretty cool looking, but the behavior of the system is exactly the same. You could send “crunch per bit” through the roof by adding a bunch of transparency effects and real-time textures to Crawford’s example of the kitchen recipe program. Clearly, “crunch per bit” is not what Crawford’s really trying to talk about.

    Of course, I haven’t read everything he’s written. Has he expanded on the concept somewhere and dealt with this? I think the important issue is the intensity of the processes that determine the system behavior, but then maybe we need a longer term — like “behavioral process intensity.”

  5. nick Says:

    I don’t really see a problem with Crawford’s concept. One program prints a list of 1000 prime numbers that is stored in a file as data. Another program computes these 1000 prime numbers using the Sieve of Eratosthenes. The second one is clearly more process intensive. You can’t tell from the output that they are different, but you can tell from the program – and the fact that the second program will easily generalize to any number range, while the first will not.
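
    To make the contrast concrete, here is a minimal sketch of the two programs in Python (with the stored list abbreviated rather than written out in full):

        def primes_by_playback():
            """Low process intensity: the primes are stored as data and simply echoed."""
            stored_primes = [2, 3, 5, 7, 11, 13]  # ...and so on through 7919, the 1000th prime
            for p in stored_primes:
                print(p)

        def primes_by_computation(count=1000, limit=8000):
            """Higher process intensity: the primes are computed with the Sieve of Eratosthenes."""
            sieve = [True] * (limit + 1)
            sieve[0] = sieve[1] = False
            for n in range(2, int(limit ** 0.5) + 1):
                if sieve[n]:
                    for multiple in range(n * n, limit + 1, n):
                        sieve[multiple] = False
            for p in [n for n, is_prime in enumerate(sieve) if is_prime][:count]:
                print(p)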

    If some programs are doing “wasted” computation (computing something and throwing the results away), or have a lot of overhead because all programs that run on a particular platform have such overhead, they might look more process intensive than they are. So you can remove the wasted and baseline computation before figuring out where they lie along this axis, and the “crunch per bit” tells you what useful computation is happening.

    Distinguishing diegetic and extradiegetic texts seems like a harder problem in many cases, but I think it’s similarly useful to consider.

  6. mark Says:

    Even as far as the intensity of the processes that determine system behavior goes, I’d guess what really matters is some more abstract idea of “process” than literal code-crunching—rewriting code so that it does the same thing more efficiently than before would reduce the “crunch per bit” in a literal count of machine instructions per bit of data, but surely that doesn’t reduce the degree of procedurality in a meaningful way. I read Crawford as presenting it more as a rule of thumb, although it might be problematic even as that when dealing with AI-ish stuff, since AI techniques vary wildly in computational complexity, and not always in ways that have much to do with what a player would observe.
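
    (A toy illustration of my own: the two routines below behave identically from the outside, but the second executes far fewer instructions per call, and I don’t think it is any less procedural for that.)

        def sum_to_n_loop(n):
            """Sums 1..n with n additions."""
            total = 0
            for i in range(1, n + 1):
                total += i
            return total

        def sum_to_n_formula(n):
            """Same behavior, a handful of operations: the closed-form expression."""
            return n * (n + 1) // 2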

  7. Gilbert Bernstein Says:

    Maybe, as mark says, Crawford’s process intensity might work as a rule of thumb, but its grounding in computation is still problematic when viewed from a technical standpoint. In particular, the following, from the beginning of Crawford’s article, seems to indicate a misunderstanding of what modern computation is:

    “The difference between process and data is profound. Process is abstract where data is tangible. Data is direct, where process is indirect. The difference between data and process is the difference between numbers and equations, between facts and principles, between events and forces, between knowledge and ideas.

    Processing data is the very essence of what a computer does.”

    The key insight of the Von Neumann machine (i.e., every modern computer) is the “stored program,” or in other words that process is data; there is no fundamental difference between the two. Crawford also distinguishes between just moving data and processing it, but at the ISA level these are really both just instructions. What appears to be computation at one level of abstraction may actually involve quite a bit of data shunting, and likewise a request to move data can often be abstracted such that nothing is actually moved in some cases.
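
    One toy way to see this in a high-level language (Python here, purely for illustration): the “process” is just another piece of data that the machine can inspect or run.

        program_text = "print(sum(range(10)))"             # a process, stored as ordinary string data
        code_object = compile(program_text, "<stored>", "exec")
        print(code_object.co_code)                         # the same process, viewed as raw bytes
        exec(code_object)                                  # the same bytes, run as a process: prints 45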

    It seems to me that if process intensity is to be a useful concept, process and data need to be expressed in a way that doesn’t appeal to the low-level and highly abstract world of arithmetical operations and primitive data types.

  8. nick Says:

    Crawford is not misunderstanding modern computing. His point as I understand it does not relate to how programs and data are stored in a computer. The Von Neumann architecture simply stores the bits relating to both programs and data in the same way, which hardly eradicates all distinction between them. The Harvard architecture doesn’t do this, and recent innovations such as the NX (no execute) bit that AMD introduced bring an architectural distinction between programs and data back into modern computing.

    But none of this really matters when it comes to process intensity. I like considering instruction set architectures as much as the next girl, but process intensity isn’t directly about CPU utilization or anything more fine-grained. Printing 1000 prime numbers from a file is less process intensive than actually computing those numbers, and it’s useful to characterize creative computing along this dimension to understand what is going on in it.

  9. Gilbert Bernstein Says:

    I suppose it’s just that I don’t see what the significance of computing vs. storing the first 1000 prime numbers is, beyond a time/space complexity tradeoff. Perhaps it would matter if one approach didn’t scale, but regardless of which approach is used for the first 1000 numbers, the rest could still be computed with the sieve. The only difference between the two that is perceptible from the other side of the function abstraction is differing time/space requirements.

    I mean, any process on a computer ends up being a function, and mathematically a table (vector) of values is just a function too. (albeit with a finite domain, though this could easily represent a finite partitioning of an infinite domain) I can make a table lookup very complicated and look like a function, or I can make a function look very much like a table lookup. I can take a function which doesn’t use tables at all and pre-compute a common case table to speed up execution. Regardless, I’m left with, mathematically, the same function. It’s just written differently.
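
    A trivial sketch in Python of the kind of equivalence I mean (the particular function is arbitrary):

        def square_by_computation(n):
            return n * n

        SQUARE_TABLE = {n: n * n for n in range(1000)}     # a precomputed common-case table

        def square_by_lookup(n):
            # play back the table where it applies, compute otherwise
            return SQUARE_TABLE.get(n, n * n)

        # mathematically the same function, just written differently
        assert all(square_by_computation(n) == square_by_lookup(n) for n in range(2000))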

    Maybe the point is to talk about how the author conceives of and writes the process, rather than what the actual process going on is?

    P.S. Sorry if I’m being confrontational. I’m just trying to get a handle on the concept

    … I really do like the blog. =)

  10. noah Says:

    Nick, I think Crawford’s talking about process intensity as a way to understand why software proposals such as recipe programs and checkbook balancers for early PCs were bad ideas. They’re bad ideas because very little processing goes into what they do, into how they behave. But when Crawford formulates this as the “crunch per bit” ratio this focus on program behavior is lost.

    What I’m saying is that I think a recipe program, even if it used fancy rendering to achieve a high crunch per bit ratio, would still make sense to place “low” on the process intensity scale. We just need to stop thinking of process intensity as “crunch per bit” and instead as something like “crunch per behavior” or “crunch per decision”.

  11. nick Says:

    Gilbert, I like having conversations on the blog, as we all do here, so, thanks for the replies.

    The example I’m using comes from Edsger W. Dijkstra, specifically EWD277. The only thing Dijkstra says about the program that prints a stored list of 1000 prime numbers is that it is an “obvious version of the program” and that “We shall not pursue this version, as it would imply that the programmer hardly needed the machine at all.”

    One can find two programs, one using playback and one using computation, to do anything that can be computed. The differing time/space requirements may not even be perceptible; there may be no perceptible difference at all in terms of output or runs. But a video game or other program that does almost nothing but playback is one in which the programmer hardly needs the machine at all. It’s not just a matter of the author’s conception of the program. With such a program, the function implemented cannot be general and is less likely to be interesting.

    To relate this to one of Galloway’s axes, operator-machine: If we take a particular ten minutes of play of Grand Theft Auto: Vice City, capture video of it, and then play back the video, our video does exactly the same thing that the video game did in terms of output. We could say that having a video vs. having the game (manipulated in a certain way) is just a matter of time/space tradeoff. But that misses the point that we have an interactive computer program in one case – capable of dealing with input and doing something else as well as what it did – and a video that can do only one thing in the other case.

    Dragon’s Lair is interactive, but consists of the playback of clips of video; characterizing it along the operator-machine axis as well as the playback-computation one helps to differentiate it from, say, a turn-based strategy game like Advance Wars in which there are (similarly) limited points of control, but what happens is being generated at least in part rather than being played back.

    I’m quite interested in understanding new media at the level of computer systems and organization as well, but what I’m trying to get at here is not really at this level. It also isn’t just a question of how the author thought about things, although that is important, too. Instead, it’s meant to be an aspect of how the program functions, as Galloway’s two axes are.

  12. nick Says:

    Noah, I agree that there are ways to refine this metric to highlight the computation that is making contributions rather than just spinning the processor. At the same time, I don’t mean to map “computation” to “doing interesting stuff.”

    Perhaps in distinction to Crawford, I’m not introducing this playback-computation idea only to say that playback is bad. My comments have tended to snub playback a bit, but I don’t think games get monotonically better the less playback they have. Dragon’s Lair is an interesting game, and was innovative because of how it used more playback than other games of that time.

    Even the recipe program with fancy rendering has more potential (perhaps untapped) coded into it to render different scenes; whether this is an interesting capability, or whether this power is ever used, is another matter. The player’s actions (considered on the operator-machine axis) can likewise be “thrown away” or can cause no significant, high-level change, but it’s still useful to look at both axes.

  13. scott Says:

    I like recipe programs. Recipe databases are the most useful aspect of the contemporary Web from the perspective of the gourmand or amateur chef. PCs will only have truly advanced, however, when they sport actual process-intensive cooking capabilities, as per the Jetsons. At that point it will truly make sense to factor a high crunch per bit ratio into the value equation. And why not? We have robot vacuum cleaners.

  14. Walter Says:

    We just got done discussing Crawford’s article and a related one by Greg Costikyan (not his new one, but we did discuss that a bit) in Expressive Computation here at GATech, but I’ve been mulling over this question of what we’re really trying to get at with the term “process intensity” since reading this thread. I think the concept we’re really looking for is one of articulation (computational or representational articulation, articulation complexity), recalling the term “points of articulation” that people use to talk about, say, action figures. Instead of looking at processes and computation, which are a bit too general, we should be thinking about the ontology on which a program can operate to articulate meanings, and the complexity of articulation it affords. So while an FMV playback program can articulate color value, pixel placement, audio variables, etc., a videogame can potentially articulate on the same levels as well as on the level of individual game objects (there’s a big gap in fidelity, of course, when comparing Dragon’s Lair to a more typical videogame from the same time period). With word processors, the user can re-articulate the data already there to create new meanings (by cutting and pasting), whereas the typewriter generally has no capacity to re-articulate the user’s existing data at all.

  15. Grand Text Auto » Postmodern Programming Tackles Primes Says:

    […] long while ago Gilbert Bernstein discussed process intensity and programming with us in the comment section after my review of Alex Galloway’s Gaming. I mentioned one of […]
