January 23, 2008

EP 1.2: Authoring Processes

by Noah Wardrip-Fruin · 6:05 am

Figure 1.1: Authoring data and process.

A few paragraphs ago I said that the possibility of creating new simulated machines, of defining new computational behaviors, is the great opportunity that digital media offers. Seizing this opportunity requires a bit of a shift. It is common to think of the work of authoring, the work of creating media, as the work of writing text, composing images, arranging sound, and so on. But now one must think of authoring new processes as an important element of media creation.

In undertaking this shift, it may be helpful to think of the creation of a piece of digital media as being organized like figure 1.1. The work is made up of data and process, with a somewhat fuzzy line between them.¹ The data elements are mostly pre-created media (text, still images, video and animation, sound and music) and the sorts of things that are stored in spreadsheets (lists and tables of information, with varying degrees of structure).

The processes, on the other hand, are the working parts of the simulated machine. Some are dedicated to tasks with simple structures, such as displaying a series of video images on a screen. But many of digital media’s tasks are more complex in structure, requiring processes capable of performing in a range of different ways. Even a simple piece of digital media such as Pong (figure 1.2) has processes that define behaviors much more complex than showing a series of images in quick succession. The processes of Pong define and calculate simple rules of physics (how the ball bounces off the paddles and walls) and simple game rules (who receives each serve, how points are scored, and how winning is achieved) that, when well-tuned, can combine to create a compelling experience of gameplay — even in the face of remarkably primitive graphics.
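
To make this concrete for readers who code, here is a minimal sketch of rules like the ones just described. This is my own illustration in Python, with invented names and deliberately simplified rules, not the actual Pong implementation:

```python
# A sketch of Pong-like processes: simple physics and simple game rules.
# All names and constants here are invented for illustration.

COURT_WIDTH = 150
COURT_HEIGHT = 100
PADDLE_HALF_HEIGHT = 8
WINNING_SCORE = 11

def step(ball_x, ball_y, vel_x, vel_y, left_paddle_y, right_paddle_y, scores):
    """Advance play by one tick: move the ball, bounce it, update scores."""
    ball_x += vel_x
    ball_y += vel_y

    # Simple physics: the ball reflects off the top and bottom walls.
    if ball_y <= 0 or ball_y >= COURT_HEIGHT:
        vel_y = -vel_y

    # Simple physics and game rules at the left and right edges: the ball
    # reflects off a well-placed paddle; otherwise the other player scores
    # and the ball is re-served from the center of the court.
    if ball_x <= 0:
        if abs(ball_y - left_paddle_y) <= PADDLE_HALF_HEIGHT:
            vel_x = -vel_x
        else:
            scores["right"] += 1
            ball_x, ball_y = COURT_WIDTH // 2, COURT_HEIGHT // 2
    elif ball_x >= COURT_WIDTH:
        if abs(ball_y - right_paddle_y) <= PADDLE_HALF_HEIGHT:
            vel_x = -vel_x
        else:
            scores["left"] += 1
            ball_x, ball_y = COURT_WIDTH // 2, COURT_HEIGHT // 2

    return ball_x, ball_y, vel_x, vel_y

def winner(scores):
    """Simple game rule: the first player to reach the winning score wins."""
    for player, points in scores.items():
        if points >= WINNING_SCORE:
            return player
    return None
```

Nothing in this sketch is a stored image or a canned animation sequence; each moment of play is computed, while play is underway, by rules of this kind.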

Figure 1.2: The iconic early video game Pong gives players a simple goal: to use their simulated paddles to knock back a simulated ball — keeping it in play until one player misses, causing the other player to score.

Of course, the idea of creating media through the authoring of novel processes is not new. Tristan Tzara’s Dada cut-up technique was presented, in the wake of World War One, as a process for turning a chosen newspaper article into a poem. On a more technological level, the pioneers of early cinema had to develop novel processes (embodied in physical machinery) to capture and display their sets of image data. And, on a longer-term level, the creation of board and card games has always primarily been the development of process definitions, embodied in game rules, that determine how play moves forward.

In important ways the non-computational media processes mentioned above are like the processes of digital media: they are defined previously, but (at least in part) carried out during the time of audience experience. This is true as Tzara pulls a paper scrap from his sack, as the zoetrope image flickers, as the poker hand goes through another round of betting, and as the image of a Pong ball bounces off the image of a Pong paddle. The processes of digital media are, however, separated from non-computational media processes by their potential numerousness, repetition, and complexity. For example, we might play a game of tennis using the rules of Pong — they’re simpler than the normal rules of tennis. But we wouldn’t want to play Pong as a board game, having to hand-execute all the processes involved even in its (extremely simplified) modeling of physics. It is the computer’s ability to carry out processes of significant magnitude (at least in part during the time of audience experience) that enables digital media that create a wide variety of possible experiences, respond to context, evolve over time, and interact with audiences.

Process intensity

Returning to data and process, we might think of Pong and many other iconic computer games (e.g., Tetris) as being authored almost entirely in terms of processes, rather than data.² An “e-book,” on the other hand, might be just the opposite — a digital media artifact authored almost completely by the arrangement of pre-created text and image data. In an influential 1987 article, game designer and digital media theorist Chris Crawford coined the phrase “process intensity” to describe a work’s balance between process and data (what he called its “crunch per bits ratio”).
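
To keep that ratio from staying abstract, consider a toy sketch (my own, in Python; Crawford’s article contains no code) of one small task done both ways. A data-intensive version of a Tetris-style piece stores every orientation; a process-intensive version stores one orientation and computes the rest:

```python
# A toy illustration of the data/process balance (my example, not
# Crawford's): four orientations of a Tetris-style piece, given as
# (column, row) cells inside a 3x3 box.

# Data-intensive route: every orientation is stored, none is computed.
STORED_ORIENTATIONS = [
    [(0, 0), (0, 1), (0, 2), (1, 2)],
    [(2, 0), (1, 0), (0, 0), (0, 1)],
    [(2, 2), (2, 1), (2, 0), (1, 0)],
    [(0, 2), (1, 2), (2, 2), (2, 1)],
]

# Process-intensive route: one orientation is stored; the rest are computed.
BASE_CELLS = [(0, 0), (0, 1), (0, 2), (1, 2)]

def rotate_cw(cells, size=3):
    """Rotate grid cells 90 degrees clockwise within a size-by-size box."""
    return [(size - 1 - y, x) for (x, y) in cells]

def orientation(k):
    """Compute the k-th orientation of the piece on demand."""
    cells = BASE_CELLS
    for _ in range(k % 4):
        cells = rotate_cw(cells)
    return cells

# The two routes agree; what differs is where the work happens.
for k in range(4):
    assert sorted(orientation(k)) == sorted(STORED_ORIENTATIONS[k])
```

On this toy scale the difference is trivial; at the scale of a whole work, the choice of how much to store and how much to compute shapes what the work can do.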

Crawford points out that, in early discussions of personal computers, certain genres of software failed despite widespread belief that they would be attractive — specifically, he cites checkbook balancing software and kitchen recipe software. He argues that these genres failed for the same reason that the 1980s computer game hit Dragon’s Lair (which played sequences of canned animation, rather than dynamically drawing graphics to the screen) was a dead end, rather than the first example of a new game genre. In all these cases, the software is designed with low process intensity. In fact, Crawford goes so far as to argue that process intensity “provides us with a useful criterion for evaluating the value of any piece of software.”

In Crawford’s article, games other than Dragon’s Lair come out quite positively. He writes, “games in general boast the highest crunch per bit ratios in the computing world.” But Crawford wrote in 1987. Almost two decades later, game designer and theorist Greg Costikyan gave a keynote address at the 2006 ACM SIGGRAPH Sandbox Symposium titled “Designing Games for Process Intensity” — reaching a rather different conclusion. As Costikyan writes in a blog post from the same year:

Today, 80+% of the man-hours (and cost) for a game is in the creation of art assets. In other words, we’ve spent the last three decades focusing on data intensity instead of process intensity.

In fact, the shift has been so profound as to call for a rethinking of the very concept of process intensity. The games cited by Crawford — such as Flight Simulator and Crawford’s own game of political struggle, Balance of Power — use much of their processing toward the game’s novel behavior. However, in the time between Crawford’s and Costikyan’s statements the graphics-led data-intensive shift in computer games has not only increased the amount of effort placed in creating static art assets. It has also driven an increasing share of processing toward greatly improved visuals for remarkably stagnant behavior. While this represents an increase in processing, it’s the same increase that could be achieved by taking a kitchen recipe program and adding live 3D extrusion of the typeface, with the letters coated in simulated chrome and glinting with the latest lighting effects. Executing these computationally expensive graphical effects would send the recipe program’s process intensity through the roof … while running completely counter to Crawford’s ideas.

This kind of distinction — between processing used for graphics and processing used for behavior — is not only of interest to game developers. It is also a distinction understood by players. For example, as Jesper Juul (2005) and others have pointed out, it is not uncommon for players of PC games to choose a lower level of graphical rendering (e.g., in order to increase the responsiveness of the interface or reduce the visual weight of elements not important to the gameplay). Players who choose lower levels of graphical processing are not considered to be playing significantly differently from players who choose higher levels. On the other hand, some games also allow players to vary the level of artificial intelligence processing employed by the system. This changes the game’s behavior by, for example, making computer-controlled opponents easier to defeat (e.g., in computer chess or a first-person shooter). Players view this type of change, a change in behavior-oriented processing, as a much more significant change to gameplay.

Players have also “voted with their feet” in favor of behavioral processing. While many games pursue increasingly photorealistic graphical rendering, Will Wright and his team at Maxis designed The Sims around low-end graphics and comparatively complex behavioral landscapes. The game’s publisher, Electronic Arts, at first resisted the title — in part because its process-intensive design had created innovative, unproven gameplay focused on managing the lives of simulated suburban characters. But when The Sims was released it became the best-selling PC game of all time. It accomplished this in part by reaching a significantly wider audience than the “hard core” (stereotypically, young males) to whom most computer games seem to cater. However, despite the success of The Sims and the fact that game company executives regularly express the desire to reach wider demographics, innovation at the level of behavior-oriented processes is still largely resisted within the game industries, viewed as a risky alternative to the tried-and-true approach of combining flashier graphics with the same gameplay behaviors as previous data-intensive hits.

This book’s focus is on what systems do — what they enact, how they behave — rather than what the surface output looks like. This could be characterized as an interest in “behavioral process intensity” of the sort practiced by digital media designers like Wright (which is probably what Crawford meant from the outset). As is likely already apparent, this will bring a significant amount of “artificial intelligence” into the discussion.

Expressive AI

The problem with artificial intelligence (or “AI”) is that, in trying to capture the structure of the world or the way reasoning works, it always captures someone’s idea of how things are, rather than any transcendental truth. Of course, this isn’t a problem in all contexts, but it is when trying to understand human intelligence (the overlap of AI and cognitive science) or when trying to create a software system that acts intelligently in a real-world context (most other uses of AI). This, in part, is why the most prominent AI efforts of recent years have been statistically driven approaches to very focused problems (e.g., Google’s search results, Amazon’s recommendation system) rather than hand-authored approaches to large problems (e.g., general-purpose reasoning).

However, when it comes to media, the goals are no longer general-purpose. Rather, the authoring of media is precisely the presentation of “someone’s idea” of something. For fiction, it’s someone’s idea of people, of stories, of language, of what it means to be alive.

Given this, if we look at the history of artificial intelligence from the perspective of media, we see something other than a sad collection of failed attempts at objectivity and universality. Rather, we see a rich set of tools for expressing and making operational particular authorial visions. This is the shift marked by Michael Mateas (an AI researcher, artist, and game developer) in naming his practice “Expressive AI.”³ As Mateas puts it:

Expressive AI views a system as a performance of the author’s ideas. The system is both a messenger for and a message from the author. (Mateas, 2002, 63)

Of course, from the point of view of digital media (rather than AI) Mateas is saying something rather everyday. For example, Ted Nelson, in a 1970 article later reprinted in his seminal book Computer Lib / Dream Machines (1974), described “hyper-media” computational systems that would embody and perform authorial ideas — more than three decades before Mateas. Similarly, designers of computer games clearly author processes to embody and perform their ideas for audience experience. But both hypermedia and computer game designers have been content, largely, with data-intensive approaches, while AI has traditionally developed process-intensive solutions. And it is Mateas’s approach of combining AI’s process intensity with the authorial perspective of digital media and games that has allowed him to co-author groundbreaking digital fictions such as Terminal Time (perhaps the only successful story generation system yet created) and Façade (the first true interactive drama) — both of which will be discussed further in coming pages.

For this book’s purposes, of course, the important issue is not whether any particular technique arises from, or connects to, traditions in AI. Rather, it is the potential for using process-intensive techniques to express authorial perspectives through behavior. This brings me to one of the two meanings for “expressive processing” in this book: a broadening of Mateas’s term, beyond AI and into the processing that enables digital media in general.

Notes

¹ Though the concepts of “data” and “process” seem clear enough as ideas, in practice any element of a system may be a mixture of the two. For example, the text handled by a web application is generally thought of as data. However, this textual data is often a mixture of plain text and markup language tags (from an early version of HTML or an XML-defined markup language). These markup language tags, in turn, may define or invoke processes, either on the server or in the web browsers of the site’s audience. Luckily, this sort of intermingling (and more complex cases, as when a process is used to generate data that might as easily have been stored in the system initially) does little to diminish the basic usefulness of the concepts.
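
A minimal sketch of that intermingling (in Python, with an invented tag syntax rather than actual HTML or XML): the stored text below reads as data, yet its tags invoke processes at rendering time.

```python
import datetime
import re

# A sketch of the footnote's point, with an invented tag syntax (not a
# real markup standard): stored "data" whose tags invoke processes when
# the page is rendered.

PAGE_DATA = "Welcome back. Today is <today/>, and this page was <origin/>."

# Each tag names a small process to run at rendering time.
PROCESSES = {
    "today": lambda: datetime.date.today().strftime("%B %d, %Y"),
    "origin": lambda: "assembled on the fly",
}

def render(text):
    """Replace each <name/> tag with the output of the process it names."""
    return re.sub(r"<(\w+)/>", lambda match: PROCESSES[match.group(1)](), text)

print(render(PAGE_DATA))
```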

² It is perhaps worth clarifying that my argument here is not that authoring digital media requires authoring both data and processes. The data and process elements of a work of digital media may be newly authored, selected from found sources (e.g., found footage is still data, and the vendor-supplied behaviors in an authoring tool such as Flash are still processes), or even largely undefined at the time of authoring (and instead defined by processes executed at the time of audience experience). In any case, they will rest on a foundation of process and data that makes up the platform(s) on which the work operates.

³ I pick out Mateas because of his particular interest in fiction and games. But similar shifts have been undertaken by a number of other prominent young researchers with AI backgrounds, such as Phoebe Sengers and Warren Sack.


32 Responses to “EP 1.2: Authoring Processes”


  1. nick Says:

    Perhaps it’s worth noting that this “vote” in favor of The Sims and process intensity was just a follow-up to a previous vote in favor of a data-heavy game that had little in the way of behavioral processing: The Sims unseated Myst as best-selling PC game.

  2. noah Says:

    Right, I don’t mean to suggest that process-intensive experiences are the only ones that players enjoy — but rather that such experiences actually are valued, when many seem to assume that only better graphics matter. Adding a note might be a way to clarify what I’m after.

    I also don’t want this to seem like a competition between graphics and behavior. That said, however, it works nicely with the flow of the discussion here for The Sims to have unseated such a graphics-focused game. Another good reason for the note…

  3. Terry Says:

    I’m not sure that I would use Tetris as an example of an early computer game.

  4. Lord Yo Says:

    I second Terry’s opinion. It is a good example of a game with simple rules, but certainly not an early game.

  5. Lord Yo Says:

    If rendering graphics is a process, rendering behavior that is expressed graphically is a meta-process. Of course it doesn’t make sense to painstakingly distinguish between discrete meta-levels, as each compiling / chunking / condensing step in terms of programming creates its own level (starting at the binary level) – in other words, we don’t need to map out what those levels are in detail. However, it might make sense to become aware of the rudimentary distance between those levels – which you are touching upon in this subchapter, “process intensity”.

  6. nick Says:

    A lot happened between 1972 and 1985, but I’m sure contemporary game developers consider Tetris an early game – I think even something like System Shock could count as an early game by now, in many discussions.

  7. Terry Says:

    @Nick

    I think this is like calling Murnau’s Sunrise an early movie and then calling 2001 an early movie in the same breath. I don’t think it’s useful to lump 13 years’ worth of gaming into one “early” category. I think another example of an early game in this statement, which is essentially an aside, would be less jarring for the reader.

  8. noah Says:

    How about a word like “iconic” instead of “early”? I think that gets at what I mean about these games, without opening the door to people wondering, “Does he actually know when Tetris first appeared?”

  9. Barry Says:

    I would query your characterisation of The Sims here. Sure, it can run on moderate hardware, but the sheer number of art assets created for the game is phenomenal. I can only guess at how many artists work on each iteration of the Sims, but that art department is there for a reason? A lot of the game art assets created have no real procedural function (we decorate our houses and play dress up, as well as play with the lives of our Sims) and I wonder if the understanding of graphics intensity here is a little crude (number of polys on screen at any one time) rather than recognising the way in which The Sims is also a visual toybox that has followed industry norms, but by ramping up graphic volume rather than intensity. The extent to which the in-game camera function in later iterations has been used for machinima making, rather than simply recording the ‘lives’ of the Sims, might indicate that the visual possibilities offered by the game are important to some players? Those art assets might not put a heavy processing load on a PC (although my memory of The Sims 2 was that it ran like a donkey on my fairly high spec PC) but they sure eat up a lot of space (along with all the sound files) on my hard drive, and in that sense at least this is an extremely data-heavy title. I understand the focus of your study, but isn’t the player of The Sims assumed (by Wright, EA and originally Maxis) to be deeply concerned with ‘what the surface output looks like’? That the surface wasn’t filled with mud-brown space marines and blood splatter particle effects might account for some of the other drivers you allude to as responsible for its commercial success? There are two uses of ‘in part’ close together in this paragraph that might also indicate the need for a note that at least flags the extent to which The Sims is not an extreme alternative to graphically intensive games?

  10. Terry Says:

    I like iconic.

  11. Chris Lewis Says:

    I think Barry is right here. I don’t know if Wright freed himself from spending time on photorealistic graphics in order to spend man-hours on the gameplay instead? The effort to put all the art assets in The Sims must have been pretty close to what it was taking to produce more realistic games with less scope. Looking at the PC games that were around in 2000, like Deus Ex and Crimson Skies, I’m not able to find a particularly strong example of something that must have taken much more time than The Sims. You’d need to have an actual citation of Wright stating this was the case for me to fully accept it.

    The paragraph also seems to be trying to make the point that the low-end graphics appealed to the “wider audience”, which is something I would agree with. Obviously there was no danger of The Sims skirting around the Uncanny Valley, but it was simple enough to allow players to project their own ideas and personalities onto the Sims. By purposefully avoiding geek stereotypes of orcs/elves/heaving breasts/space marines, a wider audience was definitely enticed. However, this doesn’t reach the paragraph’s eventual conclusion that innovating on behavioural systems is as valid a path to a hit game as using flashier graphics on standard mechanics.

  12. noah Says:

    Hmmm. I think these are good points, but they’re not on exactly the topic I’m trying to address. I’m intertwining two points in this paragraph — and maybe neither is coming through clearly.

    The simple point is that the original version of The Sims intentionally used simple graphics, and comparatively low-end graphical processing, rather than the latest high-end lighting and so on … but this did not stand in the way of its success.

    The more complex point (which is also more important to my argument) is that the original version of The Sims used an innovative model of character behavior that creates an emergent complexity. This was seen as risky by the industry (which tends to re-skin proven models, which are also generally simpler models). But this innovative behavioral model, and its emergent complexity, seems to have been key to its success.

    Of course, the problem I have making these points may lie not in the fact that they are intertwined, but in the fact that they follow Crawford and Costikyan talking about huge amounts of data. It’s definitely the case that The Sims has a huge asset library. But it combines these assets with behavioral processing that is both comparatively intense and comparatively innovative, when stacked up against other contemporary games.

    So, this leads me to two questions. First, do you buy the clarified points above? Second, if so, any ideas how to make them clearer in the main text? (For example, should I consider a footnote, should I revise this paragraph, should I restructure this whole section?)

  13. noah Says:

    You’ve hit upon a sticky question here. In some earlier work I tried to distinguish between different forms and roles of computation in a finer-grained way. But I felt that I was getting caught up in defining the categories too much, and for that reason spending less time on the analysis that the categories were meant to support. It’s a tricky balance, and one I’ll probably need to revisit in the future.

  14. Chris Lewis Says:

    Noah,
    I agree with both the clarified points you make, but I probably would say that the first point is too weak to include. The industry has not always been focused on graphics, and there have always been games that have eschewed the graphical race in order to spend more time focused on the core game. Myst is a wonderful example of going too far down the graphical route, as are a lot of the early CD-ROM games. The industry as a whole isn’t always on the bleeding edge, and does recognise that you don’t have to have the latest whizzy graphics. I don’t think Will Wright would have found this to be a hard sell; none of the Sim games were particularly complex graphically.

    I definitely think that the second point is a strong one, and you’ve clarified it well. The emergent complexity is what drove the game, but it would be very hard for people to have properly visualised the challenge (I can see Wright now: “So you’re trying to drink an espresso to get awake, while trying to eat cereal and you’ll need to take the trash out before you go to work. Trust me, it’ll be fun!”)

  15. Barry Says:

    I suppose I am still a little uneasy about the implications of this example. If, to use the analogy above, this were to be a ‘vote’, then it isn’t a straightforward vote between procedural intensity and intensity of graphics. And Greg Costikyan’s 80/20 split in development expense might even remain true of a title such as The Sims. I can see how this argument works with Introversion’s Darwinia (but that title hasn’t got the sales of The Sims), or how experience of the Wii has foregrounded the short-sightedness of relying on a technologically dependent ramping up of rendering effects and detail levels (rather than gameplay novelty – but the Wii isn’t exactly home to much procedural intensity either) to attract new consumers, but it wasn’t a question of presenting consumers and players of The Sims with either intensity of graphics or procedural intensity. Rather, The Sims added interesting behaviours inside a game which wasn’t markedly inferior in what was on screen to other titles of its time. Until Company of Heroes I would have expected less intensity of graphics in an RTS than in an FPS, because there is a balance, always, between more assets and the level of realisation of individual assets, both in the costs of making each asset and its processing cost when on screen. So my key issue would be that you claim players are voting with their feet for procedural intensity, using The Sims as the example, but there are so many other possible reasons for the success of The Sims – particularly the fact that this was a different kind of content that had appeal to a different demographic – that it seems to weaken the claim of what players are ‘voting’ for. Your clarification is certainly helpful, but the statement that “this innovative behavioral model, and its emergent complexity, seems to have been key to its success” turns on ‘seems’, while I can imagine the reasoning behind those executives unwilling to greenlight procedurally intensive projects is informed by the same lack of certainty that this was what was responsible for its sales.

  16. noah Says:

    Barry, you’re absolutely correct about the lack of certainty, but I’m not sure we can get much further than “seems” in matters of this sort. Maxis isn’t going to do a controlled experiment, releasing four versions of The Sims — with high/low levels of graphical fidelity and behavioral complexity. Perhaps the only thing we can conclude with certainty is that complex behavioral processing isn’t so repellent to audiences that they refused to buy The Sims. But I’m personally convinced, even though I don’t think it can be proven, that the gameplay of The Sims is a major factor in its success. That gameplay is founded on comparatively innovative and intense behavioral processing, compared with other games of the period. This is the point I’m trying to make, which I clearly need to keep thinking about how to clarify in the main text.

  17. josh g. Says:

    Somewhere here I’m getting lost in mixed definitions, specifically what we’re measuring the intensity of. This paragraph starts off by introducing a rethinking of the concept, but I’m already juggling two different concepts: process intensity measured by how the resulting media behaves, and process intensity measured by how development time is spent.

    Perhaps Costikyan’s intention was to say that the two are linked, but I don’t think that’s a given (e.g., the discussion below on The Sims having had plenty of art asset production itself).

    I’m echoing what’s been said below a little, but I wanted to drop a comment further up here since this is where it started to seem a bit confused to me. Any chance there’s a quote from Costikyan’s article that doesn’t bring relative production time into the discussion? I know it was central to what he was saying, but it doesn’t seem to fit into your main point.

  18. noah Says:

    I think you’re right. Greg is making a somewhat different point from Chris; then I make a somewhat different one in turn, and after that I try to take the discussion to a further point. But the current version of the chapter doesn’t make the shifts clear.

    So I should revise the chapter’s text. Perhaps with clearer markers at the borders between ideas — or maybe by not flying over this territory at such speed. I guess my inclination is to try the markers approach first, because I don’t want this to turn into a chapter about process intensity. Or, as you say, it might even be better to choose a quote from Greg’s treatment that doesn’t introduce his shift in the concept, or I could not quote him at all in the main text. Then I’d be juggling fewer concepts in this short section (and I could still talk about Greg’s approach in a footnote).

  19. sol gaitán Says:

    Seth Schiesel’s recent article in the NY Times says that the list of the 10 top-selling console games of 2007 released recently by the market research company NPD Group, “highlights the soaring popularity of mass-market franchises like Guitar Hero and the Wii at the expense of critically acclaimed projects aimed at the same young-male audience the industry has relied on for years. (As recently as 2006, sales charts were covered with single-player diversions and sports games.)”

  20. noah Says:

    Sol, in response to your comment above: good point. I’m not writing a book primarily about the public’s taste in games, but it seems clear from Schiesel’s piece and other evidence that what’s expanding the game market is an expansion of the models and themes of play. The Sims, with its complex landscape for crafting everyday behavior of human characters, is just one direction for expansion. Games like Guitar Hero and systems like the Wii represent other directions. (Sorry I can’t respond above, but currently CommentPress only supports three levels of comments.)

  21. Randall Couch Says:

    Your analogy of adding glitzy volume and surface effects to recipe typography may be improvable. To add such effects would not only be a wasteful or trivial use of processing resources (which I take to be your point), but it would not have a neutral bearing on the usability of the recipe program; it would certainly degrade it (see Tufte to start with). Thus you may be starting another hare you don’t really want to chase here by your choice of illustration.

    You also imply here that improved graphics quality in e-games is a trivial component of the user experience, compared to process complexity, prior to stating and supporting that argument in graf 13ff.

  22. Randall Couch Says:

    Here and in graf 14 you imply that graphics v. behavioral processing is a zero-sum tradeoff in which some players vote for behavioral. Does this support a theoretical hierarchy of processing sophistication over experiential quality?

    This is partly a question of style and means, and partly one of the limits of the technology. Absent any technical constraints, some tasks are still best facilitated by simplifying and abstracting (the London Underground map); thus some games would logically feel more successful with such a design (like many age-old table games, e.g., go). Others might be most successful when they most closely approached the sci-fi grail of a completely immersive alternative reality whose texture was not distinguishable from everyday reality. (Paintball in a blasted Sarajevo with bombs falling). Each involves design choices.

    Given technical constraints, behavioral and experiential processing may not both be optimizable. In that case, the second type of game I mention may yield a less satisfying user experience than the simpler game, because the shortcomings intrude more into the psychology of play.

    My point is simply that the relationship of the game’s created experience (for which “graphics” is a crude proxy) to the available behavior novelty and complexity seems much more complicated than the binary you draw; while companies may be luddite about how they want to apportion development effort, I’d expect e-games twenty years from now to offer enormously greater sophistication and variety in *both* behavioral and experiential processing.

    I’m still waiting for Huxley’s wonderful pun-machine the scent-organ, which manipulates emotion via sequences of odors. What concerts, what games, we could play.

  23. noah Says:

    Thanks for pointing that out. Perhaps it would be better to talk about the program generating handmade paper textures on the fly, along with simulated handling and age marks, just as each card is called into view, or something similar? That might also degrade usability, but not as harshly, and fit in with the “let’s make it more real” aesthetic we see in some places.

    I can imagine making a change like that, if an example of this sort survives the re-factoring of this section that seems necessary. But I may just start the section over from scratch.

  24. noah Says:

    Randall, I think we’re basically in agreement. The problem is in the text in this section.

    What this section is trying to argue against is the idea that “Games are all about graphics.” As people commenting on the next paragraph have pointed out, given the recent successes of systems like the Wii and games like Guitar Hero, there just aren’t many people around arguing that games are all about graphics anymore. (Even if, as Greg C points out, much of the production effort still is focused on graphics.)

    I need to re-cast this section, making the discussion of process intensity more nuanced and focusing on how processes are key to defining gameplay — rather than something in a tug-of-war with graphics. The book is much more about the possibilities of new forms of gameplay and new fictional experiences, some of which are actually enabled by advances in graphics (as when I talk, in a later chapter, about the Improv project that was underway when I was at NYU).

  25. Mark Marino Says:

    I’m surprised you didn’t use the word database here, especially since I don’t usually associate “spreadsheets” with music, video, and animation. Was that choice meant to avoid confusion or to be precise?

  26. Mark Marino Says:

    Really the creation of all games, right? Oral games (“The Minister’s Cat”), school-yard games, et cetera. Or is there some importance being placed on having an author to attribute the work to?

  27. noah Says:

    Do you read this sentence to say that music, video, and animation are the sorts of things found in spreadsheets? If so, I may really need to clarify it!

  28. noah Says:

    This section is about authoring processes — the act of it, not the later attribution of it.

  29. Mark M. Says:

    Ah, I see, I was missing an “and” there. That seems to suffice for the other readers ;).

  30. Mark M. Says:

    I guess what I was saying is that all games seem to fall into this description. However, your point may be most clearly illustrated by the kinds of contemporary symbolic games that we attribute to someone rather than the “folk” games that we have difficulty attributing to an “author.”

    Did someone make up “Twenty Questions” or did it just evolve over time from communal play? If the latter, then it is not the best model for authoring a process in the sense of a single author — though it certainly fits the parameters of what you are describing.

  31. Matt Barton Says:

    I wonder if it’s fair to say that Tetris was an “early game” to the Russians. I don’t know how to compare our state of the art with theirs at the time, but I’m curious.

  32. Consideraciones Previas « Tecnologías Literarias Says:

    […] by Juan B. Gutiérrez (2007, 12) and to the magnificent blog of Noah Wardrip-Fruin, Grand Text Auto, in its entry of January 23, 2008 titled Authoring Processes, specifically in the section “Expressive AI”. As for the second […]
