February 26, 2005

Beyond Procedural Literacy

by Noah Wardrip-Fruin · 12:08 am

I didn’t have a chance to comment on Michael’s Why Johnny Must Program post back in January. I started to write a comment earlier this evening, but then realized I should just make a new top-level post. In this post I’m going to agree with Michael about procedural literacy, disagree with him on the same point, argue for the unavoidable synthesis of my two opposing points of view, and then make the case that we need another layer on top.

To put that a bit more clearly, the short version of my argument is that procedural literacy is only one of three types of education around these issues that we should be offering students of digital media (students focused on scholarship and/or creation of computational media).

But before I get into all that, I want to begin a bit oddly, by offering a response to Scott’s first comment, a response that is a slightly different version of Mark’s.

Michael, in the draft paper linked from his post, wrote:

Without an understanding of how code operates as an expressive medium, new media scholars are forced to treat the operation of the media artifacts they study as a black box, losing the crucial relationship between authorship, code, and audience reception. Code is a kind of writing; just as literary scholars wouldn’t dream of reading translated glosses of work instead of reading the full work in its original language, so new media scholars must read code, not just at the simple level of primitive operations and control flow, but at the level of the procedural rhetoric, aesthetics and poetics encoded in a work.

Scott wrote:

While a game developer who sets out to write a game without learning how to program is much like an author who sets out to write a novel without bothering to learn to read and write, I think that a games scholar who does not know how to program is more on the level of an English professor who claims that she is able to analyze a sonnet without having studied theoretical linguistics.

Mark responded:

Being fluent with programming languages and tools isn’t quite the same as having an extensive background in the underlying theory. I’m drawing a blank trying to come up with a good analogy in the Sonnet case (there may not be one?), but I’d compare it to someone studying theater (the good old-fashioned non-interactive drama). It’s possible to study theater merely by watching a lot of plays… However, it seems like it would be useful to be able to understand what goes on in putting on a play—the whole mess of stagehands and props and lighting and scripts and casting that goes into the final product.

I’d like to offer a similar response, but one that shifts our attention to a more obviously technical, screen-based medium. Procedural literacy is not to software/game analysis as theoretical linguistics is to sonnet analysis. Rather, it’s more like what knowledge of film production (lighting, editing, directing, continuity management) is to film analysis. If you have no clue about the authoring process for the medium’s primary component (sequenced shots for film, procedures for software), you’ll be limited to studying things like audience perception (which isn’t uninteresting, but shouldn’t mark the limit of inquiry), and anything you try to say about the work itself will be prone to embarrassing gaffes.

That said, Scott’s point is well taken that we can’t wait to analyze Halo 2 until it is no longer a commercial product and is perhaps eventually released open source. While there may be interesting ways to read code, we don’t often get our hands on the code, and what we’re really talking about at this point is reading procedures. That’s why I say it’s something like knowing about how film editing is done, as a process, rather than being able to actually see what did and didn’t end up on the cutting room floor. Michael points to the same thing when he mentions Kurt Squire’s procedurally-focused reading of Viewtiful Joe. (I assume Michael is referencing Educating the Fighter, which breaks down processes with no indication of access to the code.) Michael may use the term “code” – but when he expands the statement he refers to “the procedural rhetoric, aesthetics and poetics encoded in a work.”

(By the way, does anyone reading this know of a single example of interesting code-level analysis for code not written to be analyzed? I’ve heard the stories that people have read ideological assumptions in old AI systems from the fact that data structures were given names like “beliefs” when they could have been called “myTable” — but I’m not sure I ever got a reference for one of these readings. I think the only code I know of that’s been read critically is the code for software art and “codework.”)

Of course, there’s also a potential being spoken of in Michael’s paper. What happens if we graduate a generation of digital media scholars who (1) know digital media has a history and (2) are procedurally literate? Perhaps then we might have people who are able to do things like interpret the creation of new programming languages — from influential languages like LISP to (thus far) single-work languages like ABL. They might be able to interpret the histories from which these languages arise, the assumptions they embed, the problems they were designed to express elegantly, and the blind spots they encode. These scholars could also help us understand what constitutes an interesting or virtuosic performance with code. (Why is writing a single-room Inform piece notable? Perhaps because it runs against the fundamental design pattern the environment was created to support. Next question: why is implementing a chess program in Inform something anyone would do? And why would I ask that?)

As becomes clear, I want to have my cake and eat it too. I want to agree with Michael that “it is not the details of any particular programming language that matters, but rather the more general tropes and structures that cut across all languages.” I want to agree that our goal is to educate scholars and artists who can, by being generally procedurally literate, understand the “procedural rhetoric, aesthetics and poetics” in a work even without access to the code or details of the development environment. I also want to disagree strongly. I think there are important parts of digital culture, and particularly digital media, that we can only understand by knowing the details of particular languages and environments, such as those discussed in the paragraph above. For educating students, I think both are important goals — general procedural literacy and nitty-gritty familiarity with the tools and circumstances of writing and running code. And luckily one can’t actually achieve understanding of either without the other. I’d say, rather than be embarrassed that students have to learn the details of particular languages in the process of becoming procedurally literate, we should celebrate the fact that they’re learning the first part of what will be another important body of knowledge for them — both as interpreters and creators of digital work. I know that Michael understands this (for one thing, it’s apparent in the section discussing Java and Processing, what they require, and their communities of practice) but I wish it were more foregrounded in the discussion. I’d like to see it promoted to a top-level goal, on the level of procedural literacy.

What’s more, I want to add another layer to this increasingly thick sandwich. (Let’s hope the students can open their mouths this wide.) I think it’s important, when doing this kind of education for new media students, to also teach them something about how the field of computer science has viewed these topics. The digital media field is built on computer science research results, and often the research (e.g., in graphics, in AI, in hypertext/media, in NLP/G, in sound synthesis, in HCI) is ongoing. Computational media students need to know at least the basics of the vocabulary and conceptual structures used by computer scientists in areas related to their work — otherwise they won’t be able to do a meaningful review of their field of interest, they won’t be able to find and use tools related to their work that come out of computer science, and they won’t even be able to construct a web search that hits a large body of work related to their own.

I realize it’s a tall order. Michael’s approach is, I think, starting us down a road that will prove more successful (for people in our field) than earlier attempts such as Design by Numbers. I think it deserves to be a national model. But I also think we have to broaden our approach, and do even more than Michael’s course does, in order to give our students the background in this area that will help our field develop as we believe it should.

13 Responses to “Beyond Procedural Literacy”


  1. Anonymous Says:

    (By the way, does anyone reading this know of a single example of interesting code-level analysis for code not written to be analyzed? I’ve heard the stories that people have read ideological assumptions in old AI systems from the fact that data structures were given names like “beliefs” when they could have been called “myTable” – but I’m not sure I ever got a reference for one of these readings. I think the only code I know of that’s been read critically is the code for software art and “codework.”)

    Some of Phil Agre’s work might be useful to you there, particularly his paper “Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI”. He doesn’t step through any particular piece of AI code, but he offers an analysis of AI’s language (inside and outside of code), and shows how it works by being very precise and very vague at the same time.

  2. noah Says:

    Yes, I’m fond of that paper, and recommend it to folks. There’s no code-level analysis, though. Come to think of it, that may be where I got the idea there might be code-level analysis out there:

    Critics of their research have often focused on particular substantive positions that have seemed unreasonable, for example the frequent use of computer symbols such as REASON and DECIDE and GOAL whose relationship to the actual human phenomena that those words ordinarily name is suggestive at best.

  3. Matt K. Says:

    > By the way, does anyone reading this know of a single example of interesting code-level analysis for code not written to be analyzed?

    Perhaps you’re conflating outcomes with critical method here. If by “interesting” you mean some analytical enterprise whose outcome is a “reading” in a published essay or a monograph then the question becomes pointed in the way you intend it. But isn’t that a smaller (and more institutionally determined) sense of “interesting . . . analysis” than we need to accept? The work Nick and others are doing reverse engineering Mystery House is (from what I know of it) “interesting code-level analysis,” isn’t it?

  4. noah Says:

    Matt, I think I may misunderstand your point. I think the reason one does reverse engineering is because one doesn’t have access to the code, which would mean it can’t be an example of “code-level analysis.” Or do you mean to suggest that I should, in that phrase, also include analysis performed using code? That’s an interesting thought, but it’s pretty separate from the question I was launching from, which was Scott’s about the unavailability of code for artifacts we want to analyze (e.g., Halo 2). Still, the point about analysis performed using code, regardless of whether it’s a product of my misunderstanding of your comment, is a good one — yet another reason we should be educating our students about these things.

  5. Matt K. Says:

    > which would mean it can’t be an example of “code-level analysis.”

    I disagree ;-) But my basic point is that “analysis” should not be an activity confined to the normative range of outcomes recognizable as critical hermeneutics (“readings”). Is reusing code “interesting code-level analysis”? I certainly think so.

  6. noah Says:

    Matt, I think we’re in complete agreement about what should count as “analysis” — and we’re just using the phrase “code-level” differently.

    I take it I have your agreement on everything else in the body of my post?

  7. nick Says:

    one doesn’t have access to the code, which would mean it can’t be an example of “code-level analysis.”

    Everyone certainly does have access to the Mystery House binary code – and to the Combat binary code, which I did some critical analysis of and which might be a better example – it’s just that we don’t have the source code on hand. In my paper on Combat, I argue against the idea that the source code is necessary for code-level analysis:

    Critics have sometimes shied away from making comments on the code level because they do not have access to the source code of the game or because they lack programming expertise in the language the game was written in, but in such cases much can still be said about how programming practices, tools, and languages influence the development of a game.

    We know that Combat was written in assembly language to fit into 2KB of ROM, the first phase of the work being done by Joe Decuir (who worked on the Atari VCS hardware design) and the second phase by Larry Wagner, and that certain code and coding tricks were re-used later (which is why Air-Sea Battle and other carts also have games that last 2 minutes and 16 seconds). And we know all about the capabilities of the Atari VCS as a platform. And we have an annotated, complete disassembly of Combat.
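    (As a back-of-the-envelope aside: that 2:16 duration is itself the kind of detail code-level thinking illuminates. Expressed in NTSC frames it lands suspiciously close to a power of two, which hints at a simple frame counter in the timing code. The power-of-two comparison below is my own speculation, not a claim from the disassembly.)

    ```python
    # Speculative arithmetic: express a 2:16 game clock in NTSC frames
    # and compare it to a power-of-two counter value. The counter-size
    # guess is mine; only the durations come from the discussion above.
    GAME_SECONDS = 2 * 60 + 16    # 2 minutes 16 seconds = 136 s
    NTSC_HZ = 60                  # approximate NTSC field rate

    frames = GAME_SECONDS * NTSC_HZ
    print(frames, 2 ** 13)        # 8160 vs 8192 -- close to a power of two
    ```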

    We also know a lot about the Apple II and Mystery House, even that the drawings were input with a VersaWriter tablet by Robert Williams, with Ken Williams writing the drivers for that new piece of hardware.

    It’s certainly worthwhile to look at source code when it’s available, but to argue that we can’t do any code-level analysis is like saying we can’t speculate at all about how Cicero composed his texts because we don’t have his secretary Tiro available for interrogation. If you mean an analysis of source code specifically, you might just say “analysis of source code.” But even then, we could ask people how well-commented the source code was and what the variables were named, and they could reply from memory, without the source code being available…

  8. noah Says:

    Yes, Nick, I did mean source code. As I pointed out to Matt, I was writing in reply to Scott’s comment about the source of Halo 2. Yes, I agree that there are things that can be analyzed about binary code, etc. But is this bit of terminological wrangling really the most interesting part of the post? If so, I must have done a pretty bad job.

  9. Matt K. Says:

    > I take it I have your agreement on everything else in the body of my post?

    As a matter of fact you do.

    > is this bit of terminological wrangling really the most interesting part of the post? If so, I must have done a pretty bad job.

    It hits close to home for some of the stuff I’m working on.

  10. noah Says:

    Glad to hear it. Those are the best reasons I can think of for us focusing on my parenthetical paragraph!

    Is there more info out there about the stuff of yours this connects with?

  11. Matt K. Says:

    Noah,

    Take a look here

    http://www.otal.umd.edu/~mgk/blog/archives/000758.html

    and especially here

    http://www.solasi.org/moin.cgi/CodedAndRecoded

    I’d like to encourage GTA readers to contribute to the Wiki page, which was set up by a former student of mine, Matt Bowen. It has the potential to become a resource that would be widely used by many of us in our teaching and other situations.

  12. andrew Says:

    Just a few general comments to throw in here, from the perspective of someone who hasn’t thought a lot about developing curricula, teaching, etc., but who is intrigued by the concept of studying code along with analyzing the playing of the software / game itself.

    First, I find it charming that Nick (and others?) are studying the assembly code of Combat and other early computer games. I think they’re worthy of study because of their place in history, their elegant features, their necessary use of abstraction (as opposed to the ever-increasing realism of today’s games), and their extremely constrained operating environments (so little memory, so little CPU speed, squeezing in computation between the drawing of frames while the raster gun was travelling back to pixel 1, etc.). I find it amusing because I’d bet the mindset of the folks making those games at the time was simply to get a dumb little tank to move around and shoot the other tank. It’s doubtful they could have appreciated at the time how their efforts would fit into the continuum of computer gaming and scholarship.

    My initial reaction to the idea of studying the code of today’s games is that it would be a hugely complex undertaking. There’s a lot of code to look at. These are huge, huge machines we’re talking about. For example, looking at any one page of code is kind of like standing in one small room of equipment on an aircraft carrier and trying to understand how it fits into the operation of the whole ship.

    More productive than looking at source code itself, perhaps, would be studying the architectures of programs: higher-level diagrams of the machine and all of its subsystems, then diving into particular subsystems, their data structures, and their APIs. Perhaps zooming in on the code of a few key pieces, but even then I wonder if looking at raw source code would be that fruitful…

    Also, reading detailed technical design specs might be a good thing. (But this gets back to the issue that it’s hard to capture the details of behavioral systems in descriptive texts.)

    Then I thought, maybe studying the code of Combat and the like would be easier, since it’s so much smaller than the code of contemporary software. Although much of that code is probably about working around the quirks of that operating environment, rather than about the pure game algorithms.

    What we almost need is a way to pull out the essential algorithms of a game’s architecture, stripping away all the mundane plumbing and wiring that isn’t terribly interesting and allowing scholars to focus on the core procedures of the game itself.

    There’s got to be some good literature from the software engineering community on the best ways to technically analyze and understand large, complex software systems.

  13. noah Says:

    Matt, I’d read your blog post, but not made the connection to our conversation here. Let me see if I can follow the connection through.

    I guess I’ve generally thought of textons and scriptons in the realm of electronic literature. In this realm I’m pretty comfortable calling the language reader/players see/hear “scriptons.” Everything else that is a representation of what they might see/hear (text, sound files) I think of as “textons.” The way things move from being textons to scriptons (e.g., algorithmic recombination) I think of as the “traversal function.” But then there are parts of any electronic literature experience that aren’t any of these things. There are likely parts of the electronic literature piece itself that are there for purposes such as allowing the reader/player to make input of some sort. That’s not texton, scripton, or traversal function. There are also likely pieces of application software, or support code, or operating system code, that are required for experiencing the piece. But these are not part of the piece, and for this reason I haven’t been thinking of them as texton, scriptons, or traversal functions.
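    For readers less familiar with Aarseth’s terms, the distinction I’m drawing might be sketched in code. This is purely an illustrative toy (the fragments, names, and recombination scheme are mine, not Aarseth’s or Matt’s):

    ```python
    import random

    # Textons: strings of text as they exist, stored, inside the work.
    textons = [
        "the sea was calm",
        "the sea was restless",
        "gulls wheeled overhead",
        "the pier stood empty",
    ]

    def traversal_function(textons, seed=None):
        """Traversal function: the mechanism that turns stored textons
        into the scriptons a reader/player actually encounters --
        here, a toy recombination of two fragments."""
        rng = random.Random(seed)
        chosen = rng.sample(textons, k=2)
        return ". ".join(chosen) + "."

    # Scripton: one string of text as presented to the reader.
    scripton = traversal_function(textons, seed=0)
    print(scripton)
    ```

    In this sketch the input-handling and support code (the `random` module, the interpreter itself) would be exactly the parts I’m saying fall outside texton, scripton, and traversal function.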

    I guess I’d say what you’re working on strikes me as an interesting attempt to expand in a new direction while borrowing some of Aarseth’s terminology — but I don’t remember anything in Cybertext or elsewhere that would make me think of this as an extension or clarification of Aarseth’s work.

    On the other hand, what about things like generative algorithms? That, to me, seems like a direction from which we might want to push on Aarseth’s structure. Are there texts generated entirely, or mostly, from traversal functions rather than from textons?
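    To make that question concrete, here is a toy (again mine, not Aarseth’s) where the output comes almost entirely from the traversal function, with the stored textons reduced to a bare lexicon:

    ```python
    import random

    # Almost no stored text: just a tiny lexicon of raw material.
    lexicon = {"noun": ["sea", "pier", "gull"], "verb": ["waits", "turns", "calls"]}

    def generate(rng):
        """Here the sentence is produced by the procedure itself;
        how much of this output should we say comes from textons,
        and how much from the traversal function?"""
        return f"The {rng.choice(lexicon['noun'])} {rng.choice(lexicon['verb'])}."

    print(generate(random.Random(1)))
    ```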

    Of course, I should probably go back and reread some of this before opening my mouth too much.
