April 5, 2008

Blog-Based Peer Review: Some Preliminary Conclusions, part 2

[This is a continuation of part 1]

The version of the Expressive Processing manuscript used for both forms of peer review begins with an introductory chapter written, in part, to let people know right up front what is at stake. I wanted readers to know, from the beginning, what I am advocating and why it matters to me. I also wanted a first chapter that could be assigned as a stand-alone class reading (as so many monograph chapters are) and make my case on its own.

In the blog-based review I got a number of important comments on this chapter, especially on my discussion of process intensity and The Sims. In the course of that discussion I also learned a number of things about the blog-based review form that still hold true in my conclusions about this project. (more...)

April 3, 2008

Blog-Based Peer Review: Some Preliminary Conclusions, part 1

As many Grand Text Auto readers know, earlier this year I put a mostly-completed draft of my manuscript (for Expressive Processing) through two forms of peer review. One was a review by three anonymous field experts selected by my publisher, The MIT Press. The other was a blog-based review right here on Grand Text Auto. I posted each chapter, section by section, with a new addition each weekday morning — inviting paragraph-by-paragraph comments from the readers here.

April 2, 2008

Expressive Processing Review: A Question of Goals

from Grand Text Auto

I’m surprised to see the opening paragraph of Jeff Young’s piece in the Chronicle today, in which he’s offering one of the first post-experiment evaluations of the Expressive Processing blog-based peer review project. The lead and headline seem to focus on the idea that blog-based review will “not replace traditional blind peer review anytime soon.”

I’m not surprised because I disagree about blog-based review replacing press-solicited reviews, but rather because finding a replacement for press-solicited review was never a goal of the project. Rather, the project participants (the Institute for the Future of the Book, the MIT Press, UCSD’s Software Studies initiative, GTxA, and yours truly) had goals such as seeing what would take place in a blog-based form of review (this was, after all, the first known experiment), learning from comparing the results of the two forms of review, and (most importantly) garnering responses from the GTxA community that will help improve the book. (more...)

March 21, 2008

EP Meta: Milestones

This week we’ve passed two important milestones in the Expressive Processing project. First, the blog-based review has now covered most of the material included in the blind, press-solicited review — and some useful overall impressions have been collected from participants in the blog-based review. Second, MIT Press has sent me the blind reviews. To mark these milestones, Doug Ramsey from UCSD has put together a news release (including video).

Now, looking forward from here, three things have been set in motion. (more...)

March 19, 2008

EP Meta: Chapter Eight

At this point, with chapter eight concluded, we have nearly reached the end of the version of Expressive Processing sent out for anonymous peer review by MIT Press. So now is the time for me to ask for what Ian Bogost, and others, have identified as a real challenge for this blog-based review form: Are there any broad thoughts on the overall project? (more...)

March 18, 2008

EP 8.6: Learning from Façade

The surface experience produced by Façade’s processes and data is shaped by a series of choices that have clear impacts in terms of the Eliza and Tale-Spin effects. The results are instructive. (more...)

March 17, 2008

EP 8.5: Façade

I first met Andrew Stern and Michael Mateas at a 1999 symposium on “Narrative Intelligence” sponsored by the Association for the Advancement of Artificial Intelligence. The symposium was organized by Mateas and Phoebe Sengers, two of the final Oz PhD students. They managed to bring together a number of their mentors, colleagues, and friends with a wide range of people pursuing different facets of the intersection of narrative, character, and AI. The Zoesis team was present, showing off their most advanced demo: The Penguin Who Wouldn’t Swim. Bringsjord and Ferrucci discussed active development of Brutus. Stern described his company’s newest commercial product based on believable agent work: Babyz. Mateas and his collaborators premiered Terminal Time. It felt like the field was blossoming with new projects, pushing the state of the art to new levels. (more...)

March 14, 2008

EP 8.4: Oz

The Oz Project at Carnegie Mellon University — led by Joe Bates from its inception in the late 1980s — has an unusual distinction. While Tale-Spin and Universe could be considered outliers among software systems for the fact that both are widely credited with outputs they did not produce, the Oz Project may be the only computer science research project most famous for an experiment that did not require computers. This was an experiment in interactive drama, carried out with a human director and actors, but aimed at understanding the requirements for software-driven systems (Kelso, Weyhrauch, and Bates, 1993). (more...)

March 13, 2008

EP 8.3: The Sims

The Sims are arguably the most popular human characters ever created in digital media. The game named after them — The Sims (Wright et al, 2000) — is one of the best-selling games ever released, and has produced chart-topping expansion packs, sequels, and ports to new platforms. Perhaps surprisingly, the game is focused entirely on interaction with and between these characters and their environment. There is no shooting, no platform-jumping, no puzzle-solving, and not a single test of speed or agility. (more...)

March 12, 2008

EP 8.2: Understanding Simulations

The concerns about work such as Wright’s get to the heart of what is involved when we use computer models to make non-abstract media. As Ian Bogost puts it in Unit Operations, “the relationship or feedback loop between the simulation game and its player are bound up with a set of values; no simulation can escape some ideological context” (2006, 99). Or, as Ted Nelson put it succinctly two years before SimCity’s release, “All simulations are political” (1987). (more...)

March 11, 2008

EP 8.1: Eliza and SimCity

In the early 1980s, Will Wright was working on his first game: Raid on Bungeling Bay (1984). Wright was crafting an attack helicopter simulation, focused on flying over islands and open water, attempting to destroy a set of factories working toward the creation of an unstoppable war machine. Then, reflecting on the landscape editor he created for authoring the game, Wright had a realization: “I was having more fun making the places than I was blowing them up” (2004). From this the idea for Wright’s genre-defining game SimCity (1989) was born. (more...)

March 10, 2008

EP Meta: Chapter Seven

I face a dilemma. As of today, the blog-based peer review of Expressive Processing has completed chapter seven (“Authoring Systems”) and is embarking on chapter eight (“The SimCity Effect”). But I’m not sure what follows after chapter eight.

In the version MIT Press sent out for blind peer review, the next chapter (“Playable Language”) is incomplete. (more...)

EP 7.5: Expressive Language Generation

From one perspective, the challenge faced by Terminal Time is the primary focus of the entire computer science research area of “natural language generation” (NLG). This work focuses on how to take a set of material (such as a story structure, a weather summary, or current traffic information) and communicate it to an audience in a human language such as English. On the other hand, very little NLG research has taken on the specific version of this challenge relevant for Terminal Time (and digital media more generally): shaping this communication so that the specific language chosen has the appropriate tone and nuance, in addition to communicating the correct information. Given this, digital media (such as games) have generally chosen very different approaches from NLG researchers for the central task of getting linguistic competence into software systems. (more...)
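The template-driven alternative that games typically choose can be sketched in a few lines. This is an illustrative sketch only: the message names, tone labels, and surface strings below are invented for the example and are not from Terminal Time or any NLG system discussed here.

```python
# Sketch of hand-authored, tone-keyed templates -- the approach games
# commonly take instead of a full NLG pipeline. All names are invented.

TEMPLATES = {
    # (message, tone) -> hand-authored surface text
    ("forecast_rain", "neutral"): "Rain is expected tomorrow.",
    ("forecast_rain", "upbeat"): "Pack an umbrella -- a refreshing rain is on the way!",
    ("forecast_rain", "grim"): "Another gray, sodden day looms.",
}

def realize(message, tone):
    """Select the surface string whose tone fits; fall back to neutral."""
    return TEMPLATES.get((message, tone), TEMPLATES[(message, "neutral")])

print(realize("forecast_rain", "grim"))
```

The nuance lives entirely in the hand-authored strings, which is exactly the trade-off at issue: tone comes cheap, but every variation must be written out in advance.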

March 9, 2008

Gary Gygax, 69

The co-creator of Dungeons & Dragons rose in fame, in his lifetime, to the point of garnering an obituary in The New York Times. But Pat Harrigan pointed me to Paul La Farge’s 2006 essay in The Believer as a more substantial reflection on the history and hobby of D&D.

March 7, 2008

EP 7.4: Terminal Time

Picture a darkened theater. An audience watches, presumably somewhat disconcerted, as “a montage of Tibetan Buddhist imagery and Chinese soldiers holding monks at gunpoint” unfolds on screen. A computerized voice tells them that:

There were reports that Buddhist monks and nuns were tortured, maimed and executed. Unfortunately such actions can be necessary when battling the forces of religious intolerance. (Mateas, 2002, 138)

Underlying the words, one can hear a “happy, ‘optimistic’ music loop.” (more...)

March 6, 2008

EP 7.3: Brutus

Given its name, it is probably no surprise that Selmer Bringsjord and David Ferrucci’s Brutus system specializes in stories of betrayal. Here is the beginning of one:

Dave Striver loved the university. He loved its ivy-covered clocktowers, its ancient and sturdy brick, and its sun-splashed verdant greens and eager youth. He also loved the fact that the university is free of the stark unforgiving trials of the business world — only this isn’t a fact: academia has its own tests, and some are as merciless as any in the marketplace. A prime example is the dissertation defense: to earn the PhD, to become a doctor, one must pass an oral examination on one’s dissertation. This was a test Professor Edward Hart enjoyed giving. (Bringsjord and Ferrucci, 2000, 199–200)

The story continues for roughly another half page. (more...)

March 5, 2008

EP 7.2: Universe

Michael Lebowitz began work on Universe at around the same time that Scott Turner began his work on Minstrel, and the two systems bear a number of similarities.2 Both focus on the importance of authorial actions, rather than simply character actions. Both emerge from the scruffy AI tradition — Lebowitz had recently written his dissertation at Yale under Schank’s supervision, contributing to Schank’s model of dynamic memory, especially in relation to story understanding.3 Descriptions of both also emphasize the importance of the “point” or “theme” that the system is working to communicate through each act of generation (Lebowitz, 1984, 175). (more...)

March 4, 2008

EP 7.1: Writing Software

My early experiences of digital media were as an audience member. I remember playing text-only games like Hunt the Wumpus on mainframe terminals at my mother’s university — as well as interactive fictions like Zork I on my father’s early portable computers (a Kaypro and an Osborne). I remember playing graphical games like Combat on a first-generation Atari console that belonged to my cousins — as well as Star Trek: Strategic Operations Simulator on my friend Brion’s first-generation Atari home computer. Brion would later guide me in more arcane explorations of computer code, as we attempted to creatively alter the binary files of games we played, saving them back to the Atari’s tape deck. But I think it was earlier, when I was ten years old, that I first sat down to program at a “blank slate.” (more...)

EP Meta: Chapter Six

Yesterday’s post finished up chapter six (“Character and Author Intelligence”) and today’s begins chapter seven (“Authoring Systems”). As it turns out, number six was another informative chapter, for me, in terms of the blog-based peer review process.

The best thing, undoubtedly, was the opportunity to hear comments from the creators of systems discussed in the chapter: Jeff Orkin and Scott Turner. Of course, many book authors are able to interview system authors when researching a book, but I suspect it’s unusual to get involved in a public conversation (before publication) around the specifics of how the manuscript characterizes the work. I’ve found this very helpful. (more...)

March 3, 2008

EP 6.5: Beyond Anthropomorphic Intelligence

Given the history of AI, it is no surprise that systems such as Tale-Spin and Minstrel were built to embody models of human cognition. The assumption that human and machine processes should — or must — resemble each other runs deep in AI. It continues to this day, despite the counter-example of statistical AI.

With Tale-Spin and Minstrel both emerging from the “scruffy” end of symbolic AI, we might assume that this area of AI was particularly given to building its systems on human models. And perhaps it is true that a neat researcher would not have made Turner’s opening assumption from his description of Minstrel: “To build a computer program to tell stories, we must understand and model the processes an author uses to achieve his goals” (1994, 3). (more...)

February 29, 2008

EP 6.4: Statistical AI

We can see, in Minstrel, symptoms of a much larger problem, one that Turner alone could have done little to address. By the late 1980s it was clear that AI systems in general were not living up to the expectations that had been created over the three previous decades. Many successful systems had been built — by both “neats” and “scruffies” — but all of these worked on very small sets of data. Based on these successes, significant funding had been dedicated to attempting to scale up to larger, more real-world amounts of data. But these attempts failed, perhaps most spectacularly in the once high-flying area of “expert systems.” The methods of AI had produced, rather than operational simulations of intelligence, a panoply of idiosyncratic encodings of researchers’ beliefs about parts of human intelligence — without any means of compensating for the non-simulation of the rest of human intelligence. Guy Steele and Richard Gabriel, in their history of the Lisp programming language (1993, 30), note that by 1988 the term “AI winter” had been introduced to describe the growing backlash and resulting loss of funding for many AI projects. In this vacuum, assisted by steadily increasing processing power, a new form of AI began to rise in prominence. (more...)

February 28, 2008

Philip M Parker’s Book Generator

from Grand Text Auto

Speaking of Scott Turner (author of Minstrel, a major subject of the EP chapter currently being discussed), he recently drew my attention to an article in the Guardian titled “Automatic writing.”

Philip M Parker, a professor of management science at Insead, the international business school based in Fontainebleau, France, patented what he calls a “method and apparatus for automated authoring and marketing”.

EP 6.3: Modeling Human Creativity

Scott Turner, like many before and since, first became interested in story generation after running upon Vladimir Propp’s analysis of Russian folktales (1968). Propp provides a grammar that describes the structure of many folktales. As linguists and computer scientists know, grammars can be used for describing the structure of given things — and also for generating new things. But, as Turner soon discovered, this task is not easily accomplished with Propp’s grammar. Its elements are rather abstract, making them workable for analysis but insufficient for generation.5

Turner was a senior in college at the time. A few years later, while doing graduate research in UCLA’s Computer Science department, he began work on a radically different vision of story generation, embodied in his Minstrel system. This would culminate in a dissertation more than 800 pages long (setting a new record in his department) that he distilled down to less than 300 as the book The Creative Process: A Computer Model of Storytelling and Creativity (1994). (more...)
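The grammar-as-generator idea above can be sketched in a few lines: pick a production for each symbol and expand its parts in order. The rules below are invented stand-ins for illustration; they are not Propp’s actual functions.

```python
import random

# A toy generative grammar in the spirit of (but much simpler than)
# Propp-style tale structure. All rules and phrases are invented.
GRAMMAR = {
    "TALE": [["VILLAINY", "STRUGGLE", "VICTORY", "RETURN"]],
    "VILLAINY": [["the villain abducts the princess"],
                 ["the villain steals the talisman"]],
    "STRUGGLE": [["the hero confronts the villain"]],
    "VICTORY": [["the villain is defeated"]],
    "RETURN": [["the hero returns home"]],
}

def generate(symbol, rng):
    """Expand a symbol: choose one production and expand its parts in order."""
    if symbol not in GRAMMAR:  # terminal: a literal phrase
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return "; ".join(generate(part, rng) for part in production)

print(generate("TALE", random.Random(0)))
```

Running this makes Turner’s problem concrete: the same machinery that analyzes structure will happily emit new tales, but only as abstractly as the grammar’s elements allow.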

February 27, 2008

EP 6.2: Beyond Compartmentalized Actions

The finite-state machine is much like the quest flag or the dialogue tree. Each is conceptually simple, easy to implement, places low demand on system resources, and — over a certain level of complexity — becomes difficult to author and prone to breakdown. A quick look at the structure of FSMs shows the reasons for this.1

An FSM is composed of states and rules for transitioning between states. For example, an FSM could describe how to handle a telephone. In the initial state, the phone is sitting on the table. When the phone rings, the FSM rules dictate a transition to picking the phone up and saying “Hello.” If the caller asks for the character who answered, the rules could say to transition to a conversation state. If the caller asks for the character’s sister, the transition could be to calling the sister’s name aloud. When the conversation is over (if the call is for the character who answered the phone) or when the sister says “I’m coming” (if the call is for her) the phone goes back on the table. (more...)
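The telephone example above can be written out as a small transition table. The state and event names below are my own; the chapter describes the FSM in prose, not code.

```python
# Illustrative finite-state machine for the telephone example.
# States and events are invented labels for the situations in the text.

TRANSITIONS = {
    # (current state, event) -> next state
    ("on_table", "phone_rings"): "answered",            # pick up, say "Hello"
    ("answered", "caller_asks_for_me"): "conversing",
    ("answered", "caller_asks_for_sister"): "calling_sister",
    ("conversing", "conversation_over"): "on_table",
    ("calling_sister", "sister_says_coming"): "on_table",
}

def step(state, event):
    """Apply one transition rule; stay in the current state if none matches."""
    return TRANSITIONS.get((state, event), state)

state = "on_table"
for event in ["phone_rings", "caller_asks_for_sister", "sister_says_coming"]:
    state = step(state, event)
print(state)  # the phone is back on the table
```

Even this tiny table hints at the authoring problem: every new situation (a second caller, a busy sister) multiplies the state-event pairs that must be written and checked by hand.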

February 26, 2008

EP 6.1: After Tale-Spin

As the previous chapter described, James Meehan’s Tale-Spin — built on a simulation embodying the “scruffy” artificial intelligence theories of Roger Schank and Robert Abelson — generated coherent accounts of character actions and interactions in a fictional world. This set the foundation for the field of story generation. Considered today, it also raises an inevitable question: What next?

This chapter considers two different responses. (more...)
