July 21, 2005

Reading Processes: Hartman’s Virtual Muse

by Noah Wardrip-Fruin · 12:41 pm

Last week I wrote about my interest in reading processes (and discussed Marjorie Perloff’s Radical Artifice). Today, in the same vein, I’d like to discuss a rather different book: Charles O. Hartman’s Virtual Muse: Experiments in Computer Poetry (1996).

Hartman’s book is presented as a memoir — in which the author reflects on his experiments, as a poet and teacher, with computers. These include assembling his own Sinclair ZX81, designing new computer programs used in the process of composing poetry, employing a famous text generation program created by others, and implementing a program that performs scansion (and helps students learn it) for poems in iambic and anapestic feet. Hartman continues this work a decade later, and in fact his scansion program is now available in a new version (Scandroid 1.1) which is GPLed, written in Python, and certified by the Open Source Initiative.

Early in Virtual Muse Hartman tells us of his poetic experiment for the ZX81, a BASIC program called RanLines that stored 20 lines in an internal array and then retrieved one randomly each time the user pressed a key. This sort of random arrangement of fixed possibilities is a common first experiment for those considering combinatory poetry. What Hartman offers in Virtual Muse, however, is an unusual attempt to think through this sort of randomness (chapter 3). He begins by reminding us that “One of the Greek oracles, the sibyl at Cumae, used to write the separate words of her prophecies on leaves and then fling them out of the mouth of her cave. It was up to the supplicants to gather the leaves and make what order they could” (p. 29). He compares his ZX81 program to the oracle’s technique and says that this “simple sort of randomness” has “always been the main contribution that computers have made to the writing of poetry” (p. 30). He describes a 1974 book titled Energy Crisis Poems (credited “poetry by program / program by rjs”) that notes, in its introduction, that the “odds against an identical set of poems being created using the same input & parameters are approximately 5919247325225209600000000000000000000 to 1.”
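
As a rough sketch of what a RanLines-style program does — here in Python rather than ZX81 BASIC, and with placeholder text standing in for Hartman’s twenty lines:

```python
import random

# Twenty fixed lines held in an internal array, as in RanLines
# (the text here is placeholder, not Hartman's own lines).
LINES = [f"placeholder line {i}" for i in range(1, 21)]

def ran_line(lines=LINES, rng=random):
    """Return one stored line at random, as RanLines did on each keypress."""
    return rng.choice(lines)

if __name__ == "__main__":
    # Simulate three keypresses, each retrieving one line at random.
    for _ in range(3):
        print(ran_line())
```

The interesting decisions, as Hartman’s discussion makes clear, are not in this trivial loop but in how the twenty stored lines are crafted so that random neighbors cohere.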

But Hartman’s discussion doesn’t latch onto the vastness of this number, of the space of combinatory possibilities, in interpreting such work. (This was surprising to me, in part, because I am so accustomed to this interpretive move from readings of Queneau’s One Hundred Thousand Billion Poems.) Instead, Hartman offers a set of practitioner’s insights from working with this sort of randomness. He points out that sparse random poems work well because “the nonsense factor is low for a tiny collocation of words that can be imbued with imagistic significance. It’s hard to put together two words that don’t make some kind of sense to the willing reader” (p. 31). Similarly, for line-based programs such as his first effort, “The more discrete and self-contained the syntax of the line (complete clause, complete prepositional phrase), the more easily it joins with lines before and after. Keeping verb tense the same increases the opportunities for coherence. Short sharp images stand alone better than bits of narrative or argument.” Without such techniques we have random poems in which “all sense of completeness, progress, or implication” is “strictly a reader’s ingenious doing.”

Hartman notes that there are means other than computers for introducing randomness into compositional processes, and cites the work of John Cage and Jackson Mac Low. He mentions that “as Buddhists they see the workings of the universe in ways that diverge from the Cartesian deterministic tradition of Western science” (p. 32). Freud, Jung, synchronicity, the I Ching, and post hoc ergo propter hoc come up — and then we find Hartman’s first formulation for thinking about randomness as a poetic process (p. 33-34):

“Happen” comes from a word that means “chance.” The idea of synchronicity (and even the Freudian idea of unconscious motivation) can be seen in two ways. Either nothing occurs at random, or random events are themselves meaningful. It’s the latter idea — acknowledging randomness and finding meaning in it — that strikes many Western people as strange, irresponsible, and even frightening.

But for thousands of years people have been consulting chance for advice: throwing the I Ching, inspecting birds’ entrails, opening the Aeneid or the Bible at random, and so on. However severely modern science condemns this as sloppy thinking, it has at least a firm old lineage.

And it turns out that science isn’t so single-minded about all this. Einstein wanted to think it was: “I shall never believe that God plays dice with the world.” But by rejecting the randomness at the heart of quantum mechanics, Einstein, who set the course of twentieth-century physics, cut himself off from its progress. Subatomic particles behave in ways that are radically indeterminate and unpredictable, random not just incidentally but in principle. That, the physicists now assure us, is how the world really is.

Attuning themselves to how the world really is, is an old ambition of poets.

In many ways this sounds similar to formulations of Cage’s, whose work employed aleatory methods and was often aimed at waking up to the moment we are in, to the real world around us. In fact, it echoes something from Perloff’s discussions of Cage in Radical Artifice that I didn’t quote last week:

Cage is too often misunderstood as the champion of the natural, the advocate of art as “purposeless play” that is “simply a way of waking up to the very life we’re living, which is so excellent once one gets one’s mind and one’s desires out of its way and lets it act of its own accord.” And in the “Preface to Lecture on the Weather,” Cage cites Thoreau as saying, “Music is continuous, only listening is intermittent.”

What Cage means by such statements is that the art construct must consistently tap into “life,” must use what really happens in the external world as its materials, and that, vice versa, “life” is only “lived” when we perceive it as form and structure.

So this is one way of looking at randomness in computer poetry — as a way of reflecting, and tapping into, the random reality that is happening right now. Each time the reader touches a key on the Sinclair ZX81 a pseudo-random number is generated, probably through some operation carried out on the system time, right at that moment. This determines which of the appropriately-crafted lines of poetry will be presented next.

But this is not Hartman’s only frame for thinking about randomness. His next sounds almost Oulipian, in its presentation of randomness as a tool for the writer (pp. 34-35):

Poems are partly incubated in the warm matrix of tradition. Poets and readers share a half-tacit knowledge of this background. It supplies a context for the experience of poetry and a basis for communication. But this is a problem as well as a support. The same background of literary history that helps a reader to recognize a poem as a poem threatens to determine so much about it that it becomes boringly predictable. As Howard Nemerov puts it, “The poet’s task has generally been conceded to be hard, but it may be so described as to make it logically impossible: Make an object recognizable as an individual of the class p for poem, but make it in such a way that it resembles no other individual of that class.”

So a more direct use of randomness is to reduce the level of probability in the poems. If the next word in the line I’m writing comes at random, I can at least be sure that it won’t be coming from a cliché.

Part of my hope is to surprise the reader; part of it is to surprise myself.

We’ve heard before that writers should “make it strange” or “make it new” or “defamiliarize” — but this is the first time I’ve heard it expressed as a reduction of the level of probability of language. Using randomness for this is the opposite of the pseudo-randomness of the unconscious (the Surrealists) and more like the randomness found in elements of the procedures of the Oulipo (e.g., N+7 with different dictionaries). It’s randomness within a carefully constructed process, operating on explicitly chosen elements. Hartman may be talking as though he means reductions in the probability of language in a general sense, but his examples (both here and elsewhere in the book) make it clear that this is randomness at work in a highly crafted environment. Another way of saying this is that randomness, here, is not simply a key to surprise — it is, in combination with appropriate structures and selected elements, a way of searching for those interesting surprises.
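
For readers unfamiliar with the procedure, here is a minimal sketch of N+7 — the tiny dictionary is a stand-in for illustration; the Oulipo used real dictionaries, and swapping the dictionary (or the offset) swaps the constraint:

```python
def n_plus_7(words, dictionary, n=7):
    """Oulipian N+7 sketch: replace each word that appears in the chosen
    dictionary with the entry n places later (wrapping at the end).
    The dictionary and the offset are the explicitly chosen elements."""
    index = {word: i for i, word in enumerate(dictionary)}
    return [dictionary[(index[w] + n) % len(dictionary)] if w in index else w
            for w in words]
```

Note that N+7 itself is deterministic; the variability Hartman gestures at enters through the choice of dictionary — which is exactly the sense in which this is randomness operating on explicitly chosen elements within a constructed process.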

And, in fact, setting up carefully crafted random processes and searching through the results for pleasing surprises turns out to be Hartman’s primary method of computer poetry. The first example of this comes in his encounter with the famous Travesty program described by Hugh Kenner and Joseph O’Rourke in a Byte magazine article titled “A Travesty Generator for Micros.” Travesty generated output from letter-level n-grams drawn from a source text selected by the user, and allowed the user to choose a chain length from n = 1 to n = 9. (For those unfamiliar with this sort of technique, I have an explanation in my essay “Playable Media and Textual Instruments.”)
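
A rough reconstruction of the letter-level technique — this is my own Python sketch of the general n-gram method, not Kenner and O’Rourke’s code:

```python
import random
from collections import defaultdict

def travesty(source, n, length=200, seed=None):
    """Letter-level n-gram generation in the spirit of Travesty: each
    output character is drawn at random from the characters that follow
    the same (n-1)-character context somewhere in the source text."""
    rng = random.Random(seed)
    follows = defaultdict(list)
    for i in range(len(source) - n + 1):
        follows[source[i:i + n - 1]].append(source[i + n - 1])
    context = source[:n - 1]  # begin with the source's opening context
    out = list(context)
    while len(out) < length:
        choices = follows.get(context)
        if not choices:           # dead end: restart from the opening
            context = source[:n - 1]
            continue
        ch = rng.choice(choices)
        out.append(ch)
        context = (context + ch)[-(n - 1):] if n > 1 else ""
    return "".join(out)
```

As n rises toward 9, more of the source’s local order survives in the output — which is what let Hartman run the same source at n = 2 through n = 9 and watch sense seem to emerge stanza by stanza.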

What Hartman creates using Travesty is motivated, in part, by his interpretation of its processes — one which differs from that of its creators (pp. 56-57):

The authors use Travesty to make a number of paradoxical points about language. (We may hear Kenner’s voice as the dominant one in this section of the article.) The frequency distributions characteristic of English determine, without intervention from a writer’s conscious thought, a startlingly large proportion of what the writer writes. “In fact, the language makes three-quarters of your writing decisions for you.”

[. . .]

Naturally, the grip of statistics grows stronger and stronger as n increases. Of all the possible four-letter groups (zxiq, fmup, qtno), only a tiny minority are available to the writer of English.

And yet free choice remains. Indeed, free choice is all a writer is aware of while writing. But there’s more choice than awareness encompasses. Finally, “The significant [my emphasis (Hartman)] statistics derive from the personal habits of James, or Joyce, or Jack London, or J. D. Salinger. Each of these writers, amazingly, had his own way with trigrams, tetragrams, pentagrams, matters to which he surely gave no thought.” What interests Kenner and O’Rourke about their program is the emergence of these stylistic signatures: “the unexpected fact that essentially random nonsense can preserve many ‘personal’ characteristics of a source text.”

Before inviting us into this sophisticated examination of stylistics, however, Travesty offers more childish pleasures. One of them is implicit in the program’s name: It’s the wickedness of exploding revered literary scripture into babble. We can reduce Dr. Johnson to inarticulate imbecility, make Shakespeare talk very thickly through his hat, or exhibit Francis Bacon laying waste his own edifice of logic amid the pratfalls of n = 9.

Yet the other side of the same coin is a kind of awe. Here is the language creating itself out of nothing, out of mere statistical noise. As we raise n, we can watch sense evolve and meaning stagger up onto its own miraculous feet. We can share the sense of wonder that James Joyce aimed at in the “Oxen of the Sun” chapter of Ulysses, where the history of the language from grunts to Parliamentary orations unfolds like a morality play before our ears.

Interestingly, Kenner and O’Rourke’s interpretation of Travesty’s processes takes us back to the first known use of such techniques — the first application of Markov chains was modeling letter sequences in Pushkin’s verse novel “Eugene Onegin.” Hartman views them rather differently, and here we find two further interpretations of combinatory randomness. We find, in the last paragraph above, an awe at combinatorics. But, again, Hartman skips the awe at vastness such as we often find in discussion of Queneau’s One Hundred Thousand Billion Poems. Instead, we might call it an awe at order — an awe at constrained combinatorics that take vast inchoate possibilities and narrow them to a few comparatively coherent ones simply through the application of relatively uncomplicated rules. If we are going to compare this to an Oulipian work, we’d find a better candidate in Calvino’s “Prose and Anticombinatorics” than in Queneau’s poem. On the other hand, in addition to this awe at combinatory coherence, we also have in Hartman’s penultimate paragraph above the other side of the coin — the humor we find in the specifics, in the particular outputs of many of these processes. This, of course, is more commonly heard in interpretations of things like Mad Libs.

Concentrating on the former of these, on the seeming emergence of sense in Travesty output (in his interpretation, rather than that of its authors) as n increases, Hartman created eight stanzas of poetry by using n = 2 through n = 9 on the same material. The material employed was poetry Hartman was writing that worked with Turing, Turing’s imitation game (the “Turing Test”), chess, AI, the cold war, and related topics. The Travesty-generated stanzas were interspersed with the source material that had generated them, in order of increasing n. But whatever Hartman’s interest in randomness as a way of accepting and working with the very moment we are in, he did not simply accept the first output for each n and then include that in his poem. Instead, as he notes on page 64, “In the end, I read nonsense all day for several long days; and when I couldn’t read any more, I stuck with the best I’d found.”

Hartman went on to develop text generators of his own, some quite interesting in their poetry-driven (rather than traditional NLG-driven) designs — the most notable of which is called Prose (a new version of which is apparently forthcoming). But his method stayed essentially the same: he used a text generator to create shaped random texts, read through for those that interested him, and employed them. The biggest differences were that (1) human revision also became an important part of later work, and (2) even with the hand of the author at work in revision, there might be no text that didn’t get its start in the computer processes. The end result was always (in the further examples presented in Virtual Muse) printed poems — making his experiment with the Sinclair the last in which poems were generated live or offered even trivial interaction as a possibility for the reader. We might conclude from this that we should read Hartman’s processes the way that Perloff reads most processes described in her book: as background useful for interpretation of the final print text, but not worthy of attention on their own.

The main counter-evidence would seem to be something that, while clearly important, goes almost without mention in Hartman’s text — Hartman makes his software and its source code available for others to experiment with and alter as they see fit. Nearly the only discussion of this move comes at the very start of the book, on a page titled “A Note to the Reader.” Here Hartman gives a (now defunct) web address for his programs (aren’t we glad there are search engines?) and then says:

I hope the book makes it clear that — for me and I hope for interested readers — the point isn’t the programs themselves (which are fairly simple and not particularly original) but the uses that can be made of them. For this reason, I will always include source-code files with comments on program structure so that more sophisticated programmers can easily alter them.

This is an interesting move. The interesting thing about his processes is not that they are elegant code, but that they open up a space of linguistic exploration and further exploration of processes (through their potential alteration). This isn’t like the Oulipian offering of processes — which were not literature but potential literature, and remained so even if no example were ever produced — but it still presents Hartman’s work as both the production of literature employing processes and the production of processes that can be employed toward literary ends.

One thing Hartman doesn’t do much, however, in the rest of the book is offer interpretations of his own processes (as he does, though briefly, with Travesty’s). In fact, at some points he begins to discuss processes as being more like forms and genres than like unique elements in the composition of particular works. Which brings us to a discussion that ends with an interesting question:

A new form begins as someone’s invention and then, maybe, proves useful to other poets. We know one Greek form as the “Sapphic stanza.” It may have been worked out originally by her contemporary, Alcaeus, but it was a favorite of Sappho’s, and she used it in writing some magnificent poems. We still give it her name, though for thousands of years poets from Catullus through James Merrill have been using it — and varying it. The differences between Greek and a language like English require some variations. (In the pattern that defines the form, we usually replace “quantity,” or syllable duration, with stress.) It has a distinctive shape on the page, so any more or less regular stanza that more or less resembles it is likely to register as a variation on the Sapphic stanza. These variations in a form can work like mutations in biology, providing the material for evolution. Ezra Pound claimed that the sonnet began as someone’s variation on the older canzone.

Will certain computer poetry methods catch on and establish themselves and evolve in similar ways? Or will we shift the demand for originality in poems toward meta-originality in method? Another decade or two should tell us.

This is certainly another way of looking at processes. For example, take the audience-interaction process that’s commonly called the “branching story.” Queneau is often credited with creating the first of these, “A Story as You Like It.” On the data/process continuum, the usual interpretations of this piece max out process and barely register data — the discussion is entirely of the branching, and the amusing text about peas and their dreams is hardly mentioned. From the point of view of the Oulipo there was no need to write another branching story, with different text, because an instantiation had already been supplied. Nevertheless, years later, the branching story arrived in a quite varied and mainstream fashion in the Choose Your Own Adventure books. There wasn’t just one of these — enough to demonstrate that the process could work in a book-length form — there were many. And when we talk about Choose Your Own Adventure books we, after making sure everyone understands the interaction process, talk about the data (text) in the individual books. And for those of us doing critical work, we also begin to discuss branching differently once we have many examples using the same process. Rather than branching itself being of interest, we’re interested in it as a technique, which can be used in different ways and lead to experiences of different shapes. (See Andrew’s example and the generalization as Matt’s assignment.) And we also begin to see how this process, as a technique, can be extended and/or combined with other ones. I remember being struck the first time that I saw a CYOA that combined its mechanics with certain elements drawn from role-playing games — so that, for example, the rolling of dice and fluctuation of character statistics could play a role in determining the next branch of the story. 
And, of course, branching stories also exist in other media — before the technique became a staple of 1990s CD-ROMs it was used in interactive video installations by artists and entertainment companies, and Ted Nelson reminds us (in Computer Lib / Dream Machines) of the branching movie at the Czech pavilion of the 1967 Montreal World’s Fair (p. DM 44). Of course, Nelson says little about the film’s images or characters, and instead focuses on the fact of branching and the branching structure — these being the primary areas of interest when such a technique is novel.
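
A hypothetical sketch of how the CYOA-plus-dice combination might be structured as data — the node text, links, and “luck” statistic here are all invented for illustration, not drawn from any actual book:

```python
import random

# Story nodes: most offer the reader explicit choices, but a "roll" node
# picks its branch from a die roll combined with a character statistic,
# in the CYOA-meets-role-playing-game spirit described above.
STORY = {
    "start":  {"text": "You reach a fork in the path.",
               "choices": {"go left": "cave", "go right": "bridge"}},
    "cave":   {"text": "A troll blocks the cave.",
               "roll": lambda rng, stats:
                   "escape" if rng.randint(1, 6) + stats["luck"] > 4
                   else "caught"},
    "bridge": {"text": "You cross safely. The end.", "choices": {}},
    "escape": {"text": "You slip past the troll. The end.", "choices": {}},
    "caught": {"text": "The troll catches you. The end.", "choices": {}},
}

def step(node_name, choice=None, stats=None, rng=random):
    """Return the next node name: either the reader's chosen branch or,
    for 'roll' nodes, a branch determined by dice and statistics."""
    node = STORY[node_name]
    if "roll" in node:
        return node["roll"](rng, stats or {"luck": 0})
    return node["choices"][choice]
```

The point of the sketch is simply that once branching is one technique among several, the data (the node text) and other mechanics (dice, statistics) can each carry part of the experience’s shape.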

All this said, my perspective is somewhat different from Hartman’s — and perhaps influenced by my years of working in a computer science lab. I don’t think the question is best seen as whether “certain computer poetry methods catch on” or if we will “shift the demand for originality in poems toward meta-originality in method.” Rather, I think the important point is that we are increasingly, especially as we do more work with digital media, acknowledging the definition of processes as an area of literary creativity — along with use of language, structure of narrative, etc. Not all works will be innovative in terms of processes, and when we interpret those works that don’t innovate on the process level we won’t have much to say about their processes. Of course, some pieces of writing will work with existing processes in new and innovative ways — so in some cases data may turn our attention anew to existing processes, just as innovative uses of language can interact with, and turn our attention anew to, existing stanza forms. And just as we now recognize the value of innovative language in a work that has a pedestrian narrative, so we will come to recognize innovative processes even if they are combined with (or produce) pedestrian uses of language.

But, of course, we are only beginning (or perhaps are only warming up to begin) to develop the critical vocabulary and concepts that will help us interpret processes. And, toward that end, my search for examples of “reading processes” will continue.

One Response to “Reading Processes: Hartman’s Virtual Muse”


  1. nick Says:

    Noah, I didn’t remember much of the discussion of randomness from The Virtual Muse, so it was good to read these notes. While there are some good thoughts in the book, I think the overall discussion of randomness in the arts could benefit greatly if it moved beyond simply making the distinction between determinism and non-determinism and looked into what randomness is. Is there any theory of randomly generated literature that even acknowledges that different probability distributions exist? Should I create some sorts of random literature if I’m a frequentist and other sorts if I’m a Bayesian?

    I was also curious about something you write regarding Hartman’s sharing of code: “This isn’t like the Oulipian offering of processes.” But Hartman’s code does seem to be similar to some Oulipian processes. Just as I can take Hartman’s code and modify it so that it produces different output, I can take Jean Lescure’s N+7 procedure and use a different dictionary, or use some number other than 7, if I like. I can modify the Mathews Algorithm if I want to and use my new version of it to produce different sorts of texts.

    his scansion program is now available in a new version (Scandroid 1.1) which is GPLed, written in Python, and certified by the Open Source Initiative

    This is very cool – I downloaded it and checked it out. Besides making it easier to modify, this program’s being open source can only make it easier to integrate into a larger (GPL) system.

    I think anyone who uses an OSI-approved license can use the “OSI Certified Open Source Software” notice/logo if they like, by the way. A minor detail, but I thought I’d mention it because you or anyone else can use the mark on your appropriately-licensed software, if you like; you don’t have to send your code in for approval.
