August 1, 2005
Christopher Strachey is rightly viewed as a pioneer of modern computing. He’s not usually, however, viewed as the creator of the first work of digital literature. Research toward my submission for DAC, however, has led me to believe that he was — and that his initial digital literature project was also, quite probably, the first piece of digital art. I’d be quite interested to hear any thoughts (or refutations) from GTxA readers.
To begin with, however, I should explain that when I use the terms “digital literature” and “digital art” I mean something in particular by them.
Of course, a phrase like “digital literature” could refer to finger-oriented literature (fingers are “digits”) or numerically-displayed literature (numbers are “digits”) — but I mean “digital” in relation to computers, specifically as it appears in phrases such as “stored program digital computer.” I mean literary work that requires the digital computation performed by laptops, desktops, servers, cellphones, game consoles, interactive environments, or any of the other computers that surround us. I think that’s what most of us mean.
To take the other term in my initial phrase, “digital literature” could be used in the sense of “the literature” (the body of scholarly work on a topic) or it could be rearranged as “literary digital” (perhaps to distinguish “literary” digital fictions from “genre” digital fictions) — but I mean “literature” (and “literary”) as a way of referring to those arts we sometimes call fiction, poetry, and drama (as well as their close cousins). I mean the arts that call our attention to language, present us with characters, tell us stories, and make us reflect on the structures and common practices of such activities. I should probably also say that I don’t view the literary arts as a citadel, separate (and perhaps in need of defense) from, say, visual or performing arts. Much of the best drama, for example, brings together the literary, performing, and visual arts.
To me, “digital art” is the larger category of which “digital literature” is a part. It encompasses all the arts that require digital computation, not just the literary arts.
Turing Machines Get Electronic
When I say that I mean “digital” as in “stored program digital computer,” what does that mean, more precisely?
In 1937 everyone who used the term “computer” knew what it meant. A computer was a person who calculated answers to mathematical problems. These computers weren’t expected to develop new, creative methods to solve outstanding mathematical problems. Rather, they were expected to follow a known and specified set of instructions which, together, formed an effective procedure for solving a particular kind of problem. We call such sets of instructions algorithms (from the name of Persian mathematician al-Khwarizmi).
But with the publication, in 1937, of Alan Turing’s “On Computable Numbers” the world was quietly introduced to the mathematical thought experiment that we call a “Turing machine” — a concept that laid the groundwork for the kinds of non-human computers we have today. Turing’s paper wasn’t remarkable for imagining a machine that could carry out the work of human computers. In the 1930s there were already in operation a number of such machines (including Vannevar Bush’s Differential Analyzer) and at least 100 years earlier (by 1837) Charles Babbage had conceived of an Analytical Engine, capable of mechanizing any mathematical operation (and programmed via punched cards such as those used for automated looms, making it possible for his collaborator Ada Lovelace to be called by some the first programmer of a universal computer, even though the Analytical Engine was never constructed). Two things, however, separated Turing machines from all calculating machines in operation in the 1930s (and most of the 1940s) as well as all previous thought experiments (including Babbage’s).
First, according to Turing’s most prominent biographer, Andrew Hodges, the Turing machine was developed in response to a mathematical question (posed by Hilbert) as to whether mathematics was decidable. That is, was there a method that could be applied to any assertion that would correctly determine whether that assertion was true? The Turing machine was a formalization that made it possible to discuss what could and couldn’t be calculated — answering Hilbert’s question in the negative, and establishing one of the primary foundations for computer science as a science (the investigation of what can and can’t be computed).
Second, the imagined design of the Turing machine was in terms of a potentially implementable (if inefficient) mechanism. This mechanism was such that it could not only logically branch while following its instructions (doing one thing or another based on results to that point), and not only act as a universal machine (simulating the activities of any other calculating machine), but also store its instructions in the same read/write memory as the data on which it acted. This would make it possible, for example, for the machine to alter its own instructions while in operation. And it is from this type of capability that we get the words “stored program” in the phrase “stored program digital computer.” This lies at the heart of the computers we use each day.
This leaves us with the word “digital” — which, as it turns out, is not specific to computers, despite the fact that it’s the word we’ve latched onto in order to represent computers. “Digital” information, as opposed to “analog” information, is represented by discrete rather than continuous values. It’s actually related, according to the Oxford English Dictionary, to the sense of fingers and numbers as “digits.” Each of the first nine Arabic numbers (or ten, if one includes zero) can be expressed with one figure, a digit, and these were originally counted on the fingers, or digits. Charles Babbage’s Analytical Engine called for representing decimal numbers using ten-spoke wheels — which made it a design for a digital computer, because each of the ten wheel positions was discrete. In contrast, many early 20th-century computers used analog, continuous representations — such as varying electrical currents or mechanisms that turned at varying speeds. These analog computers could perform some tasks very quickly. For example, adding two quantities represented by electrical currents could be accomplished simply by allowing flow onto particular wires, rather than by actually establishing the two values and numerically calculating their sum. However, because of the lack of discrete states, analog computers were inflexible in their orders of precision and prone to noise-induced errors. During WWII Konrad Zuse built the first program-controlled digital computer, which, instead of Babbage’s decimal arithmetic, used binary arithmetic implemented in on/off electronics. This was a considerable simplification and made possible advances in speed and precision — important to our “digital” computers.
Working independently (and very secretly) the British government cryptanalysis group of which Turing was part (and where he was instrumental in cracking the German Enigma code) created the Colossus, which has been characterized as the first fully functioning electronic digital computer. Many other projects and incremental advances took place; especially notable among these was the University of Pennsylvania ENIAC (believed, while the Colossus was still secret, to have been the first fully functioning electronic digital computer). A 1945 report of future design plans — based on insights from ENIAC designers J. Presper Eckert and John Mauchly, working together with John von Neumann — was very influential on the design of future stored program digital computers (leading to the perhaps inappropriate name “von Neumann architecture” for such systems).
It was only after the war that a number of successful efforts were made toward stored program digital computers. The first was the Manchester University “Baby” in 1948 (it used a CRT display for its storage) which was followed by a more complex Manchester prototype in 1949 and then replaced by an industrially manufactured version, the Ferranti Mark I, in 1951 (for which Turing wrote the programming manual and constructed a random number generator that produced truly random digits from noise). Similar efforts include the University of Cambridge EDSAC (1949), the University of Pennsylvania EDVAC (1951), the MIT Whirlwind I (1949), and others.
Strachey’s Next Step
Once there were stored program digital computers, all that remained (for our field to take its first step) was for someone to make literary or other artistic use of one. I believe that — in 1952, working on the Manchester Mark I — Christopher Strachey was the first to do so.
Strachey, born in 1916, grew up on Gordon Square, which was then center of the Bloomsbury group. His father was English cryptographer Oliver Strachey, his mother American suffragist Ray Costelloe, his uncle the famous Giles Lytton Strachey, and his neighbors included Virginia and Leonard Woolf, Clive and Vanessa Bell, and John Maynard Keynes.
He went up to King’s College, Cambridge, in 1935. While this is the same time and place where Turing was doing his fundamental work on computable numbers (as a recently-graduated junior research fellow) it is likely that the two knew each other only socially, and never discussed computing. Strachey worked as a physicist and schoolmaster after graduating from Cambridge, becoming increasingly interested in computing during the late 1940s. In January 1951 he was first exposed to a stored-program computer: the Pilot ACE computer under construction at the National Physical Laboratory. He began writing a program to make it play draughts (checkers), inspired by a June 1950 article in Penguin Science News.
That spring Strachey learned of the Mark I computer that had just been installed at Manchester — he had known Turing just well enough at Cambridge to ask for, and receive, a copy of the programmer’s manual. He visited for the first time in July, and discussed his ideas for a draughts-playing program with Turing, who was much impressed and suggested that the problem of making the machine simulate itself using interpretive trace routines would also be interesting. Strachey, taken with Turing’s suggestion, went away and wrote such a program. As Strachey biographer Martin Campbell-Kelly writes:
The final trace program was some 1000 instructions long — by far the longest program that had yet been written for the machine, although Strachey was unaware of this. Some weeks later he visited Manchester for a second time to try out the program. He arrived in the evening, and after a “typical high-speed high-pitched” introduction from Turing, he was left to it. By the morning, the program was mostly working, and it finished with a characteristic flourish by playing the national anthem on the “hooter.” This was a considerable tour-de-force: an unknown amateur, he had got the longest program yet written for the machine working in a single session; his reputation was established overnight.
A year later, in June 1952, Strachey had wound up his responsibilities as a schoolmaster and officially began full-time computing work as an employee of the National Research and Development Corporation. That summer he developed — with some aesthetic advice from his sister Barbara, using Turing’s random number generator, and perhaps in collaboration with Turing — a Mark I program that created combinatory love letters. This was the first piece of digital literature, and of digital art, predating by a decade the earliest examples of digital computer art from recent surveys (e.g., quite useful books such as Christiane Paul’s Digital Art and Stephen Wilson’s Information Arts).
Strachey described the operations of this program in a 1954 essay in the art journal Encounter (immediately following texts by William Faulkner and P. G. Wodehouse):
Apart from the beginning and the ending of the letters, there are only two basic types of sentence. The first is “My — (adj.) — (noun) — (adv.) — (verb) your — (adj.) — (noun).” There are lists of appropriate adjectives, nouns, adverbs, and verbs from which the blanks are filled in at random. There is also a further random choice as to whether or not the adjectives and adverb are included at all. The second type is simply “You are my — (adj.) — (noun),” and in this case the adjective is always present. There is a random choice of which type of sentence is to be used, but if there are two consecutive sentences of the second type, the first ends with a colon (unfortunately the teleprinter of the computer had no comma) and the initial “You are” of the second is omitted. The letter starts with two words chosen from the special lists; there are then five sentences of one of the two basic types, and the letter ends “Yours — (adv.) M. U. C.”
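Strachey’s scheme is simple enough to sketch in a few lines of modern code. The structure below follows his description above; the word lists are small samples drawn from the outputs reproduced later in this post (his actual lists were longer), and the opening words are invented stand-ins, since the Encounter outputs omit the salutation:

```python
import random

# Word lists sampled from the two Encounter outputs; Strachey's actual
# lists were longer. OPENERS is an invented stand-in.
OPENERS = ["Darling", "Dear", "Honey", "Jewel"]
ADJS = ["avid", "wistful", "tender", "passionate", "sympathetic",
        "breathless", "loving", "affectionate", "lovesick", "dear"]
NOUNS = ["fellow feeling", "affection", "liking", "sympathy",
         "adoration", "ardour", "enthusiasm", "eagerness", "wish", "heart"]
ADVS = ["curiously", "beautifully", "breathlessly", "anxiously"]
VERBS = ["clings to", "yearns for", "attracts", "hopes for", "cherishes"]

def maybe(words):
    """A random word plus a trailing space, or nothing -- the 'further
    random choice as to whether or not the adjectives and adverb are
    included at all'."""
    return random.choice(words) + " " if random.random() < 0.5 else ""

def type1():
    # "My -- (adj.) -- (noun) -- (adv.) -- (verb) your -- (adj.) -- (noun)."
    return (f"My {maybe(ADJS)}{random.choice(NOUNS)} {maybe(ADVS)}"
            f"{random.choice(VERBS)} your {maybe(ADJS)}{random.choice(NOUNS)}.")

def type2():
    # "You are my -- (adj.) -- (noun)." -- the adjective is always present.
    return f"You are my {random.choice(ADJS)} {random.choice(NOUNS)}."

def letter():
    body = []
    prev_was_type2 = False
    for _ in range(5):  # five sentences of one of the two basic types
        if random.random() < 0.5:
            s = type2()
            if prev_was_type2:
                # Two consecutive type-2 sentences: the first ends with
                # a colon (the teleprinter had no comma) and the second
                # drops its initial "You are".
                body[-1] = body[-1][:-1] + ":"
                s = s[len("You are "):]
                # Treat the merged pair as complete -- one reading;
                # Strachey doesn't say how longer runs behave.
                prev_was_type2 = False
            else:
                prev_was_type2 = True
        else:
            s = type1()
            prev_was_type2 = False
        body.append(s)
    opening = f"{random.choice(OPENERS)} {random.choice(OPENERS)}"
    closing = f"Yours {random.choice(ADVS)} M. U. C."
    return "\n".join([opening, " ".join(body), closing])

print(letter())
```

This is a sketch of the plan as described, not a reconstruction of Strachey’s Mark I code, which worked under far tighter constraints (and generated its randomness from Turing’s hardware random number instruction rather than a software generator).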
As Jeremy Douglass notes in his essay “Machine Writing and the Turing Test,” the love letter generator has often been discussed in terms of queer identity, rather than in literary terms. Certainly there are reasons for this — Turing and Strachey were both gay, and at least Turing openly so. (In fact, it was only a few years later that Turing committed suicide — after arrest and conviction for homosexual activities, followed by a sentence of hormone injections that caused him to grow breasts.) It might also seem from the most widely-reproduced outputs of the generator (e.g., that found in Hodges’s bio) that it was a love-letter generator that “could not speak its name” (the word “love” being conspicuously absent). But I suspect that the primary reason for the lack of literary discussion of Strachey’s generator is that the output simply isn’t very compelling. Here, for example, are the two outputs reproduced in Encounter:
You are my avid fellow feeling. My affection curiously clings to your passionate wish. My liking yearns for your heart. You are my wistful sympathy: my tender liking.
M. U. C.
My sympathetic affection beautifully attracts your affectionate enthusiasm. You are my loving adoration: my breathless adoration. My fellow feeling breathlessly hopes for your dear eagerness. My lovesick adoration cherishes your avid ardour.
M. U. C.
I would like to suggest, however, that examination of individual outputs will not reveal what is interesting about Strachey’s project. As he wrote in Encounter: “The chief point of interest, however, is not the obvious crudity of the scheme, nor even in the ways in which it might be improved, but in the remarkable simplicity of the plan when compared with the diversity of the letters it produces.” That is to say, Strachey had discovered, and created an example of, the basic principles of combinatory literature (10 years before Raymond Queneau’s One Hundred Thousand Billion Poems) — which still lie at the heart of much digital literature today. Combinatory techniques allow a relatively small number of initial materials to be arranged, following certain rules, into a vast number of possible configurations. In relatively unconstrained systems such as Strachey’s, each individual output is more likely to induce a humorous reaction than deep literary consideration. In fact, Turing biographer Hodges writes of the love letter generator that “Those doing real men’s jobs on the computer, concerned with optics or aerodynamics, thought this silly, but … it greatly amused Alan and Christopher” (p. 478). In the amusing nature of individual outputs, Strachey’s system could be said to anticipate Roger Price and Leonard Stern’s Mad Libs (conceived in 1953, but not published until 1958, see Nick’s Poems that Go intro), though the love letter generator’s more restrained combinatory vocabulary made it possible for most (rather than only a few) words to change from output to output. It is clear, however, from Strachey’s contribution to Encounter, that he also understood the other side of combinatory literature — the view of the system itself when one steps back from the individual outputs, the remarkable diversity that can be produced by a simple plan. 
Such a simple plan produces, as has been pointed out with other combinatory texts, more potentially different outputs than any of us could run our eyes across in a lifetime devoted to reading them. It is a work that can only be understood, in fact, as a system — never by an exhaustive reading of its texts.
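The scale of that diversity is easy to estimate. The list sizes below are assumptions for illustration (Strachey’s actual list sizes aren’t given in the Encounter piece), but even modest lists, run through the five-sentence plan, yield more distinct letters than anyone could read:

```python
# Assumed list sizes -- stand-ins, not Strachey's actual figures.
n_adj, n_noun, n_adv, n_verb = 30, 30, 20, 20

# Type 1: "My (adj?) noun (adv?) verb your (adj?) noun."
# Each optional slot is either omitted or filled from its list.
type1 = (n_adj + 1) * n_noun * (n_adv + 1) * n_verb * (n_adj + 1) * n_noun

# Type 2: "You are my adj noun." (adjective always present)
type2 = n_adj * n_noun

# Five body sentences, each independently of either type (ignoring the
# colon rule, which changes punctuation but not word choices).
distinct_letters = (type1 + type2) ** 5
print(f"{distinct_letters:.1e}")  # on the order of 10**42
```

The point is Strachey’s own: the interest lies not in any single letter but in the gap between the simplicity of the plan and the astronomical space of texts it defines.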
And it is not surprising that Strachey’s effort is mostly of interest in terms of how it operates, rather than in the text it produces. After all, designing interesting ways for computers to operate — algorithms, processes — is at the heart of what most computer scientists and creative programmers do, from Turing and Strachey’s moment to this day. And, as I have been arguing under the heading of “reading processes” (1, 2, 3), we need to recognize this as a potential site of literary creativity.