April 23, 2004
The Poet Laureate and the Machine
Thomas Lux, the Bourne Chair in Poetry within LCC, has organized a series of poetry readings, performances and discussions at Georgia Tech. I recently attended a reading and discussion featuring British poet George Szirtes and the US poet laureate (2001-2003) Billy Collins. Their discussion of their own creative process as poets led me to think about poetry generation, and particularly about my discomfort with the purely statistical approaches employed by systems such as gnoetry (1 2 3).
It was great to hear George and Billy both read poems and discuss the process of writing poetry, using their poems as examples. One issue they discussed was the problem of finding a balance between revealing and concealing. A poem that conceals too much from the reader becomes private language, something the reader is completely unable to enter. But a poem that reveals too much, that wears all of its meanings on its sleeve, in some sense fails to be poetry, fails to lead the reader to meanings not capturable in everyday language, fails to underlay meaning with mystery. One analogy they used for this was eye charts. On an eye chart, everyone can read the big “E” at the top of the chart. Eventually you get lines that are hard, and then impossible, to read. A poem shouldn’t consist only of big Es or only of the tiny lines at the bottom, but, like the eye chart, should have layers.
George read a love poem, and talked about the difficulty of writing love poems. On the one hand, the poet is trying to evoke an emotion that feels uniquely their own; on the other hand, billions of people have felt the same emotion and tens of thousands have written poetry about it. How do you handle a topic like this without becoming trite and clichéd? Billy commented that a poet is defining ideas and experiences, almost in the sense of a dictionary definition, for which there are no dictionary terms. The poet is defining specific and precise senses of sadness, or love, or of an idea (e.g. of being young or old), or a response to an event, etc. An audience member commented that she was only able to write poetry during moments of excessive emotion, and asked if it’s possible to write poetry in other states. Both of them responded that it is not only possible, but necessary – that, when trying to capture a precise experience, they both work in a state of somewhat abstracted calm, sitting above the potentially traumatic or ecstatic experience they are capturing (offering a definition for), driving their minds around to understand the contours of the specific something (“Is it this, no. A bit more of this, yes…”).
Both of them talked about the difficulty of figuring out where to end a poem. Often, after getting down the main body of the poem, there is something more it needs, something to truly complete it. This is related to the layering of meaning (the eye chart); if you stop too soon, the poem doesn’t have enough depth; if you stop too late, the poem starts becoming confused, incoherent.
Also striking was the sheer volume of poetry they could quote from memory. During the discussion they would often quote poems as examples. Obviously their own poems are deeply informed by broad and deep reading of poetry.
Now, returning to poetry generation, notice that a probabilistic walk of a learned n-gram model (word co-occurrence probabilities) bears no relation to the strategies and problems described by George and Billy. What would it mean to build a poetry generator that actively reasons about the layering of meaning, about what’s been revealed and what’s been concealed? First of all, such a generator would need a notion of meaning, which n-gram models don’t have. What would it mean to have a generator that actively tries to capture and define a precise experience? Such a generator would have to have a model of experience, certainly an emotion model. But the emotion model would need to be more complex than the typical “small collection of real-valued knobs” emotion architecture in AI systems. If the system’s subjective state consisted of only a few real-valued knobs with labels like “anger”, “sadness” and so forth, then, in a model of generation in which a poem tries to capture a subjective state, what could the system say?
With a sadness of 3.2
I feel so blue and yet
goal failure has induced in me
an anger of 5.3, undirected
as I can not infer the cause
of this unlikely turn
What you’d really want is a model in which subjective state (including emotional state) is globally distributed throughout the architecture, in the manner that Aaron Sloman has been talking about for years.
Probabilistic models do capture, to some degree, the idea that good poets have read a lot of poetry. In the case of n-gram models, the word co-occurrence probabilities are learned from a corpus of poems. But unlike George and Billy, who have extracted strategies, topics, a sense of historical progression, turns of phrase and style, from the poems they’ve read, probabilistic generators at best have extracted a sense of style, but a sense of style divorced from meaning.
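To make concrete what I mean by a probabilistic walk, here’s a minimal sketch (a toy, not gnoetry’s actual code, with an invented three-line corpus standing in for a real collection of poems): bigram co-occurrence counts are tallied from the corpus, and generation simply chains random word choices. Nothing in this process has any representation of what the words mean.

```python
# A minimal sketch of an n-gram (bigram) walk over a learned word
# co-occurrence model. Toy example only; gnoetry's implementation differs.
import random
from collections import defaultdict

def train_bigrams(corpus_lines):
    """Count word co-occurrences; returns {word: {next_word: count}}."""
    model = defaultdict(lambda: defaultdict(int))
    for line in corpus_lines:
        words = line.lower().split()
        for w1, w2 in zip(words, words[1:]):
            model[w1][w2] += 1
    return model

def sample_next(model, word):
    """Draw the next word in proportion to its learned co-occurrence count."""
    successors = model.get(word)
    if not successors:
        return random.choice(list(model.keys()))  # dead end: restart anywhere
    words, counts = zip(*successors.items())
    return random.choices(words, weights=counts)[0]

def generate(model, line_count=4, words_per_line=6):
    """Chain random word choices into poem-shaped lines."""
    word = random.choice(list(model.keys()))
    lines = []
    for _ in range(line_count):
        line_words = []
        for _ in range(words_per_line):
            line_words.append(word)
            word = sample_next(model, word)
        lines.append(" ".join(line_words))
    return "\n".join(lines)

# Invented toy corpus standing in for a real collection of poems.
corpus = [
    "the sea folds its grey hands over the pier",
    "grey light over the small town and the sea",
    "the pier leans into the light like a question",
]
print(generate(train_bigrams(corpus)))
```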
This is not to say that a probabilistic word selection model wouldn’t potentially have a place within a larger poetry generation architecture. But, without additional processes and structures that actively manipulate revealing and concealing, that have the goal of expressing a precise experience, generated poems won’t reliably mean anything, won’t have interesting layers of meaning, won’t have a distinct voice (which is more than style). A larger, more heterogeneous architecture is also an interesting procedural portrait of poetry creation.
April 23rd, 2004 at 1:29 pm
Hmm, to think I thought Em was named after Dorothy’s Aunt Em, not Emily Dickinson… ;-)
April 24th, 2004 at 3:32 pm
Michael > This is an area I’m (perhaps embarrassingly) not so familiar with, so please forgive in advance this question:
Why would you want to create an n-gram poetry generator? As a viable generative model, even if one could construe the output of such systems as “poetry,” wouldn’t it be poetry only within the framework of the pattern-matching algorithm, or else in the same way that one might construe a urinal as art (found art, a la Duchamp)? Am I wrong in thinking that n-gram poetry would be quite different from visual grammars such as those used to recreate works in the style of certain painters?
I think your answer is, “maybe you wouldn’t,” thus your comment that subjective state should be globally distributed throughout the architecture.
There are a few obvious points to make in this regard, such as the fact that the “user” is also implicated in that subjective state machine. Perhaps a more interesting question to ask is this:
Is this an architectural problem or a linguistic problem? That is to say, do we need better algorithms (e.g. better Markovian implementations), or entirely new procedural languages (e.g. HAP and ABL)? I’m tempted to say that the answer is the latter.
If you think about the evolution of writing in natural languages, it only became possible to use written syllabaries and alphabets for myth, poetry, and drama after they moved beyond logographic writing, primarily used in accounting and other bureaucratic affairs (e.g. early cuneiform and Linear B). Is this the same situation we find ourselves in with current object languages, which have primarily been used for our modern day equivalent of accounting and bureaucratic affairs? Are languages such as C++ and Java inherently “incompatible” with poetry generation, or am I propagating a false analogy?
April 27th, 2004 at 2:00 pm
Michael, I suppose the real question is whether we’re trying to create another human poetry author (of which we already have a few, and some of them are pretty good) or we’re trying to create something else — something that provides a recognizably-poetic experience that is not possible with a human poet.
Sure, we could say that one difference in experience is that an automated poet produces poems on command. But there are so many poems being written and published at any given moment that we don’t need a computer to have poems on command. We could experience the same effect by seeking out paper and electronic poetry publications and just reading another published poem instead of hitting the “generate” button and reading the output.
I think part of what people like about n-gram text is the literal recycling. It automates unexpected intertextualities. For this enjoyment to take place people either need to be able to feed in their own text or they need to have some familiarity with the text being used. I’m not sure what’s enjoyable about the systems where this is hidden — perhaps the mental play between the recognition of style, and of markers of meaning, while simultaneously knowing there’s no direct human authorship?
I wonder what might provide an enjoyment analogous to n-gram intertextualities when interacting with the type of system you’ve outlined. Perhaps being able to tweak the emotional model and then generate the “same” poem? Is it about exploring the emotional model, like the n-gram approach is on some level about exploring the textual space?
BTW, I like that poem. Goal failure makes me feel that way too.
April 27th, 2004 at 2:45 pm
But there are so many poems being written and published at any given moment that we don’t need a computer to have poems on command.
To me, there are several interesting reasons to build a creative AI, one that could reason about its own knowledge, feelings etc. during the creation process — but one of those reasons is probably not to simply have just another author in the world indistinguishable from other typical human authors, of which there are plenty. Better reasons would be:
You could have your own personal creative conversation / experience / collaboration with this creative artificial intelligence. There may be many human poets out there, but not necessarily ones willing to collaborate with you as you wish, so having your own personal creative collaborator could be nice.
A creative AI could potentially pull off some creative stunts that are pretty difficult to do even by humans, e.g. write and manage a multi-character drama in real-time.
A creative AI might in fact generate some unusual writing that humans typically don’t come up with, and so the resulting work might be novel and entertaining.
And finally, the process of building such an AI would reveal many interesting things about the creative process itself, just as the construction of grandmaster chess AIs touches a nerve in most of us.
April 28th, 2004 at 1:30 pm
Given that gnoetry is named as an example of the “statistical approaches to poetry” that make Michael discomfited, I thought I’d weigh in on the discussion here, which hinges on a particular understanding of “meaning.” To put it harshly, for a moment, poets like Collins and Lux are prostitutes of the narratival and the noumenal. The analogy of the eye-chart presupposes that sight (that is, understanding) is hierarchically situated, and poetry tries to fix (place and correct) the vision between the big E and the smaller, indecipherable letters. As this analogy places a premium on an idea of “correct” vision, one can only assume that poetry must always be a corrective lens, never obfuscating or making strange. The poets’ worry, too, over finding a proper ending seems to me to be a worry over finding the “right” way to direct the reader as to the overall “meaning” of the poem–a rather tyrannical displacement of the reader as arbiter of possible meanings. There seems to be one way to read the poem, and poets like Lux and Collins want to be sure to stranglehold their poems into a single, epiphanic, theoretical jacket.
Also, style is never “divorced” from meaning–style, I think, IS meaning, or at least a form of meaning. The advantage of gnoetry is that, because it uses extant texts to build a corpus of language to be statistically analysed, certain authorial styles survive the stochastic process, as does emotive data. (One must also wonder whether the process of a human writing poetry is not itself a statistically modelled activity–Lux and Collins being able to quote poetry from memory is a matter of calling up data at an appropriate moment, laced with ideas of probability and statistical models.)
Given that consciousness itself acts to find order in disorder, to create meaning from the chaos of the world, a system like gnoetry privileges the reader’s ability to supply what meaning she’s always already supplying by merely being a thinking being. Poems created with gnoetry are open systems that do not impose a strictly narratival sense of sense given that the reader might do so on her own if she wishes.
But, in poetry, sense must not be straitjacketed into story, or one would have to dismiss at least half of all poetry written since 1850. Collins and Lux would never be interested in, say, the Symbolist, the dada, the Steinian, the Oulipian or the Language poets who attempt to inform other ways to mean in language unless there happens to be a storied meaning in the work.
Poems include within themselves theories about how they mean–certain poets want there to be a single “meaning” and see only one way to arrive at that meaning. Poems generated by gnoetry fall into a contra-tradition which presumes that language is always meaning something, even if the surface of the language is “incoherent,” the language does not find an emotive “common ground,” or the language does not “go forth/to plant as many kisses upon the world/as the world can bear!” (Lux, “Render, Render,” from his latest book The Cradle Place).
April 29th, 2004 at 5:17 pm
Hmm, to think I thought Em was named after Dorothy’s Aunt Em, not Emily Dickinson… ;-)
Just to expand Andrew’s joke a bit (I know, explaining a joke kills all the humor), Andrew’s referring to the Em model of emotion that was integrated into Hap (the language that ABL, used to author the characters in Facade, is based on). Em is an instance of a goal appraisal model of emotion, in which emotions are generated as a function of goal processing – goal success (happiness), failure (anger), inferred probabilities of success (hope) and failure (fear), and so forth, are the causes of emotional state.
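For concreteness, here’s a hypothetical sketch of what such a “knobby” goal appraisal model looks like (this is not the actual Em or Hap code, just an illustration of the idea): a handful of real-valued variables nudged up and down by goal-processing events.

```python
# A hypothetical sketch of a "knobby" goal appraisal emotion model in the
# spirit of Em (not the actual Em/Hap code): emotions are a few real-valued
# variables updated by goal-processing events.
from dataclasses import dataclass

@dataclass
class AppraisalState:
    happiness: float = 0.0
    anger: float = 0.0
    hope: float = 0.0
    fear: float = 0.0
    decay: float = 0.9  # each tick pulls every knob back toward neutral

    def tick(self):
        self.happiness *= self.decay
        self.anger *= self.decay
        self.hope *= self.decay
        self.fear *= self.decay

    def goal_succeeded(self, importance: float):
        self.happiness += importance

    def goal_failed(self, importance: float):
        self.anger += importance

    def success_likely(self, importance: float, probability: float):
        self.hope += importance * probability

    def failure_likely(self, importance: float, probability: float):
        self.fear += importance * probability

# An important goal fails; a less important one looks likely to fail too.
state = AppraisalState()
state.goal_failed(importance=5.3)
state.failure_likely(importance=2.0, probability=0.6)
state.tick()
print(state)  # anger ~4.8, fear ~1.1 -- a handful of labeled numbers
```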
BTW, I like that poem. Goal failure makes me feel that way too.
So my bit of doggerel was intended to show that a goal appraisal model, which is an instance of a “knobby” model of emotion, doesn’t provide a rich enough subjective state for a poetry generator. I did have fun writing it, though – I was thinking of the poetry generated by the Electropoet in Stanislaw Lem’s collection The Cyberiad.
Am I wrong in thinking that n-gram poetry would be quite different from visual grammars such as those used to recreate works in the style of certain painters?
An n-gram model could be viewed as a kind of stochastic grammar (though it isn’t quite that). Both make use of random elaboration at choice points. In the n-gram model, the distribution over choices depends on the local neighborhood of word choices (so probabilities chain), while in grammar-based generation systems, random choices occur wherever there are multiple allowed decompositions specified in the grammar. Grammars can be learned, and distributions can be placed over decomposition choices, so there are some interesting relationships.
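As a toy illustration of grammar-based generation with distributions over decomposition choices (the rules here are invented, not drawn from any actual system): random choices occur wherever a symbol has more than one allowed expansion, and weights can be placed over those expansions.

```python
# A toy sketch of grammar-based generation with distributions over
# decomposition choices (invented rules, not any system's actual grammar).
import random

# Each nonterminal maps to a list of (expansion, weight) pairs.
GRAMMAR = {
    "LINE": [(["NP", "VP"], 1.0)],
    "NP":   [(["the", "N"], 0.7), (["the", "ADJ", "N"], 0.3)],
    "VP":   [(["V", "NP"], 0.6), (["V"], 0.4)],
    "N":    [(["sea"], 0.4), (["pier"], 0.3), (["light"], 0.3)],
    "ADJ":  [(["grey"], 0.5), (["small"], 0.5)],
    "V":    [(["folds"], 0.5), (["leans"], 0.5)],
}

def expand(symbol):
    """Recursively expand a symbol, drawing each decomposition from its distribution."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    expansions, weights = zip(*GRAMMAR[symbol])
    chosen = random.choices(expansions, weights=weights)[0]
    words = []
    for s in chosen:
        words.extend(expand(s))
    return words

print(" ".join(expand("LINE")))  # e.g. "the grey sea leans"
```

Contrast this with the n-gram walk, where each choice is conditioned only on the immediately preceding words.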
Is this the same situation we find ourselves in with current object languages, which have primarily been used for our modern day equivalent of accounting and bureaucratic affairs? Are languages such as C++ and Java inherently “incompatible” with poetry generation, or am I propagating a false analogy?
It’s not so much that a language like C++ or Java is incompatible, but that, on their own, they don’t provide an architecture that supports thinking about poetry generation. One might use C++ or Java to build such an architecture, but the authorial affordances, the “hooks” that support the human author in thinking about the conceptual space of poetry generation in a manner grounded in code, inhere in the architecture, not in the underlying implementation language. Of course, a tried and true approach to knowledge representation is to define new languages (implemented in other languages) that support talking in code in a way more directly related to the domain (e.g. poetry generation). This was the approach we took with Facade (there are actually three custom languages in Facade – one for the characters, one for the drama manager, and one for the natural language understanding system). But architectures don’t necessarily incorporate new languages.
But there are so many poems being written and published at any given moment that we don’t need a computer to have poems on command.
As Andrew pointed out, there are many different reasons to explore poetry generation. For me, I’m fascinated by AI models of creative processes, not so much because I want to make claims about having captured human creativity, but because such models serve as interesting representations of ourselves (procedural portraits) and because the resulting architectural ideas can be used in building new genres of AI-based interactive art, pieces that deeply and generatively respond to audience interaction. As a particular case, I find poetry generation interesting because of the complex intertwining of meaning and style, and because of the (traditional) function of poetry as evocative of subjective states, states which an AI program presumably lacks. As a problem, poetry generation simultaneously pushes on the architectural ramifications of meaning, style and subjective state.
To put it harshly, for a moment, poets like Collins and Lux are prostitutes of the narratival and the noumenal. … –style, I think, IS meaning, or at least a form of meaning.
I agree that in poetry, style and meaning are tightly wound together (one of the reasons poetry generation is interesting to think about). I’m familiar with the post-modernist assertion that meaning is style, that there is no depth, only surface. Without getting into a barren theoretical discussion about whether this is true, it’s interesting to think about what this would mean in the context of building a poetry generator. If it was true that there is only style, then a statistical approach, which can be thought of as a “pure style” approach, should be able to produce not only language poetry, but also poetry like Collins’ – the “narratival” and “noumenal” is just style as well. Yet statistical approaches can’t pull off this kind of poetry – the generated poetry contains interesting turns of phrase, islands of imagery, but you can tell it’s all surface. So, using the distinction between style and meaning as a heuristic for system design, one can begin thinking about architectures which add more depth to the generation process. Yes, style and meaning are strongly intertwined, but collapsing the distinction to a single pole reduces the space of possibility for machinic architectures.
… Poems include within themselves theories about how they mean–certain poets want there to be a single “meaning” and see only one way to arrive at that meaning. Poems generated by gnoetry fall into a contra-tradition which presumes that language is always meaning something, even if the surface of the language is “incoherent,” the language does not find an emotive “common ground,” or the language does not “go forth/to plant as many kisses upon the world/as the world can bear!”
I’m not “anti-gnoetry”. Gnoetry is an interesting system precisely because it’s part of a well articulated agenda of text production. The architecture and the conceptual project mutually define each other (good authorial affordances). But in my own work, I’m leery of relying too much on a random number generator. I’m interested in generative architectures in which things happen for reasons other than numbers being drawn from a distribution; when things happen for a reason, the author of the system has hooks for intervening in the generative process.
The contra-tradition you pursue is not the whole of poetry. I’m interested in looking at other poetry traditions (even if they are “old fashioned”) as ways to think about new generative architectures. Stochastic poetic techniques (Dadaism, language poetry, Oulipian constraints) are among the easiest to imagine mechanizing, precisely because of their dependence on random variation. But where can this approach go in the future? How can it be applied in interactive contexts? I don’t see purely statistical approaches as supporting much future exploration in generative architectures, though such approaches will certainly be part of larger architectures.
April 30th, 2004 at 1:45 pm
Style is never “just surface,” and so I wouldn’t align myself with the Post-Modernists, whoever they happen to be now. Style–and yes, Collins and Lux have a style, but one that they consistently mitigate in order to have “meaning”–is an aspect of class, ideology, psychology, history, what have you. Styles are not just idiosyncratic voices or surface features but thesauri of how one is trying to mean.
And so: I’m not sure there is anything that can be called “pure style.”
What I meant, I believe, when bringing up the contra-tradition was not that it is THE tradition, but the tradition to look to when thinking about gnoetry and other statistically based poetries. Personally, of course, I wouldn’t really be interested in seeing a program that could mimic a Collins or a Lux or a Mark Strand.
Statistics and algorithms and constraints ARE good enough “reasons” for poetic things to happen, and Gnoetry 0.2 is set up to facilitate a true collaboration between equal poetic partners. I think that gnoetry is closer to the kind of architecture you’re outlining than it may seem.
That said, I’m just the pin-up girl for gnoetry. Mr. Jon Trowbridge, the coder, could say a lot more about the technical aspects and the reasons behind the interface.
Anyone who is running Linux can, ostensibly, use gnoetry. If anyone’s interested in trying to run Gnoetry 0.2, which is not completely finished but is up and running, just email jon@trowbridge.org.
May 4th, 2004 at 7:06 pm
You can find more reactions to this thread in a new top level blog post, “Unconscious Thinking”
May 6th, 2004 at 9:29 am
Whats a poet leurette and what are it origins?
May 6th, 2004 at 10:25 am
It’s spelled “poet laureate” and information about the title is online in the usual places.
April 13th, 2006 at 1:57 am
[…] extauto.gatech.edu/2004/02/18/the-dublin-of-dr-moreau/ Mateas, Michael and various (2004b) The Poet Laurete and the Machine, Grand Text Auto, […]