September 9, 2003
Machine Learning and Literary Work
I’ve been thinking about machine learning more acutely than usual recently, since I’m part of a seminar on the topic this semester. And I’ve been wondering about literary applications of support vector machines and kernel methods and so on. (Sounds fun, doesn’t it?)
One of my fellow denizens of ifMUD put together a short article a while back about how recent AI techniques, including machine learning, might be used in IF. The suggestion that the NPCs might learn about the game world reminded me of some work in the Oz project (“I am learning my voice”). Still, that’s one of several ideas I have heard about that seem to focus specifically on optimizing parts of the virtual world in relation to one another, rather than in relation to the user. I’m interested in how systems can be enhanced to improve the interactive literary qualities of the work. While the article introduces statistical AI to those who may have different AI associations, it doesn’t list existing literary work that has been done using, for instance, machine learning techniques. I can think of a few creative computer systems (Black & White, for instance) that use techniques like reinforcement learning, but except for the poetry generator Gnoetry I can’t think of much literary work along these lines, and nothing interactive.
Surely the field of research that brought us face recognition, credit card fraud detection, and handwritten digit classification must have borne some literary fruit?
September 9th, 2003 at 10:48 pm
Well, the metaphor behind Darwinian Poetry is sort of the opposite of AI, but it might be worth examining…
The idea is the computer shows you two poems, and you choose which of the two you like. The poems that get chosen are then “bred” with each other, to create new poems that are then served out for more voting. If it were somehow possible to find out the characteristics of surviving poems, maybe the action of “breeding” the poems could be made more intelligent.
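A minimal sketch of that vote-and-breed loop, treating a poem as a list of words. Everything specific here is invented for illustration (the crossover point, the mutation vocabulary, and the `prefer` callback standing in for the human vote); it is not how Darwinian Poetry is actually implemented:

```python
import random

def breed(poem_a, poem_b, mutation_rate=0.1, vocabulary=("moon", "stone", "river")):
    """Cross over two poems (word lists) at a random cut point,
    occasionally mutating a word from a small vocabulary."""
    cut = random.randint(1, min(len(poem_a), len(poem_b)) - 1)
    child = poem_a[:cut] + poem_b[cut:]
    return [random.choice(vocabulary) if random.random() < mutation_rate else w
            for w in child]

def evolve(population, prefer, generations=50):
    """Repeatedly hold two pairwise "votes" (prefer(a, b) stands in
    for the human reader), breed the two winners, and let the child
    replace a random member of the population."""
    for _ in range(generations):
        a, b = random.sample(population, 2)
        parent1 = a if prefer(a, b) else b
        c, d = random.sample(population, 2)
        parent2 = c if prefer(c, d) else d
        population[random.randrange(len(population))] = breed(parent1, parent2)
    return population
```

Note that, as discussed below, no individual poem ever improves; only the population does, through selection.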
September 10th, 2003 at 12:48 am
The idea of evolutionary improvement is an intriguing one that has received a good amount of attention in AI.
Even when automated (by running strategies against one another according to rules and seeing which ones persist), evolutionary methods are indeed the “opposite” of machine learning. Machine learning systems improve their performance over time, while evolutionary systems, considered individually, don’t change. It’s only by selecting particular systems, the winners, that such an approach results in overall improvement.
September 10th, 2003 at 4:52 am
So does that mean machine learning rules out a collective system and only wants to deal with individual objects or cells or whatever?
Isn’t that like saying that we want individual ants to learn, while ignoring the fact that an anthill, the community of ants, already learns plenty in the course of its 15-year lifetime? Surely that’s a bit of a waste?
Yes, I read Steven Johnson’s Emergence and loved the stories about the anthills! I had had no idea that anthills go through puberty.
September 10th, 2003 at 10:39 am
Like all definitions, the definition of machine learning does rule out some things. Machine learning is the study of how a particular computer program can improve its performance on a task based on experience. The field does not consider computer programs that, for instance, perform well to begin with and keep performing at the same level.
Evolutionary approaches run a large number of programs against one another to select the one that best accomplishes the task. Typically, the programs that participate do not learn, as learning is defined in ML: they don’t improve their performance on a task based on experience.
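The textbook definition (“improve performance on a task, given experience”) can be made concrete with a single program that learns, as opposed to a pool of fixed programs being selected among. A perceptron is one of the simplest examples; this sketch tracks its error count per pass over the data, which falls as it gains experience:

```python
def train_perceptron(examples, epochs=10, lr=0.1):
    """One program improving its own performance with experience:
    each labelled example (x, y) with y in {+1, -1} nudges the
    weights whenever the current prediction is wrong."""
    w = [0.0] * len(examples[0][0])
    b = 0.0
    errors_per_epoch = []
    for _ in range(epochs):
        errors = 0
        for x, y in examples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation >= 0 else -1
            if prediction != y:
                errors += 1
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
        errors_per_epoch.append(errors)
    return w, b, errors_per_epoch
```

On linearly separable data the per-epoch error count drops to zero; that falling curve is exactly the “improvement with experience” that a fixed program in an evolutionary pool never shows.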
Although you can draw the boundary of “computer program” at different places, it’s not at all a waste to distinguish these approaches. It’s the type of precision that allows us to make advances in the field, just as scholars of new media writing benefit from distinguishing ergodic and non-ergodic texts so they can develop specific techniques for understanding specific works. If we don’t understand this distinction in the evolutionary/machine learning case, it will be much harder, for instance, to understand the work that’s been done on combining machine learning approaches with evolutionary ones.
September 10th, 2003 at 11:37 am
Learning is an interesting topic. A question I’d like to throw in is, from an interactive story design standpoint, when is learning useful?
Learning clearly makes sense for long-term, open-ended experiences with intelligent computer characters. Learning takes time, and as you interact with a character, it could alter its attitudes and associations to other characters and objects in the world, make new connections, etc., which results in altering its behavior. For example, in Creatures, interaction over time modifies connections in each Norn’s neural net. Petz and Babyz can learn associations by way of positive and negative reinforcement. Far more ambitiously, one could imagine characters that learn brand-new behaviors altogether.
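The Petz/Babyz style of association learning can be sketched very simply. This is a toy stand-in, not the actual Petz architecture: each (situation, action) pair carries a preference weight, and praise or scolding moves that weight toward the reward, which in turn shifts what the pet does next:

```python
import random

class Pet:
    """Toy associative learner: reinforcement shifts the weight of
    each (situation, action) pair, and behavior follows the weights."""
    def __init__(self, actions):
        self.actions = list(actions)
        self.weights = {}  # (situation, action) -> preference

    def act(self, situation, explore=0.1):
        # Occasionally try something random; otherwise pick the
        # action with the highest learned preference here.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.weights.get((situation, a), 0.0))

    def reinforce(self, situation, action, reward, lr=0.5):
        # Move the stored preference a fraction of the way toward
        # the reward signal (+1 for praise, -1 for scolding).
        key = (situation, action)
        old = self.weights.get(key, 0.0)
        self.weights[key] = old + lr * (reward - old)
```

After a few rounds of praising one action and scolding another in the same situation, the pet reliably chooses the praised one.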
But when thinking about ways to innovate and improve more tightly-plotted interactive stories, the idea of applying learning techniques hasn’t seemed as obviously useful to me as, say, new reactive planning techniques, NLP, or knowledge representation. Again, learning takes time; without creating stories that last a long time (weeks?), I’m not sure that enough could be learned during the play of an interactive story to have much effect. Unless it’s a short story designed to be played over and over for some reason, e.g., Facade; then theoretically you could alter subsequent plays with what you learned in previous plays.
Even if you find a good design reason for using learning, how do you implement it? How does learning “hook” into the system exactly? I think there’s a tension between top-down and bottom-up approaches here. Bottom-up approaches to characters and worlds, e.g. Creatures, the Sims, run open-ended simulations in which (fragments of) what we call “narrative” may or may not emerge over time. Whereas more top-down, authored approaches (e.g., some IF, drama managers) explicitly cause narratives to occur. From what I understand, learning techniques are more easily applied to bottom-up approaches, because they are more synthetic by nature; the bottom-up rules of a simulation are excellent “hooks” to apply learning. That is, make a small change in a rule or neural net weight, see big changes in behavior. But, IMO, bottom-up approaches, by their open-ended nature, run the risk of long stretches of dullness where nothing much is happening — a problem if you’re trying to keep things interesting and dramatic. (I talk about this a bit in this essay from a few years back.)
September 11th, 2003 at 12:47 am
Good points, Andrew – thanks for the reminder that Petz, Babyz, and Creatures are learning systems, something which I should have recalled more easily.
“when is learning useful?” is a key question for designers, and one reason I was looking for examples to refer to. Another issue is “what is supposed to learn?” The systems mentioned so far involve an agent within a simulated environment (a virtual baby on the desktop, or creature in the Creatures world, or creature in the Black & White world) and a situation in which the agent is learning. What about just having the entire program, considered as a whole, learn to improve its performance on a task such as “elicit text input from the interactor”?
Perhaps you’ve answered that already: this would at least suggest less direct control over the experience for the designer/author, making it a less welcoming approach for most people…
September 11th, 2003 at 9:06 am
Well, I think that “less direct control over the experience for the designer/author” is a general issue we have to deal with anyway. As authors of interactive work, especially work that offers the interactor agency, we willingly share control; furthermore, if the system itself learns, adapts, or otherwise acts autonomously, we share even more control. So I would hope that adding learning to a system wouldn’t be too much of a drawback to an author, since they’re (hopefully) already in the mindset of sharing.
Sure, I’m with you — having the entire program as a whole improve its performance on a task, such as eliciting text input from the interactor, would be really great. That answers the “useful” question. Then I think, how would learning hook into that task? Ideally the “task”, whatever it is, would have to have some sort of bottom-up-ish rules that accomplish its behavior, some sort of hooks, e.g., at a minimum, a set of parameters that can be tweaked, that a learning process could modify, to cause changes in behavior. That is, to make learning possible, you’d have to originally code the task itself in such a way to expose hooks to a learning process. You couldn’t write that task’s code in the “normal” top-down way.
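One way to picture those exposed hooks: code the text-eliciting task so that a single knob (here, the prompt style) is visible to a learning loop, which then tunes it from experience of which style actually gets the interactor to type something back. Everything named here (the styles, the response-rate bookkeeping) is a hypothetical sketch, not a known system:

```python
import random

PROMPT_STYLES = ["terse", "descriptive", "questioning"]  # the exposed hook

def choose_style(successes, attempts, epsilon=0.1):
    """Usually pick the style with the best observed response rate;
    occasionally pick at random so every style keeps getting tried."""
    if random.random() < epsilon:
        return random.choice(PROMPT_STYLES)
    return max(PROMPT_STYLES,
               key=lambda s: successes[s] / attempts[s] if attempts[s] else 0.0)

def record(successes, attempts, style, interactor_replied):
    """Update the experience the learner draws on."""
    attempts[style] += 1
    if interactor_replied:
        successes[style] += 1
```

The task code itself stays simple; all the learning happens in the loop that reads and writes these counters. Without the knob being exposed in the first place, there would be nothing for the learner to adjust.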
Or, perhaps more ambitiously, one could imagine (and I’m sure examples have been built) implementing tasks, such as eliciting text from a user, in a kind of top-down, simple, clean script or plan-like structure for controlling behavior. Then, you separately write AI that can truly reason about those scripts / plans / behaviors, and can actually either modify them, or write new ones. AI code that is smart enough to write new code. I believe case-based reasoning is an example of something that at least modifies already-written control structures, in an attempt to create new ones.
Michael is well read up on this kind of research; when he’s back from Cosign, he can hopefully point to some examples for us. Or maybe some of our readers can point to examples?
October 8th, 2003 at 6:20 am
While not a literary application, Microsoft Research’s work on image-based realities is certainly a worthy machine learning contribution to interactivity in virtual worlds, and well worth checking out. I attended a talk on this topic yesterday; a discussion of the project can be found here.
February 25th, 2004 at 12:11 pm
I have one question: could you explain to me how machine learning works? That is, the process by which machine learning works?
February 25th, 2004 at 1:14 pm
Not in a blog comment; you might take a look at this quick definition or this book, though.
September 12th, 2003 at 10:01 am
These folks are doing what I’ve been dreaming about. And are thinking about what I’m thinking about. How can I use gaming with my MSers to harness their native environments without killing myself? This looks very interesting but I have