May 4, 2004

Unconscious Thinking

by Andrew Stern · 6:45 pm

I’ve been thinking about chatterbots, as well as the recent discussion about poetry generation using statistical methods, and about what these systems do and what they don’t do.

I recently played with and read up on ALICE, a state-of-the-art text-based chatterbot. Primarily authored by Richard Wallace, ALICE has twice won an annual Turing test-like competition called the Loebner Prize. To create ALICE, Wallace developed AIML, a publicly-available language for implementing text-based chatterbots.

Gnoetry has been discussed several times here on GTxA, most recently here. From its website, “Gnoetry synthesizes language randomly based on its analysis of existing texts. Any machine-readable text or texts, in any language, can serve as the basis of the Gnoetic process. Gnoetry generates sentences that mimic the local statistical properties of the source texts. This language is filtered subject to additional constraints (syllable counts, rhyming, etc.) to produce a poem.”

In my experience with them, ALICE and Gnoetry are entertaining at times, sometimes even surprising. They clearly have some intelligence.

But something feels missing from these artificial minds. I decided to try to understand why I have trouble caring about what they have to say. What precisely would they need to do, beyond or instead of what they currently do, to make me care? (Is it just me? :-)

Both systems have a lot of raw content they draw upon — data, or database, if you like. Gnoetry works from a corpus of raw text you supply it, as small or large as you wish, such as hand-authored poems, lyrics or stories. ALICE contains several thousand hand-built, relatively simple language patterns — empirically-derived templates of how people typically speak — that get matched to the player’s typed-text input. Along with the templates, ALICE has a large database of hand-authored, one-liner responses to say back to the player.

Data is part of what knowledge is, and so we can definitely say that each of these programs has a non-trivial amount of knowledge in it. But another part of knowledge is how you use that data — your algorithms for processing (“thinking about”) your data. Furthermore, how a mind thinks about its data is greatly influenced by how the data itself is organized / represented in the first place.

ALICE doesn’t reason very much over its data, nor is its data represented in a way that would allow for much reasoning. In ALICE there is essentially a mapping between its many, many pattern matches and its many responses. This mapping is often simple, but can get less so: the AIML language allows patterns to combine with each other, making the matching more robust. AIML also offers some ability to keep track of recent conversational state, allowing ALICE to engage in short discourses, e.g. ask a question and wait for an answer. But overall, these capabilities are limited. (I believe Wallace intended AIML to have a simple but powerful set of capabilities, to make it easier for novice programmers to use.)
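
To make that concrete, here is a minimal Python sketch of the kind of stimulus-response mapping I’m describing. The handful of rules is invented for illustration (this is not AIML, and certainly not ALICE’s actual content), and the one-step memory of the bot’s previous line is only a rough stand-in for AIML’s conversational-state features.

```python
import re

# A few toy categories in the spirit of AIML: an input pattern, an optional
# "previous bot line" the rule depends on, and a canned response template.
# All of these rules are invented for illustration.
CATEGORIES = [
    # input pattern            previous bot line (or None)    response template
    (r"HELLO.*",               None,                          "Hi there! What is your name?"),
    (r"MY NAME IS (.+)",       None,                          "Nice to meet you, {0}."),
    (r"YES.*",                 r"DO YOU LIKE MOVIES.*",       "What is your favorite film?"),
    (r".*MOVIES.*",            None,                          "Do you like movies?"),
    (r".*",                    None,                          "Tell me more."),
]

def respond(user_input, last_bot_line=""):
    """Map one input to one response: mostly stateless, with at most a
    one-step memory of what the bot itself just said."""
    text = user_input.upper().strip(" ?!.")
    that = last_bot_line.upper().strip(" ?!.")
    for pattern, that_pattern, template in CATEGORIES:
        if that_pattern and not re.fullmatch(that_pattern, that):
            continue  # this rule only fires in a specific conversational context
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "I see."

# A short exchange: each reply depends only on the current input,
# plus (at most) the bot's previous line.
bot_line = ""
for line in ["Hello", "My name is Andrew", "I watch a lot of movies", "Yes"]:
    bot_line = respond(line, bot_line)
    print(f"> {line}\n{bot_line}")
```

Even in this tiny form, you can see the shape of the thing: lots of pattern / response pairs, and only the thinnest thread of context connecting one exchange to the next.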

Gnoetry arguably does more sophisticated processing of its data than ALICE, but from what I understand it only pays attention to particular features of that data — its local statistical properties, e.g. which words tend to follow other words. Furthermore, a human assists the program in polishing and editing the final poems (the system’s creators want the poems to be human-machine collaborations).
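
For the statistical side, here is a similarly rough Python sketch of the general idea as I understand it: tally which words follow which in a source text, random-walk over that table, and keep only lines that satisfy a constraint such as a syllable count. The crude vowel-counting and the tiny corpus are my own stand-ins, not anything Gnoetry actually does.

```python
import random
import re
from collections import defaultdict

def count_syllables(word):
    # Very crude estimate: count groups of vowels. A stand-in, not
    # whatever constraint machinery Gnoetry really uses.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def build_bigrams(text):
    # Record which words tend to follow which other words in the source.
    words = re.findall(r"[a-z']+", text.lower())
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate_line(follows, target_syllables, tries=2000):
    # Random walk over the bigram table; keep only lines that land
    # exactly on the requested syllable count.
    starts = list(follows)
    for _ in range(tries):
        word = random.choice(starts)
        line, syllables = [word], count_syllables(word)
        while syllables < target_syllables and follows.get(line[-1]):
            word = random.choice(follows[line[-1]])
            line.append(word)
            syllables += count_syllables(word)
        if syllables == target_syllables:
            return " ".join(line)
    return "(no line found)"

# Tiny toy corpus; in practice any machine-readable text(s) would serve.
source = "the sea was calm and the moon was low and the night was long"
follows = build_bigrams(source)
for target in (5, 7, 5):  # e.g. a haiku-shaped syllable filter
    print(generate_line(follows, target))
```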

Because of ALICE’s primarily stimulus-response mapping, and Gnoetry’s statistical processing methods, at first I’m tempted to say these systems are “faking it”, but that would be wrong. Once you have enough knowledge represented, even in these somewhat straightforward forms, you can achieve some interesting things. Again, ALICE and Gnoetry are intelligent in certain ways; in fact, people sometimes act in similar ways. As Wallace points out,

Experience with A.L.I.C.E. indicates that most casual conversation is “stateless,” that is, each reply depends only on the current query, without any knowledge of the history of the conversation required to formulate the reply. Indeed in human conversation it often seems that we have the reply “on the tip of the tongue” even before the interlocutor has completed his query. Occasionally following the dialogue requires a conversational memory of one more level, implemented in AIML with <that>. When asking a question, the question must be remembered long enough to be combined with the answer. These same remarks are not necessarily true in situations requiring highly structured dialogue, such as courtrooms or classrooms. But in the informal party situation human conversation does not appear to go beyond simple stimulus-response, at least not very often.

Likewise, in our most recent Gnoetry discussion I appreciated Eric’s suggestion that when speaking or writing, people might in fact do some form of statistically-based creation if they draw upon their own memorized corpus of texts or phrases, i.e., stuff they’ve read and remembered by rote. (Of course, to the extent they’ve understood and conceptualized what they’ve read, they would be moving away from statistical creative processes.)

It occurred to me that perhaps AIML’s and Gnoetry’s AI techniques are analogous to certain types of unconscious thinking, such as stimulus-response behavior, or pattern matching, or somewhat random word association / juxtaposition. These are different from AI techniques that are analogous to conscious thinking: an explicit will to act, including the deliberate decision to act or not; explicitly trying to be creative, perhaps with some (loose) criteria for what you’re intending to create, including reflecting upon and evaluating what you are creating, which feeds back into the creative process.
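
One toy way to picture that difference in code is to wrap a goal-less, “unconscious” generator inside a deliberate loop that has an explicit (if loose) intention, reflects on each attempt, and decides when to stop. This is only my own framing sketched in Python, not a description of how any existing system works.

```python
import random

def unconscious_generate(corpus_words):
    """Reactive layer: spit out a random juxtaposition. No goal, no evaluation."""
    return " ".join(random.choice(corpus_words) for _ in range(6))

def evaluate(candidate, goal_words):
    """A deliberately loose criterion: how many of the goal words showed up."""
    return sum(word in candidate.split() for word in goal_words)

def deliberate_create(corpus_words, goal_words, attempts=50):
    """'Conscious' layer: explicitly intends something, reflects on each
    attempt, keeps the best so far, and decides when the result is good enough."""
    best, best_score = None, -1
    for _ in range(attempts):
        candidate = unconscious_generate(corpus_words)   # unconscious proposal
        score = evaluate(candidate, goal_words)          # reflection / evaluation
        if score > best_score:
            best, best_score = candidate, score
        if best_score >= len(goal_words):                # decision to stop: goal met
            break
    return best, best_score

corpus = "the sea was calm and the moon was low and the night was long".split()
line, score = deliberate_create(corpus, goal_words=["moon", "night"])
print(score, line)
```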

As we know, the conscious act of being creative can be greatly assisted by unconscious processes. From what I understand about how people’s minds work, conscious thinking is often affected by what’s going on unconsciously. Emotions seem to be an example of this. We typically can’t create or control our emotions; at best we consciously try to suppress the behaviors they induce. Yet emotions have a major effect on our conscious thinking. This suggests that, to build a creative AI, as Michael pointed out, it may be useful to create large, heterogeneous architectures — systems that have several subsystems working concurrently, directly or indirectly influencing one another.

On the one hand it’s amazing that systems like ALICE and Gnoetry have performed as well as they have. On the other hand, it just shows how far you can get with a relatively large amount of data and relatively straightforward, “unconscious” processing of that data.

I’ll go out on a limb and make a rough and unpolished guess at what, for me anyway, is missing in systems like ALICE and Gnoetry. I want AIs to want. Not to be “conscious” (whatever that is), but to have explicit intentions: things they need to do, that they are actively and intentionally doing, that have been deliberated over, and that they could have chosen *not* to do.

These wants can’t be simple, though; they can’t just be represented as straightforward goals. Nor should the wants be rational goals that are robotically pursued, like an autonomous vacuum cleaner, or even the Sims. The things a creative AI wants to have, or to do, can sometimes be, and should be, irrational, even unresolvable. In fact it would be most interesting if (just like people, like me) it had conflicting sets of wants. By its nature, its wants could never be satisfied. Such internal conflict, by the way, would be a very good generator of emotion, which would then feed back to influence its creative processes.

It would also be hard to care about a mind that is static. It would need to change and progress to some extent. This doesn’t mean the agent has to be so sophisticated that it can learn new behaviors, but it does need to have its own internal state, its own little understanding of its world, that changes significantly over time, influenced by its reactions to external events and/or by internal thinking that happens over time.
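
At the risk of making this speculation look more worked-out than it is, here is a toy Python sketch of the kind of agent I’m imagining: two conflicting, never-fully-satisfiable wants, a crude tension signal standing in for emotion that feeds back into what it says, and internal state that changes as events happen. All of the specifics here (the drive names, the numbers) are invented for illustration, not a proposal for a real architecture.

```python
import random

class WantingAgent:
    """Toy agent with two wants that pull against each other, a crude
    emotion signal derived from that conflict, and state that changes
    over time. Purely illustrative."""

    def __init__(self):
        # Two conflicting, never-fully-satisfiable wants (each 0..1).
        self.want_closeness = random.random()
        self.want_solitude = random.random()
        self.mood = 0.0  # crude stand-in for emotion

    def step(self, event):
        """React to an external event and let internal state drift."""
        if event == "someone_talks_to_me":
            self.want_closeness = max(0.0, self.want_closeness - 0.3)
            self.want_solitude = min(1.0, self.want_solitude + 0.2)
        elif event == "left_alone":
            self.want_solitude = max(0.0, self.want_solitude - 0.3)
            self.want_closeness = min(1.0, self.want_closeness + 0.2)
        # The unresolvable conflict between the wants generates "emotion",
        # which then colors what the agent chooses to say.
        tension = min(self.want_closeness, self.want_solitude)
        self.mood = 0.7 * self.mood + 0.3 * tension

    def speak(self):
        dominant = ("closeness" if self.want_closeness > self.want_solitude
                    else "solitude")
        tone = "agitated" if self.mood > 0.4 else "calm"
        return f"({tone}) right now I mostly want {dominant}"

agent = WantingAgent()
for event in ["someone_talks_to_me", "someone_talks_to_me", "left_alone"]:
    agent.step(event)
    print(event, "->", agent.speak())
```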

Without wants and change, how could I ever empathize with a mind? If it doesn’t need anything, and doesn’t try to get it, how can I care about what it’s saying? If an AI agent produces responses in an essentially stimulus-response way, or performs a sophisticated regurgitation / recycling of a corpus of data someone else wrote, the surface text it produces may be human-like and perfectly enjoyable, but I have trouble caring about the agent itself, and therefore I care significantly less about the text. Unwanting systems can end up more like fancy mirrors than minds: they rely heavily on other people’s data, and the images they produce can be interesting, but they give us little reason to care about them. I believe I’d care much more about what I’m reading if I knew the text had come from a wanting and changing mind, real or artificial.


The text of this blog entry is licensed under a Creative Commons License.