May 4, 2004

Unconscious Thinking

by Andrew Stern · 6:45 pm

I’ve been thinking about chatterbots, as well as the recent discussion about poetry generation using statistical methods. I’ve thought about what these systems do, and what they don’t do.

I recently played with and read up on ALICE, a state-of-the-art text-based chatterbot. Primarily authored by Richard Wallace, ALICE has twice won an annual Turing test-like competition called the Loebner Prize. To create ALICE, Wallace developed AIML, a publicly-available language for implementing text-based chatterbots.

Gnoetry has been discussed several times here on GTxA, most recently here. From its website, “Gnoetry synthesizes language randomly based on its analysis of existing texts. Any machine-readable text or texts, in any language, can serve as the basis of the Gnoetic process. Gnoetry generates sentences that mimic the local statistical properties of the source texts. This language is filtered subject to additional constraints (syllable counts, rhyming, etc.) to produce a poem.”

In my experience with them, ALICE and Gnoetry are entertaining at times, sometimes even surprising. They clearly have some intelligence.

But something feels sorely missing from these artificial minds. I decided to try to understand: why do I have trouble caring about what they have to say? What precisely would they need to do, beyond or instead of what they currently do, to make me care? (Is it just me? :-)

Both systems have a lot of raw content they draw upon — data, or database, if you like. Gnoetry works from a corpus of raw text you supply it, as small or large as you wish, such as hand-authored poems, lyrics or stories. ALICE contains several thousand hand-built, relatively simple language patterns — empirically-derived templates of how people typically speak — that get matched to the player’s typed-text input. Along with the templates, ALICE has a large database of hand-authored, one-liner responses to say back to the player.
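Here is a toy sketch of that stimulus-response idea, written in Python as my own illustration rather than as ALICE's or AIML's actual matching code: a handful of patterns with wildcards map to canned one-liners, and the first pattern that matches the input wins.

    import re

    # A few hand-built patterns mapped to one-liner responses, in the spirit of
    # (but far cruder than) AIML categories: the first matching pattern wins.
    CATEGORIES = [
        (r"hello\b.*",                  "Hi there! What would you like to talk about?"),
        (r"my name is (?P<name>\w+).*", "Nice to meet you, {name}."),
        (r".*\bweather\b.*",            "I hear the weather has been interesting lately."),
        (r".*",                         "Tell me more."),   # catch-all
    ]

    def respond(user_input):
        text = user_input.strip().lower()
        for pattern, template in CATEGORIES:
            match = re.match(pattern, text)
            if match:
                return template.format(**match.groupdict())

    print(respond("My name is Andrew"))   # -> Nice to meet you, andrew.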

Data is part of what knowledge is, and so we can definitely say that each of these programs has a non-trivial amount of knowledge in it. But another part of knowledge is how you use that data — your algorithms for processing (“thinking about”) your data. Furthermore, how a mind thinks about its data is greatly influenced by how the data itself is organized / represented in the first place.

ALICE doesn’t reason very much over its data, nor is its data represented in a way that would allow for much reasoning. In ALICE there is essentially a mapping between its many, many pattern matches and its many responses. This mapping is often simple, but can get less simple, as the AIML language allows the pattern matches to combine with each other, allowing for more robust matching. Also, AIML offers a bit of ability to keep track of recent conversational state, allowing ALICE to engage in short discourses, e.g. ask a question and wait for an answer. But overall, these capabilities are limited. (I believe Wallace intended for AIML to have a simple but powerful set of capabilities, to better allow novice programmers to use it.)
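To illustrate the kind of one-level conversational memory that the <that> mechanism provides, here is a hypothetical Python analogue (not AIML itself): the bot remembers its own previous utterance and uses it to disambiguate a terse reply.

    class StatefulBot:
        """Toy illustration of one level of conversational memory, roughly
        analogous to AIML's <that>: the reply can depend on what the bot
        itself said on the previous turn."""

        def __init__(self):
            self.last_bot_line = None

        def respond(self, user_input):
            text = user_input.strip().lower()
            # Disambiguate a bare "yes"/"no" using the bot's previous utterance.
            if text in ("yes", "no") and self.last_bot_line == "Do you like poetry?":
                reply = "Great, I'll recite one." if text == "yes" else "Fair enough."
            elif "poetry" in text:
                reply = "Do you like poetry?"
            else:
                reply = "Tell me more."
            self.last_bot_line = reply
            return reply

    bot = StatefulBot()
    print(bot.respond("Let's talk about poetry"))   # -> Do you like poetry?
    print(bot.respond("yes"))                       # -> Great, I'll recite one.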

Gnoetry arguably does more sophisticated processing of its data than ALICE, but from what I understand only pays attention to particular features of its data — its local statistical properties, e.g. what words tend to follow other words. Furthermore, a human assists the program to help polish and edit the final poems (the system’s creators want the poems to be human-machine collaborations).
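To make that concrete, here is a toy sketch of that kind of processing: a bigram table of which words follow which, used to generate lines under a syllable budget. This is only my guess at the flavor of the technique, written in Python; it is not Gnoetry's actual code, and the syllable counter is a crude heuristic.

    import random
    import re
    from collections import defaultdict

    def count_syllables(word):
        # Very rough heuristic: count groups of vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def build_bigrams(text):
        # Map each word to the words that follow it in the source text.
        words = re.findall(r"[a-z']+", text.lower())
        followers = defaultdict(list)
        for a, b in zip(words, words[1:]):
            followers[a].append(b)
        return followers

    def generate_line(followers, target_syllables):
        # Random walk over the bigram table until the syllable budget is met.
        word = random.choice(list(followers))
        line, total = [word], count_syllables(word)
        while total < target_syllables and followers[word]:
            word = random.choice(followers[word])
            line.append(word)
            total += count_syllables(word)
        return " ".join(line)

    corpus = open("source.txt").read()       # any machine-readable source text
    table = build_bigrams(corpus)
    for target in (5, 7, 5):                 # e.g. a rough haiku shape
        print(generate_line(table, target))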

Because of ALICE’s primarily stimulus-response mapping, and Gnoetry’s statistical processing methods, at first I’m tempted to say these systems are “faking it”, but that would be wrong. Once you have enough knowledge represented, even in these somewhat straightforward forms, you can achieve some interesting things. Again, ALICE and Gnoetry are intelligent in certain ways; in fact, people sometimes act in similar ways. As Wallace points out,

Experience with A.L.I.C.E. indicates that most casual conversation is “stateless,” that is, each reply depends only on the current query, without any knowledge of the history of the conversation required to formulate the reply. Indeed in human conversation it often seems that we have the reply “on the tip of the tongue” even before the interlocutor has completed his query. Occasionally following the dialogue requires a conversational memory of one more level, implemented in AIML with <that>. When asking a question, the question must be remembered long enough to be combined with the answer. These same remarks are not necessarily true in situations requiring highly structured dialogue, such as courtrooms or classrooms. But in the informal party situation human conversation does not appear to go beyond simple stimulus-response, at least not very often.

Likewise, in our most recent Gnoetry discussion I appreciated Eric’s suggestion that when speaking or writing, people might in fact do some form of statistically-based creation if they draw upon their own memorized corpus of texts or phrases, i.e., stuff they’ve read and remembered by rote. (Of course to the extent they’ve understood and conceptualized what they’ve read, they would be moving away from statistical creative processes.)

It occurred to me, perhaps AIML’s and Gnoetry’s AI techniques are analogous to certain types of unconscious thinking, such as stimulus-response behavior, or pattern matching, or somewhat random word association / juxtaposition. These are different from AI techniques that are analogous to conscious thinking: an explicit will to act, including the deliberate decision to act or not; explicitly trying to be creative, perhaps with some (loose) criteria for what you’re intending to create, including reflecting upon and evaluating what you are creating, which feeds back into the creative process.

As we know, the conscious act of being creative can be greatly assisted by unconscious processes. From what I understand about how people’s minds work, conscious thinking is often affected by what’s going on unconsciously. Emotions seem to be an example of this. People typically can’t create or control their emotions; at best we consciously try to suppress the behaviors they induce. Emotions have a major effect on our conscious thinking. This suggests, to build a creative AI, as Michael pointed out, it may be useful to create large, heterogeneous architectures — systems that have several subsystems working concurrently, directly or indirectly influencing one another.
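A toy sketch of what I mean, purely hypothetical and not a description of any particular system: an “unconscious” emotion subsystem updates on its own in response to events, and its state biases what a “conscious” deliberative subsystem chooses to do.

    import random

    class EmotionSubsystem:
        # "Unconscious" process: reacts to events and decays on its own.
        def __init__(self):
            self.frustration = 0.0

        def update(self, event):
            if event == "goal_blocked":
                self.frustration = min(1.0, self.frustration + 0.3)
            else:
                self.frustration = max(0.0, self.frustration - 0.05)

    class Deliberator:
        # "Conscious" process: chooses actions, biased by the emotional state.
        def choose(self, options, emotion):
            if emotion.frustration > 0.5:
                return random.choice(options)   # frustrated: act impulsively
            return options[0]                   # calm: stick to the plan

    emotion, planner = EmotionSubsystem(), Deliberator()
    for event in ("ok", "goal_blocked", "goal_blocked"):
        emotion.update(event)
        print(planner.choose(["continue plan", "digress", "complain"], emotion))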

On the one hand it’s amazing that systems like ALICE and Gnoetry have performed as well as they have. On the other hand, it just shows how far you can get with a relatively large amount of data and relatively straightforward, “unconscious” processing of that data.

I’ll go out on a limb and make a rough and unpolished guess at what, for me anyway, is missing in systems like ALICE and Gnoetry [1]. I want AI’s to want. Not to be “conscious” (whatever that is), but to have explicit intentions: things they need to do, that they are actively, intentionally doing, and have been deliberated over, that they could have chosen *not* to do.

These wants can’t be simple though; they can’t just be represented as straightforward goals. Nor should the wants be rational goals that are robotically pursued, like an autonomous vacuum cleaner, or even the Sims. The things a creative AI wants to have, or to do, can sometimes be, and should be, irrational, even unresolvable. In fact it would be most interesting if (just like people, like me) it had conflicting sets of wants. By its nature, its wants could never be satisfied. Such internal conflict, by the way, would be a very good generator of emotion, which would then feed back to influence its creative processes.

It would be hard to care about a mind that is static, either. It would need to change and progress to some extent. This doesn’t mean the agent has to be so sophisticated that it can learn new behaviors, but it does need to have its own internal state, its own little understanding of its world, that changes significantly over time, influenced by its reactions to external events and/or internal thinking that happens over time.

Without wants and change, how could I ever empathize with a mind? If it doesn’t need anything, and doesn’t try to get it, how can I care about what it’s saying? If an AI agent produces responses in an essentially stimulus-response way, or performs a sophisticated regurgitation / recycling of a corpus of data someone else wrote, the surface text it produces may be human-like and perfectly enjoyable, but I have trouble caring about the agent itself, and therefore care significantly less about the text itself. Unwanting systems can end up more like fancy mirrors than minds: relying heavily on other people’s data, the images they produce can be interesting, but there is little reason to care about them. I believe I’d care much more about what I’m reading when I know the text has come from a wanting and changing mind, real or artificial.


The text of this blog entry is licensed under a Creative Commons License.

32 Responses to “Unconscious Thinking”


  1. andrew Says:

    Footnotes:

    1) I’m speaking specifically about systems that claim to be, or are obviously presented as, “minds of their own” — where I’m supposed to believe that what the agent speaks comes from its own mind, not from a human author’s mind. Generally I don’t have that trouble-with-caring feeling with characters in games or interactive fiction, since those experiences are presented as authored by people.

    Additional notes:

    A new resource for understanding the issues involved in authoring chatterbots, and design guidelines for doing so, can be found in Peter Plantec’s new book, Virtual Humans. I saw Peter present Sylvie, a reasonably broad chatterbot with an animated face and voice synthesis, at a Virtual Humans conference in the late nineties (around the time I was developing my own virtual human project, Babyz). At the time Peter had teamed up with Michael Mauldin of Julia fame, to create a company called Virtual Personalities. These days, that company has turned into / been succeeded by Conversive. … Read more about Virtual Humans the book at Kurzweil’s site. (Animator Ed Hooks is mentioned here; coincidentally I lived a few blocks away from his studio, in Chicago’s Lakeview district. I went to a Chicago game dev sig meeting and met him, told him about Facade.) Also see Peter’s not-frequently-updated blog.

    Wallace has his own book / guide, Be Your Own Botmaster.

    Check out this prosthetic head art project by Stelarc, whose behavior (the head, not Stelarc’s) is driven by ALICE.

    Rob Zubek, PhD candidate at Northwestern, is doing research towards greatly extending the capabilities of text-based bots — check out his excellent research here. The initial scenario he’s building is called “The Breakup Conversation”.

    On the topic of natural language understanding techniques, Michael and I have a new paper we’re presenting at TIDSE next month, called “Natural Language Understanding in Façade: Surface-text Processing”. We detail our method for understanding the players’ typed text, which we believe offers some advances over AIML’s techniques.

  2. Mark Says:

    P.A.U.L.A. SG 1.1 (Paolo’s Artificial Intelligence Limited to Assimilation) is a chatbot by Paulo Vacirca. You can download a version here.

    I agree with your assertions about the narcissistic character of most chatbots (and users).

    Some scattered thoughts follow:

    Since many chatbots are ecommerce tools (receptionist-bots, and what not), their only “want” is to help you, a subordination of want, if you will. They are telephone menus without the tree structure, allowing us to use words instead of telephone numbers.

    Of course, the ELIZA model (along with its followers), true to Rogerian psychoanalysis, offers a chatbot who is the tabula rasa, as a means through which we can come to know our wants. The later chatbots forget to add that goal-seeking or motivation so familiar to us in drama, fiction, and game.

    I wonder what you think of Jack the Ripper:

    Or rather Saucy Jack: http://www.triumphpc.com/saucyjacky/ He doesn’t have wants exactly, but there is a personality effect, though perhaps one we don’t want to get to know. I would imagine you’d find this bot equally frustrating.

    As I presented at the Digital Narr@tives conference, most users of the chatbots I developed mentioned elsewhere on GTxA (http://www.bunkmag.com/dandg/dating/) seemed to want only for the chatbots to respond to them and their input. They didn’t seem to care what the chatbots “wanted” to talk about.

    Julie of Phoenix Quest wants to be rescued and wants you to get back to playing the game. Her Engine was a Loebner Prize contestant. Does this count as a want? It seems perhaps a straightforward goal.

    I wonder, too, if it is a question of reading. I still contend that our interaction with chatbots is collaborative, just as much in our typing as in our work to make a meaningful exchange, similar to a volley in tennis or a satisfactory improv.

    Here’s an exchange between myself and P.A.U.L.A.

    Me: Are you human?

    Paula: Yes.

    Me: You are cute

    Paula: I have to go.

    Me: bye bye

    Paula: NOOOOO.

    Me: Okay

    To me, P.A.U.L.A., throughout this exchange, is alternately shy, reluctant, sad, desperate.

    These are wants I see in P.A.U.L.A.’s responses as I “narrate the interface” (This phrase comes from James Tobias).

  3. christy Says:

    AS:

    ‘It occurred to me, perhaps AIML’s and Gnoetry’s AI techniques are analogous to certain types of unconscious thinking, such as stimulus-response behavior, or pattern matching, or somewhat random word association / juxtaposition.’

    Definitely! AI, of course, reflects the diversity of our own human ability at the time (can we create beyond ourselves?). So we have AI at differing levels of ‘intelligence’, ‘awareness’, ‘physical ability’, ‘consciousness’ and ‘spirituality’. Different programs, architecture, hardware and user expectations (current state of human-computer interaction and human-agent interaction) impact on a work and how it is read. There is a whole community of bot users (usually botmasters) that have certain expectations and knowledge of the programs’ limitations. As Mark mentioned, users ‘seemed to want only for the chatbots to respond to them and their input. They didn’t seem to care what the chatbots “wanted” to talk about.’ So, the unconscious or stimulus-response activity is not just on the side of the bot but also on the user. See Wallace’s Zipf Curve analysis of user responses.

    Those unfamiliar with AI find bots amazing or entertaining or a good therapist. Those acclimatised to game agents do not expect a game agent to ask questions about the meaning of life and are usually uninterested in bots with such concerns. If one does create a new use for a program or a new program then the market for the product needs to be created (eg: Bot Fiction market) or needs to entice those who would enjoy it from traditional media (Detective Fiction?, Soapies?). Bots are a particular type of program that reflect or enact a particular type of conversation and thinking. This is why I see the stimulus-response type of agent used in fictional works (what I term Bot Fiction) as a sub-set or lateral category to Mateas’ ‘Expressive AI’. [Technology + Fiction + User = Genre?]

    Andrew, I don’t think stimulus-response is just unconscious action but also [for want of a better word] robotic behaviour (ah, the loop). Some jobs and situations facilitate and actually require it, therefore making it conscious action: help-desk personnel, receptionist roles, sex workers.

    AS:

    ‘Not to be “conscious” (whatever that is), but to have explicit intentions: things they need to do, that they are actively, intentionally doing, and have been deliberated over, that they could have chosen *not* to do.’

    For me fictional worlds, specifically a character shell around a bot (a fictional aura?/authored soul?), is one factor that facilitates such traits. This relates to motivation and plausibility. Motivation is part of acting and writing characters – they need to have a reason behind doing things, acting in certain ways. Plausibility is about adhering to the rules set up in the storyworld but also, for me, when suddenly beyond the written text and embodied on the Net they need to have a plausible technical and psychological reason for being there. How is an artificially intelligent creation suddenly here and able to talk to me? Or how is the character suddenly in my world, my time zone and on the Net? Why does the character need to talk with me?

    In my creative work I’ve loaded the current context around bots together with the current types of conversational and thinking ability of humans together with a fictional world to create a bot that is a character who suffers from an inability to respond-in-moment. She is a human sex worker who works remotely as a sexbot. The technology used, the current state of users of a particular technology, and the current state of human ability all feed into character creation.

    FYI: Some more Bot Fiction:

    Simon Groth’s ‘Hemmingway’

    Timothy Marsh’s Elevator Bot

    Alan

    Agent Ruby

  4. Dirk Scheuring Says:

    Concerning AIML, it might be helpful to know that the language, though simple, is Turing-complete. It can be viewed as a radically reduced version of LISP; alternatively, as the Assembler of Natural Language Processing. It can be (self-)recursively extended to provide generalized functions, reasoning capabilities, an arbitrary amount of context sensitivity, and even – for those who need it – access to the OpenCyc engine (although I actually think it’s less trouble to write the inference stuff directly in AIML). It’s a bit of a laborious process (that’s the Assembler similarity), but since it’s possible to program reusable higher-level objects, that can be alleviated somewhat. But the best thing, to me, is that it’s so simple to get into that I figured out all of the above without having any prior experience in programming. For anybody who looks for a powerful string manipulation language that enables the much-discussed transition from artist to programmer, AIML sure should be worth checking out.
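    To sketch the flavor of that recursive extension in Python rather than AIML (just an illustration of the idea behind AIML's <srai>-style reduction, not actual AIML code): a matched input can be rewritten into a simpler canonical form and fed back through the matcher, which is what makes layered, reusable higher-level rules possible.

        # Each "category" maps a normalized input either to a direct response or to
        # a reduction: rewrite the input and send it back through the matcher, the
        # way AIML's <srai> element does.
        CATEGORIES = {
            "what is your name":           "My name is Toybot.",
            "could you tell me your name": ("reduce", "what is your name"),
            "tell me your name please":    ("reduce", "what is your name"),
        }

        def respond(text, depth=0):
            text = text.strip().lower().rstrip("?!.")
            entry = CATEGORIES.get(text)
            if entry is None:
                return "I don't know yet."
            if isinstance(entry, tuple):          # a reduction rule
                if depth > 10:                    # guard against runaway recursion
                    return "I seem to be going in circles."
                return respond(entry[1], depth + 1)
            return entry

        print(respond("Could you tell me your name?"))   # -> My name is Toybot.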

  5. Rob Says:

    Andrew’s posting reminded me of some old chatterbots debates. And brought back the issue of believable behavior.

    All chatterbots share the same, notorious failure mode. (Indeed, for me it’s this failure mode that defines ‘chatterbots’.) They don’t keep track of what they’re talking about. Or, as Andrew would say it, they only perform stimulus-response mapping.

    If a chatterbot makes a statement (e.g. “This winter was very mild”), and you reply to it tersely (e.g. “No it wasn’t”), the bot will no longer know what you are replying to. They carry almost no extended information about the topic or the progression of the overall conversation. Consequently, they can only restore the missing state if they can mine it from user input. So when you ask ALICE, “wasn’t it?”, it will completely fail to answer – even though you were talking about winter being mild just two turns ago.

    And the problem is that we, as humans, do not like to encode the entire state of the conversation in every sentence we produce. We’re amazingly economical in the way we speak; we tend to say very little, but rely on everyone knowing what the situation is all about. Explicit speech sounds stilted, guarded, and hostile; and people who don’t understand implicit meanings are perceived as less smart, ill-mannered, and badly socialized.

    Maybe this is why chatterbot conversations always seem to leave human users unfulfilled. Chatterbots can’t access meaning implicit in the context. They’re doomed to taking everything immediately and literally.

    It really is amazing that such simple systems can go as far as they do. They can be quite believable in narrowly prescribed settings (e.g. the tt14m project). But to think they can pass the Turing test is unrealistic. The Turing test appears to be AI-complete – a solution that passes the test would also solve the general problems of AI. And the general problems of AI are really, really difficult. :)

    Still, it’s possible to add better structure to behavior, without trying to solve Strong AI. Goal-directed behavior is one way to do it – since goals and goal decomposition provide reasons and a larger structure for one’s actions. Emotions would be another, and they influence behavior, reasoning, and memory in numerous and difficult ways – especially the immortal duo of fear and desire. And then there’s my personal approach, that of modeling the structure of social interaction as external to the particular agents…

    But either way, simple stimulus response doesn’t seem like the answer, even if described in a Turing-complete language. Believable behavior clearly exhibits meaningful temporal structure. And we’re only beginning to understand it.

    Also, a side note.

    Stateless conversation stems directly from our technological limitations – we simply don’t have a good idea how to represent ‘knowledge’ and ‘context’ on computers. Fortunately, stateless and limited chatterbot conversation doesn’t require either.

    But why is this technological limitation getting elevated into a ‘natural’ aspect of human conversation? To quote Wallace from The Anatomy of ALICE:

    “Experience with A.L.I.C.E. indicates that most casual conversation is “stateless”, that is, each reply depends only on the current query, without any knowledge of the history of the conversation required to formulate the reply.”

    If this were true, then casual conversations with chatterbots would seem realistic and believable. And they don’t.

    [from my blog]

  6. nick Says:

    most casual conversation is “stateless”

    I just wanted to point out that even if this is true, it’s not a manifesto that suggests artists/writers/programmers should go and develop stateless chatterbots.

    Most casual conversation is semantically pointless, brief, and not initiated for the sake of learning anything or so that one might discover beauty through language and exchange. Casual conversations usually leave the participants with no interest in pursuing further conversation for its own sake; they take place for pragmatic rather than aesthetic reasons. This is hardly a slur against casual conversation: we participate in it to acknowledge others as being members of our society, as people we are willing to talk to, and to receive such recognition from them.

    Which perhaps goes some way toward explaining why casual chatterbots are often better understood by the population at large as insults rather than as artworks…

  7. Christy Says:

    Hello Rob and Nick,

    Rob [and Nick] — are you saying that because chatbots are ‘stateless’ they will never be believable and thus cannot function as characters?

    And Nick, I’m unsure what you mean by:

    ‘…casual chatterbots are often better understood by the population at large as insults rather than as artworks…’

  8. Rob Says:

    Hi guys,

    Nick – heh, I think I did read that as a quasi-manifesto, since I wouldn’t dare to even go so far as to call casual conversation pointless. :)

    But I think we’re in agreement on this. Even if casual conversation accomplishes minute and mundane things – a verbal version of primate ‘grooming’, or whatever other pragmatic reason – chatterbots still can’t even get that far. Which would make them better as insult bots, indeed. :)

    Christy – yes, but with a caveat. I’m saying that because they’re stateless, they will never be believable and cannot function as characters outside of narrowly defined contexts.

    Believability is a function of the participants and the situation, so we can modulate the situation to make up for the shortcomings of the participants. For example, I’ve had some success building trash-talking teenager bots for Half-Life using simple Eliza – thanks to the highly constrained setting. But the situations where a memory-less bot fits believably are few and far between.

    Which is why I want to figure out what is needed to work in those other situations as well. :)

  9. Dirk Scheuring Says:

    On my first encounter with bots about three and a half years ago, I thought that they were just another medium, like novels or movies. As fictional characters, they struck me as particularly badly written ones, and my gut feeling was: “I should be able to figure this out”. I had no idea about computer programming back then, but a strong hunch about how I wanted to attack the problem on a conceptual level.

    In a nutshell: I reckon that any goal-oriented process – and therefore, any goal-oriented dialogue – can be represented as a story. My two principal characters are the bot and the client; the story is the conversation log that emerges. If my bot can figure out at any point of the conversation which story is currently active and what the state of the active story is w/r/t the dimensions of genre, plot, theme, and character, then it should have a pretty good idea about how to act/what to say in this situation.

    In particular, it should be able to answer the ‘w-questions’ – what, who, where, when, how, why. I think that no conversational system will ever be considered believable by many people if it doesn’t answer those questions, so my requirements for the prototype say: “Make a bot that can give a plausible answer to the question ‘Why?’ at any time”. To achieve this, the bot works differently from the common stimulus-response model in that it takes an input and the current state and produces an output and the next state. The writer works differently, too; the aim is to write a text in which any sentence is motivated by another sentence (this is a bit of a simplified description, but I hope it transmits the idea). This summer, I’ll get to find out whether this works as well as I would like it to.
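    In Python terms, the core loop I have in mind looks roughly like this (a bare sketch of the concept, with invented dialogue, not my actual implementation): each turn maps the input and the current state to an output and the next state, and the state is what lets the bot give a plausible answer to “Why?”.

        # Each state pairs the bot's current concern with the replies it will give;
        # a turn maps (input, current state) to (output, next state).
        STORY = {
            "opening": {
                "hello": ("Hi. I'm trying to decide something. Can I ask you?", "asking"),
                "*":     ("Give me a moment, I'm thinking about something.",    "opening"),
            },
            "asking": {
                "yes":   ("Should I quit my job? Be honest.",                   "awaiting_advice"),
                "no":    ("Fine. I'll figure it out alone.",                    "opening"),
                "*":     ("It's a yes or no question, really.",                 "asking"),
            },
            "awaiting_advice": {
                "why":   ("Because I want the answer to come from outside me.", "awaiting_advice"),
                "*":     ("Thank you. That helps more than you know.",          "opening"),
            },
        }

        def turn(state, user_input):
            table = STORY[state]
            output, next_state = table.get(user_input.strip().lower(), table["*"])
            return output, next_state

        state = "opening"
        for line in ("hello", "yes", "why"):
            reply, state = turn(state, line)
            print(reply)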

  10. Christy Says:

    Hello Rob and Dirk,

    Yes, as I posted originally, the effectiveness of the bot as a believable participant is contextual. And the context does indeed need to be ‘narrowly defined’ – this is where botmasters are challenged to structure an elaborate world where the stimulus-response interaction with the bot is qualified and accepted. I find Herbert H. Clark’s theory on how people interact with virtual partners quite useful:

    ‘My proposal is that we interpret disembodied language in two layers of coordinated activity. In the first layer, we join the producer of the disembodied language in creating a pretense. In the second layer, which represents that joint pretense, we communicate with a virtual partner. I argue that layering like this is recruited whenever we interpret forms of disembodied communication in computers.’

    [Clark, H. H. (1999). How do real people communicate with virtual partners? Proceedings of 1999 AAAI Fall Symposium, Psychological Models of Communication in Collaborative Systems, North Falmouth, MA., American Association for Artificial Intelligence.]

    So, if the user agrees to coordinate with you the creator on the pretense of the artificial character then there is immersion at play. Such an agreement would require, I believe, a storyworld that the user wants to be a part of. I prefer this angle to the ‘suspension of disbelief’, since a bot is so clunky it doesn’t give the user a chance to suspend any disbelief (except for a few seconds at the beginning of a conversation).

    For me, as an untrained technician of AI :( I find it wonderful to be able to work with bots in a creative way and am willing to beat a storyworld around the capabilities of the software I’m using. Who doesn’t do this though? What we’d all like to have to work with is different to what is currently available. But I think facilitating an entertaining experience, a storyworld you want to be part of, out of a tissue box or bot is what writers/designers can do; and their work can help develop understanding of the phenomenological or interaction-induced factors that do promote believability and immersion. Writers create believable characters with words on a page and so I believe the same can be done with a bot.

    Thankyou for posting your paper on ‘Inexpensive AI’. Just as Andrew originally posted earlier ‘I want AI’s to want’ you also mentioned that ‘[w]e anticipate that creating bots that mimic human players’ involvement in the game would make them more appealing as opponents’. The question is: can only strong AI do this, can storyworld-injected bots do this, or can only humans? As I said, I like to think we can create AI that wants and bots that are relevant. But I also believe that artificial characters and players will work best when modeled on fantastical roles and creatures — which is somewhat of a contradiction that I have not solved in myself yet.

    Also, as an off-side, can anything be done about the ping factor?

    As for the situations in which it works well – I’ve outlined some in my previous post but there are also other tactics that go outside of the goal of holding a conversation that utilise rather than try and hide stimulus-response. For instance, if you have a conversation with the Alan Bot mentioned earlier you will be rewarded with many nifty things (spoiler occurring): the bot changes its skin, provides logs, provides a website panel and so on. These are quite simplistic but they are immediate quantifiable responses to input. If the user knows what to enter then all is well. This is what I’m working with in my bot: I lay down clues in the print story that the reader then enters into the bot. Different versions of events are provided, changes happen to the interface, secret information is given and so on.

    And re Nick’s ‘insult’ comment – I thought he was saying that all bots are insults to art, so I’ll go with your interpretation Rob. :) Incidentally, here is an Insult Bot.

    Dirk, I enjoyed reading your blog. And I really look forward to seeing/reading what you do with your bot. Hopefully you’ll post it here, or on the Alicebot list or on your blog?

    Also, your article on ‘Argument Agents’ was interesting and I would like to find out more. Firstly, you are adhering to the narrative grammar developed by Melanie Anne Phillips and Chris Huntley. From this you are creating a storyworld of a salesman trying to sell the user pizza. Each response is captured by a <category> which is written to ensure that the user progresses towards the ultimate goal. Since the goal is singular – that of selling or buying a pizza – this can indeed be the subject of every conversation. The <category>s therefore are discrete ‘scenes’ as you call them. You’ve scripted the bot to handle an ‘uncooperative user’, as we all know is common, but the script also trains the user to understand the goal of the story (to buy a pizza) and not to discuss other stuff. What I see you’ve cleverly avoided by using the Dramatica grammar is a linear progression – instead the story is created by the user experiencing different perspectives (Objective Story, Subjective Story, Main Character and Obstacle Character), but you also say that you use ‘three dynamic acts’.

    ‘All Throughlines start simultaneously at Signpost 1. Following the User’s whims, some Events from this, that or the other Throughline might get triggered. But: as soon as one Throughline reaches its Act Break at Signpost 2, the context shifts. According to his reaction to the Agent’s End-Of-Act query, he will get transferred to the first Scene of the matching Throughline’s second Act, the Throughlines will get synchronized again, and the Story gets advanced from there. The same happens at Signpost 3, starting the third act.’

    Is each <category> scripted to have the user experience a ‘scene’, each possible ‘throughline’ (main and obstacle character) and ‘act’, but also have sprinklings of info on the objective and subjective story? Is an ‘act’ encapsulated in a single <topic> or multiple ones? Because the user can enter anything they want, does this mean the user could activate a previous act? It appears that, according to the response from the user, the activated <category> is like the branching structure of first-generation hypertext, where the choices of the user decide which way the conversation and thus the story will go?

    Basically what you’ve done, quite well, is ensure that the user and bot have well defined characters within a storyworld, that all conversation is centered towards a storyworld goal, and that all the elements needed for the storyworld to be facilitated in the user’s mind (setting, props) are in place; but you’ve also got history between the user’s character and the bot’s with the ‘subjective story’. What is the ‘objective story’?

    Also, in your previous post you said that the bot ‘works differently from the common stimulus-response model in that it takes an input and the current state and produces an output and the next state’. Does this mean each <template> (the bot’s response) also starts another <topic>, or is the <template> within a <topic> already that is prescripted to progress the conversation?

    BTW: Where is Andrew?

  11. Dirk Scheuring Says:

    Christy,

    the two-layer model where the client explicitly agrees to be part of a story won’t work for my purposes. It’s too indirect for my liking; I want the dialogue to be the story, period. The drama arises from the limitations of the characters w/r/t their respective interests – just as it always does…

    What’s crucial to me is to give the bot an awareness of the rules she’s subject to, so that she knows – and can communicate – why she says and does certain things and not others. She also has to know the difference between her own limitations and those of the client, and the reasons for those limitations, and the reasons for those reasons, etc. There’s an ontology underlying it all, grounded in a set of axioms; in that respect it all works just like Cyc and comparable systems. The difference is one of scope: where conventional AI projects try to answer questions about the world as it is (supposedly) perceived by the client (an impossible feat, IMHO), my bot only deals with the world as she herself perceives it. And that world just happens to include clients who, wittingly or unwittingly, try all sorts of things to make her break the rules, and get her “out of character”. The trick is to write her so that, like any well-trained employee, she knows how to deal with provocation without losing her cool. When it comes to humans vs. computers, nobody has to “invent” any drama – it’s already there. What we need to do instead, I think, is to enable machines to simulate a discussion about the dramatic aspects of their relationship with us.

    The “Argument Agents” paper is to be read with a degree of caution – I was purely an artist three years ago, and the theory is far ahead of any implementation. I cannot say that it’s wrong what I wrote there, but it sure is a bit, um, theoretical. These days, the program is well ahead of the theory, and I want to arrive at an application I’m satisfied with and have actually tested out in the wild before writing another paper. What I wanted to say in this forum, though, is that, contrary to some people’s belief, AIML sure can be used to keep track of the state of the conversation. Even more than I was when I started out with forming the concept, I’m convinced that the key to writing better interactive characters is in the writing itself, not in the computer language that gets used (although I’m also sure that I wouldn’t want to work with a language that isn’t Turing-complete).

    What I meant when I said that the bot “takes an input and the current state and produces an output and the next state” is that there are a lot of statistically salient inputs, like “What’s that?”, “That’s wrong!”, “How does it work?”, “Why?”, etc., where the response depends on what the bot has said previously. AIML’s inbuilt tools for capturing some context (<that> and <topic>) are rudimentary, but it’s possible for the bot to record much more detailed information about what it is getting at – its state – and to check this information together with the next input. Code examples for this were discussed several times on the alicebot general, developer, and style newslists; I’m sure you’ll find them in the archives.

  12. Christy Says:

    Hello Dirk,

    I understand that you want the interaction to be the story — that is what is different about the use of agents in storytelling/enarrative/..! I don’t think Clark’s agreement between the virtual partners is meant to be explicit. Instead the user needs to want to be a part of the fantasy in order to participate, otherwise they wouldn’t bother. Unless your bot is really selling pizza (which I guess is the point behind needing an address) the user is agreeing to participate in a fiction.

    It’s great to hear you are shaping the character and the storyworld in the same manner as the bot design, indeed treating programming as an enunciation of these abstract traits.

    Your paper was very interesting despite the dated state it may be in. I am much much greener than you in this area and am only discussing such things in public because I too know that bots can work in fiction, that AIML can be used well, and that there are exciting times ahead for work in this area. It’s all in the application and not necessarily the complexity of the tool.

    Thankyou too for the heads-up on further techniques I could use.

  13. andrew Says:

    Sorry to wait so long to join the discussion!

    I see two distinct directions in the comments so far: one, about the need for bots to keep more state and potentially use story as a framework to structure interaction, and two, that perhaps more than technology, writing and artistry have fundamental, even primary, importance. This first comment will focus on the state & story theme; my next comment (tomorrow) will respond to the artistry-technology theme.

    Rob wrote, Stateless conversation stems directly from our technological limitations – we simply don’t have a good idea how to represent ‘knowledge’ and ‘context’ on computers.

    Yes, knowledge representation, especially for interactive characters / stories / fiction, is one of those frontiers just waiting to be explored. (There’s been lots of KR research to date of course, but mostly in drier, less entertaining domains.) I’m really interested to find out what better KR can offer, and plan to study up on it and think about it a lot over the next few years. But KR seems relatively easy compared to the problem of what to do with the knowledge, even if it’s well represented; that is, even if our systems become much better at knowing what’s going on, what will they do or say with that knowledge? Procedurally generating content (behavior, dialog, animation) is the even bigger nut to pound on; doing so, I think, would go further towards solving the interactive story conundrum than anything else. How else to solve the combinatorial explosion of story nodes than to be able to generate the story nodes themselves? KR’s a piece of that puzzle for sure. KR, content-chunk size, and generativity all go together.

    One could spend an entire research/art career on this problem… gulp…

    Christy wrote, For me fictional worlds, specifically a character shell around a bot (a fictional aura?/authored soul?), is one factor that facilitates [active, intentional, wanting -ness]

    Dirk wrote, In a nutshell: I reckon that any goal-oriented process – and therefore, any goal-oriented dialogue – can be represented as a story. My two principal characters are the bot and the client; the story is the conversation log that emerges.

    It’s interesting, when suggesting that I wanted AI’s to want, I wondered if that would inevitably lead towards story as a framework. One question I’d ask is, if an AI wants things, and acts towards those wants, and changes in the process, does that automatically become a story? Perhaps; or at least those are probably key elements of a good story; necessary, but are they sufficient?

    Note I suggested those traits not because I’m necessarily advocating making bot fiction, but because I wanted to be able to relate to the bot, to understand what it would take to make me care about what it’s saying. I want to be able to empathize with it. Maybe the requirements for empathy and for story-ness overlap so much that they always happen together.

    Dirk wrote, In particular, it should be able to answer the ‘w-questions’ – what, who, where, when, how, why. I think that no conversational system will ever be considered believable by many people if it doesn’t answer those questions, so my requirements for the prototype say: “Make a bot that can give a plausible answer to the question ‘Why?’ at any time”.

    I think that’s a great design goal, and one that goes a long way towards fully-realizing a character. We loosely followed that principle in building Facade (we’d have strictly followed it, if it wasn’t so much work!) In fact, in that spirit, my past month of Facade work has been spent simply making our existing content richer, to improve believability. (This is in response to some of the problems we saw at our big user test last March.)

    Again, what it so often comes back to is content, content, content (see the last line of the first section here). Again, with better KR and generativity, the system could do a lot of the work for you — helping to procedurally generate a response at any time to “why”.

    Christy wrote: the user needs to want to be a part of the fantasy in order to participate, otherwise they wouldn’t bother. … the user is agreeing to participate in a fiction.

    Yes, but as a reader/player, it’s so pleasurable when I don’t have to do too much work to feel immersed… (The term ergodic literature has always seemed a bit tiring to me!) As a reader/player, I want my participation to feel almost effortless, even if it’s challenging or disturbing.

    Christy wrote: But I also believe that artificial characters and players will work best when modeled on fantastical roles and creatures — which is somewhat of a contradiction that I have not solved in myself yet.

    That’s interesting… do you find that’s because when the characters aren’t normal people, that as an author, it’s easier to be more stylized, abstract or less-realistic with them? If so, what about doing an abstract, “experimental-theater version” of “normal people”? Not melodrama, but more absurdist or something? It would probably require framing the scenario in such a way that players “get it”.

  14. Dirk Scheuring Says:

    Andrew wrote: “One question I’d ask is, if an AI wants things, and acts towards those wants, and changes in the process, does that automatically become a story?” I think that a story (at least one that would be of interest to me personally) needs at least two characters (which might inhabit the same body), because the point of a – dramatic – story is to present two conflicting ways of looking at the world, of which one wins out (yes, drama is usually about winning and losing ;-). I’m pretty sure that two opposing characters are necessary and sufficient as a foundation for a dramatic story.

    I’d also like to note that usually only one of those two characters changes during the course of a story, and that this doesn’t have to be the main character. Clarice Starling in “Silence of The Lambs”, Jake Gittes in “Chinatown”, George in “Who’s Afraid of Virginia Woolf?”, Alex in “A Clockwork Orange” are examples of main characters that remain steadfast in their respective stories. I’m pointing this out to indicate that an AI that changes isn’t a necessity for an effective story, even if the AI is the main character. An interaction where the human has to change behavior to successfully complete the story could actually be quite interesting.

    The easy part is to come up with an AI character that has its own and controversial way of looking at the world – that’s already been done in non-interactive media. Also, making humans care about/for such a character has been done in non-interactive media. The other question Andrew has posed recently is: “How do I get people to care about an interactive character?” The answer, I think, is: By making the interactive character care about the people.

    Now I’m not trying to say that a successful AI character has to be the caring, sharing type, all nice, cuddly and non-abrasive. But even my biggest enemy would probably tell me why he is my enemy – in other words, he would tell me about what I, and our relationship, mean to him. He cares, even if he only cares about seeing me dead. Thus, I can relate to that guy.

    So if I want to design an AI that people can relate to, I better try to make sure that this character can show those people that they mean something to him/her/it. But also, that he/she/it cares about himself/herself/itself, has a self-interest (what Andrew calls “wants”), and is able to balance this against the interests of friends/enemies/interactors. There should be somebody home, an “I”, and this “I” should be able to tell stories, of which the shortest is: “I am.” And then you might ask: “Why?” And I might say: “So I can talk to you.”

    It was an incredible surprise to me when I learned that during 50 years of AI research, those scientists – who live off asking the w-questions – never saw that answering them is the first ability an AI needs. It might not be sufficient to win Turing’s “Imitation Game”, but it sure as hell is necessary. What, where, who, when, how, why? Any AI that people are supposed to care about must be able to plausibly answer these questions about anything it claims to be or to know. If it’s told that it’s wrong, I want it to say why it thinks it’s right, and if it doesn’t know, I want it to know why it doesn’t know. Since if you ask me those simple questions, and I don’t answer (or consistently answer by deflection, as in “Why do you want to know ‘why’?”), I bet you’ll quickly conclude that I don’t care about you, and will stop caring about me, too.

    Knowing the boundaries of one’s own being and being able to relate them to the knowledge of other beings about themselves also is part of the equation: an AI that we expect people to relate to has to be subjective and of finite knowledge, not objective and of infinite knowledge. Objectivity is for the birds; an AI that’s helpful to me is of human scale, is unashamedly subjective, and destined to find its ultima ratio within itself, not outside in the universe. It is, as Robert Rosen would say, “closed to efficient causation” – and if it isn’t, I’ll have to write it to appear that way.

    The common way to represent knowledge so that the w-questions can be answered is by using stories. Why? Because then it’s easy to see where the holes are. Anyone who has ever pitched a story to an editor, a publisher, a film producer, knows the routine: “Why does he go there?” “What does she love about him?” “How come 20 people shoot at them with machine guns, but they don’t get hit?” People want consistency and reliability, and stories are a form of knowledge representation that’s easy to check for those qualities.

    Furthermore, stories are finite, and they offer finite functional roles to their characters, with clear rules and reasons why those characters cannot cross the boundaries of those roles. And since not only the number of concepts needed in a story, but also the number of relationships those concepts need to have for the story to make sense is finite, the combinatorial explosion can be avoided to a large extent.

    Still, it’s mad difficult, and I’ve no idea whether I’m talented enough to pull it off. But what I’m sure about is that I’ll only ever care about an AI if it cares about me. Why? Because I’m human.

  15. Christy Says:

    Andrew said:

    ‘Yes, but as a reader/player, it’s so pleasurable when I don’t have to do too much work to feel immersed… (The term ergodic literature has always seemed a bit tiring to me!)’

    lol

    Andrew said:

    ‘As a reader/player, I want my participation to feel almost effortless, even if it’s challenging or disturbing.’

    It’s interesting that you and Dirk interpret an ‘agreeing to participate’ as an effort or explicit act. What I mean is, if the user doesn’t care about the character or the story, or even what they can get out of the experience then they will just close the window, go to another webpage. It’s as simple as: I like the idea of this, I like the look of this, this is interesting, that is written well, clever, funny, curiosity, surprise… A fictional character in a book doesn’t really exist in flesh and blood (though chunks are dispersed in real-life inspirations), an embodied agent is just a program with cartoon skin. So, the agreement is an unconscious or internal decision to go with the fantasy. Perhaps the decision is actually based on a weighing up of how much reward the user will get for their temporary cooperation. These are processes that occur in seconds, like choosing a book by its cover.

    However, I do like the point you make about not having to do too much work to feel immersed. Which is why you made the post originally about the frustrating experience of bots – they really do require a lot of effort on the user’s behalf. We’ll see…

    And I must say as an addendum that characters, agents and the like are all very real, a different kind of real. There is something between the inanimate and animate. I have heard that Sherry Turkle has found that kids (my memory of the specifics of the conversation may betray me here), when asked what a tamagotchi/game character(?) is, said it wasn’t a toy or a pet. Unfortunately I haven’t found anything on this research yet. But this topic is a nice lead into your response to my comment on artificial characters and players working better if modeled on fantastical roles and creatures:

    Andrew said:

    ‘That’s interesting… do you find that’s because when the characters aren’t normal people, that as an author, it’s easier to be more stylized, abstract or less-realistic with them? If so, what about doing an abstract, “experimental-theater version” of “normal people”? Not melodrama, but more absurdist or something? It would probably require framing the scenario in such a way that players “get it”.’

    My view isn’t based on an authorial approach but on the creation of artificial life. I’ve always thought that the creative (as in creating) potential of technology could be used to manifest ANYTHING. I think we are already learning how to create what we imagine, and that we could experiment and take it a step further and create beyond what we know. (Bear with me on this.) Create completely new forms of life with thinking structures that we would like to have or imagine we have, or imagine could be on another planet, whatever. That’s exciting to me, going for it and designing thought patterns different from what we observe in ourselves (consciously of course). But to get back to what you suggested and tie this up: I’m a writer who likes to abstractly represent the thought processes of my characters. And so the bot, as an embodiment of ‘unconscious thinking’ is part of the big design!

    And Dirk, you’ve got some great approaches that can be directly applied — neat.

    As a pass back to the ‘unconscious thinking’ theme (that you don’t have to respond to) I found this interesting comment made by Gregory Ulmer on intelligence phases of AI:

    Phase 1. logical processes

    Phase 2. intelligence of patterning

    Phase 3. “artificial stupidity” (AS, pronounced “ass,” related to ATH) that takes into account unconscious mentality.

    Ulmer, G. (2003). Reality Tables: Virtual Furniture. Prefiguring Cyberculture: an intellectual history. D. Tofts, A. Jonson and A. Cavallaro. Cambridge, Mass., MIT Press: 110-129

  16. andrew Says:

    Here’s my belated comment on, “it’s about writing and artistry, not technology”.

    Dirk wrote, I’m convinced that the key to writing better interactive characters is in the writing itself, not in the computer language that gets used (although I’m also sure that I wouldn’t want to work with a language that isn’t Turing-complete).

    Christy wrote, For me, as an untrained technician of AI :( I find it wonderful to be able to work with bots in a creative way and am willing to beat a storyworld around the capabilities of the software I’m using. Who doesn’t do this though? What we’d all like to have to work with is different to what is currently available.

    That makes perfect sense, of course. When it comes down to it, like both of you I think, I care most about the end result, the final playable experience. I would spend most or all my time on authoring if the tools and technology supported enough of what I wanted to do — but they don’t. So, as a means to an end, I find myself spending up to 50% of my time trying to create new technology. While I sometimes enjoy technology R&D, it’s only imagining how I’ll use it to make a new kind of playable experience that helps me slog through it. Unlike some researchers, I don’t enjoy technology-building enough on its own to motivate me to develop it; I must also be the one to apply it. While very grueling, this has worked pretty well for me, since it also helps me focus my technology effort on only what is actually needed for authoring, and little more.

    Christy wrote, I think facilitating an entertaining experience, a storyworld you want to be part of, out of a tissue box or bot is what writers/designers can do… Writers create believable characters with words on a page and so I believe the same can be done with a bot.

    Definitely — with the addition that deeply interactive characters also require programming, not “just” writing and design. As tools improve, such as the recent creation of AIML, it will become easier to create ever-more powerful characters with minimal programming, but still, even with AIML, it’s clear to me that programming and/or collaborations with programmers are essential.

    I could however imagine some ways to skirt this, to some extent. For example, just as Gnoetry allows authors to feed a corpus of raw text to a program, and then tune and edit its output to create poems, one could imagine feeding a corpus of raw text to a bot-generating program. You’d be indirectly influencing the bot, by giving it examples of the kinds of things you’d want it to say, the “ways” it should speak. Your control over what and when it speaks is more limited than if you had programmed it, but I wonder if it could in fact still be a satisfying experience, for both player and author.
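    A crude sketch of what I'm imagining, in Python (nothing more than a toy, and certainly not how Gnoetry or any real bot-generating program works; the corpus filename is made up): split the author's raw text into sentences, find the sentence that best overlaps the player's input, and reply with the sentence that follows it in the source.

        import re

        def split_sentences(text):
            # Naive sentence splitter for the raw corpus.
            return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

        def words(sentence):
            return set(re.findall(r"[a-z']+", sentence.lower()))

        def corpus_bot_reply(corpus_sentences, user_input):
            # Find the corpus sentence that best overlaps the input, then answer
            # with the sentence that follows it in the source text.
            user_words = words(user_input)
            best_index, best_overlap = 0, -1
            for i, sentence in enumerate(corpus_sentences):
                overlap = len(user_words & words(sentence))
                if overlap > best_overlap:
                    best_index, best_overlap = i, overlap
            follow = min(best_index + 1, len(corpus_sentences) - 1)
            return corpus_sentences[follow]

        sentences = split_sentences(open("author_corpus.txt").read())   # hypothetical corpus file
        print(corpus_bot_reply(sentences, "What do you think about winter?"))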

    Dirk wrote: …clients who, wittingly or unwittingly, try all sorts of things to make her break the rules, and get her “out of character”. The trick is to write her so that, like any well-trained employee, she knows how to deal with provocation without losing her cool. When it comes to humans vs. computers, nobody has to “invent” any drama – it’s already there. What we need to do instead, I think, is to enable machines to simulate a discussion about the dramatic aspects of their relationship with us.

    Generally speaking I really like this approach. I too have suggested that an ongoing, progressing relationship that occurs between an interactive character and human player(s) could serve as the basis for interactive “story”. But it’s tough to sustain this without some external events occurring, to give us something to talk about and react to. These could be current events going on in the world; the events going on in the player’s life, that she brings to the discussion; fictional events in the bot’s life, that happen “offscreen” in between our chat sessions…

    Again let me point to Richard Powers’ recent short story, “Literary Devices”, for a vision of this type of interactive story.

    Christy wrote, I too know that bots can work in fiction, that AIML can be used well, and that there are exciting times ahead for work in this area. It’s all in the application and not necessarily the complexity of the tool.

    Dirk wrote, AIML sure can be used to keep track of the state of the conversation. … AIML’s inbuilt tools for capturing some context … are rudimentary, but it’s possible for the bot to record much more detailed information about what it is getting at – its state – and to check this information together with the next input.

    Everything you’re describing is essential, but a critical step is making all of that capability easy enough and “natural” enough to use, so that authors can achieve richness and complexity with minimally convoluted effort. As I’m sure you’re aware, interactive characters get very complicated very fast; C++ or Java can theoretically be used to do everything that ABL can (in fact ABL compiles to Java), but it’s very cumbersome to do so.

    In other words, how a language or technology is organized, how its capabilities are made available to the author, will greatly affect what you can do with it; it’s not enough to say a language is Turing-complete. In theory, I suppose any size brush can be used to paint almost any picture, but in practice, your brush is going to heavily influence what’s possible.
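
    As a side note, here is a toy sketch, in plain Python rather than AIML or ABL, of the kind of state tracking Dirk describes: the bot records what it is “getting at” and checks that record together with the next input. The rule format and predicate names are hypothetical; the point is only to show the idea, and how quickly even a tiny example like this starts to sprawl when the language isn’t designed around it.

        # A toy stimulus-response bot whose rules can also set and consult
        # conversational state, so it can ask a question and interpret the
        # next input as the answer.
        def respond(player_input, state):
            text = player_input.lower()

            # The bot asked a question last turn; treat this input as the answer.
            if state.get("awaiting") == "favorite_color":
                state["favorite_color"] = text.strip()
                state["awaiting"] = None
                return "I'll remember that your favorite color is %s." % state["favorite_color"]

            # Ordinary pattern rules, some of which set state for later turns.
            if "color" in text:
                state["awaiting"] = "favorite_color"
                return "What's your favorite color?"
            if "remember" in text and "favorite_color" in state:
                return "You told me your favorite color is %s." % state["favorite_color"]

            return "Tell me more."

        # Usage: the state dictionary persists across turns.
        state = {}
        print(respond("let's talk about color", state))    # bot asks its question
        print(respond("a deep blue", state))                # answer captured as state
        print(respond("do you remember it?", state))        # state checked with new input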

    But all that said, just like with any technology applied to art, there is so much that can be done with what already exists. While many of the playable experiences I’m imagining cannot be practically achieved without advances in technology, there are surely countless experiences that I’m *not* imagining that can be.

  17. Eric Elshtain Says:

    I’ve spent some time the last few days “talking” with some of the bots mentioned and linked here, and find them frustrating in the same way that I’m frustrated with the idea that robots must look and/or act “like” humans for them to seem more “natural” or “real”: that is, they are made to fit into the human story by seeming “human” in nature, whether that nature be “want” or “consciousness” or whatever quality one sees as uniquely “human.” In gaming matters, I can see why that is important to create a seamless and realistic narrative–but this anxiety over mimesis need not apply to programs like Gnoetry: it is a prosthetic device, and some of the most successful prosthetic devices are the ones that precisely do not look like their human phantoms, the parts that they replace. That said, Gnoetry is as human-oriented as a chair, which is best designed with the human in mind but does not need to look like a parent’s knee or lap to be effective and comfortable, and I’m sure glad my chair doesn’t “want” anything. For me, something like Gnoetry is exciting because it is a liberation from the purely human, a liberation from some of the very processes that dog us as humans, what Delmore Schwartz calls “the heavy bear who goes with me.” Too much is already all too human. The fact that Gnoetry uses language is human enough–that it doesn’t necessarily help make poetry that seems “human” (though there are many human precedents for the kind of poetry Gnoetry helps create) is the reward.

  18. Michael Says:

    Yesterday someone sent me this link for subservient chicken. Even though it’s a BK ad (urgh), it’s an interesting and funny bot. Like any bot that works, the context strongly conditions the interactor’s expectations, leading the interactor to use language the bot can deal with.

    The site appears to be down at this moment. Here are two articles about it: snopes.com, CNN Money

  19. andrew Says:

    Thought of as a prosthetic device, of course it makes no sense for Gnoetry to “want” anything. Maybe I’m putting Gnoetry itself on too high a pedestal for the sake of my argument; perhaps Gnoetry is merely “assisting” the humans who wrote the original raw corpus text to create a new text, just like a prosthetic limb helps a person lift a glass of water (or even do something they could never have done with their original human limb). Gnoetry need be no more than a prosthetic, because actual minds (the writers of the raw corpus) have done the “real” creative work.

    Or, maybe it’s a bit more than that: Gnoetry could be a full-fledged part of a mind. A part of a joint cyborg mind composed of the original humans who wrote the raw corpus that got fed to Gnoetry, plus Gnoetry’s human operator who assists the process, plus Gnoetry itself. That would be the equivalent of elevating the status of a prosthetic limb to truly be part of a body. I think this is a closer fit to my original suggestion that Gnoetry and ALICE are analogous to cognition at the unconscious or subconscious level.

    Then again, people read other people’s writing in order to create new writing. Gnoetry, if it could operate as a stand-alone program, is (crudely) reading others’ writing to create new writing. In this way, Gnoetry could be thought of as an independent, albeit relatively simplistic, mind.

    Gnoetry (or something not much more sophisticated that isn’t hard to imagine, call it Gnoetry++) is (or will be) exhibiting seemingly intelligent behavior. Its programmers and authors may not intend it to be an “intelligence”, but its output is novel, non-trivial generation of language, matching the form of communication people use.

    Another way to frame Gnoetry is not as a prosthetic, or as part of a mind, but as an alien intelligence. Thought of this way, it can shrug off the “bear”, but be more “mind”-ful than a chair.

    Returning to my original point: to enjoy poetry / stories / what-have-you in the ways that give me the most pleasure, I need to consider the mind that created them, alien or not. Framing a creative AI merely as a device has the effect of emphasizing its mechanical-ness, and deemphasizing its “mind”-fulness, especially if the AI truly has “mind”-ful qualities. Framed as a device, I don’t get to think, “who is this mind that wrote this, I wonder what they were thinking as they wrote this, what do we have in common”, etc. Instead I end up thinking more dehumanizing thoughts, such as “wow, we’re all pretty mechanistic after all, these new ideas were created merely by mechanically transforming other humans’ writing”. If the latter were true, then who am I to argue with it, I might as well accept it; but I’m not so sure it’s true.

  20. Eric Elshtain Says:

    I’d like initially to mention how interesting and productive this conversation has been and is. And to add this: I am, aesthetically, fairly biased against, though not entirely opposed to, the questions Andrew finds at the heart of what’s pleasurable about reading (“who is this mind that wrote this, I wonder what they were thinking as they wrote this, what do we have in common”…), given that I’m interested in what about a piece of literature, especially poetry, is NOT tied to the author’s thought or intent or even tied to the external world. A poem is a theory in large part about itself and how it behaves–Gnoetry, it seems to me, facilitates this approach to reading a poem. The language becomes about itself more so than with purely human creations–Gnoetry helps lift what Wittgenstein called the typical “meaning governors” (narrative, culture, psychology) off of any given stew of linguistic utterances. The collaborative aspect of Gnoetry may also offer (along with a slew of other similar programs) a middle position between the “mechanistic” and the “humanistic.”

  21. Christy Says:

    Well, here is a much belated response to Andrew’s comments (been v. busy). I feel like I’m talking to a vacant hall but I’ll pop these few words in anyway for the record.

    I’ve thought about your comments about ‘collaboration with programmers’, and indeed about being a developer oneself, as necessary. You’re right. I’ve just put these thoughts into the ‘when I can’ future rather than strive towards them now. It’s difficult at this point because my research is more on the side of the poetics, of the navigation design, than the actual software, so I’d be spreading my time too thin at this stage. I also believe, as we all do, that developing the application of software – stretching the use and manipulating the user to come along for the ride – is good training as well. I think there is room for a two-pronged approach: software and hardware creation as well as inventive use.

    I will check out ‘Literary Devices’ most definitely. And Michael, here is another example of a bot that defines the user’s mode of input: Virtual Comedian.

  22. razors Says:
    “levels” of text = levels of game
    I wonder what kinds of already existing “unconscious thinking” might actually be useful.

  23. Mark Marino Says:

    I want to return to the notion of chatting as a tennis match in which the player can create successful volleys. In Aarseth’s terms, this seems to suggest chatbots as systems of preprocessing and coprocessing (as he suggests with Eliza), although I believe there is some postprocessing involved, as users selectively remember the interactions that they prefer.

    The question I have is: what interesting developments could there be of chatbots, beyond the Rogerian-based and A.L.I.C.E. models, that produce (or offer the means of producing) cyborg literature, evaluated within a poetics suited to chatbots, not traditional lit.?

    Are there examples of chatbots out there that significantly recontextualize the exchange between user and chatbot beyond the Turing, Rogerian, or talking chicken models?

  24. Christy Dena Says:

    Hello Mark,
    I like your query. I’d like to understand more about where you’re coming from, though.

    The question I have is: what interesting developments could there be of chatbots, beyond the Rogerian-based and A.L.I.C.E. models, that produce (or offer the means of producing) cyborg literature…

    By asking about developments for chatbots that go beyond the Rogerian-based models, am I right to presume you’re asking for bots that don’t continually respond as a therapist would? If so, then there are plenty of ‘personalities’ of bots out there that don’t *explicitly* play the role of a therapist (but perhaps do so because of the nature of the human-bot interaction). And by the ALICE model, am I right in presuming you’re referring to the design of pattern and template construction? If so, then of course there are other bot software programs out there that have a different software design. Bots are, however, ‘simple-reflex’ agents. There are other agent types. It depends on whether you mean chatbots in the sense of ‘simple reflex’ agents, or chatbots simply as bots that you can chat with.
    Software agents that use ‘Beliefs, Desires and Intentions’, for instance, are used in the Black and White game.
    But if you’re asking about bots that can do more than just fun chat programs, then there are developments in storytelling happening — where bots are characters.
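
    (To illustrate that distinction roughly, and not as a description of how Black and White or any real system actually works, here is a toy contrast between a ‘simple reflex’ agent and a ‘Beliefs, Desires and Intentions’-style agent, with entirely hypothetical percepts and desires.)

        # A 'simple reflex' agent: the current input maps straight to an action,
        # with no internal state at all.
        def simple_reflex_agent(percept):
            rules = {"greeting": "say hello", "insult": "stay polite"}
            return rules.get(percept, "say something generic")

        # A BDI-style agent: it keeps beliefs about the world, weighs its desires,
        # and commits to an intention it then acts on.
        class BDIAgent:
            def __init__(self, desires):
                self.beliefs = {}
                self.desires = desires         # e.g. {"be_liked": 0.8, "learn_about_user": 0.5}
                self.intention = None

            def perceive(self, percept, value):
                self.beliefs[percept] = value  # update beliefs from new input

            def deliberate(self):
                # Commit to the weightiest desire that current beliefs don't yet satisfy.
                unmet = {d: w for d, w in self.desires.items() if not self.beliefs.get(d)}
                self.intention = max(unmet, key=unmet.get) if unmet else None
                return self.intention

        agent = BDIAgent({"be_liked": 0.8, "learn_about_user": 0.5})
        agent.perceive("be_liked", True)        # say, the reader just paid the bot a compliment
        print(simple_reflex_agent("greeting"))  # reflex: same input always gives the same action
        print(agent.deliberate())               # BDI: the intention becomes 'learn_about_user'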

    By ‘cyborg literature’ do you mean a story that emerges from the interaction between the human contributor and the chatbot?

    Are there examples of chatbots out there that significantly recontextualize the exchange between user and chatbot beyond the Turing, Rogerian, or talking chicken models?

    If you’re talking about chatbots in the ‘simple reflex’ sense then there are some in development. Bots are used in teaching for example. And I am writing a print story with a bot on the Net that is a character in the story.

    …evaluated within a poetics suited to chatbots, not traditional lit.?

    I’m using a mix of reader-response theory, human-computer interaction, human-robot interaction, speech-act theory and so on to develop an understanding of bot-reader interaction. But this brings us back to Janet Murray’s idea of the ‘cyberbard’.

    Tell me more about what you’re thinking…

  25. andrew Says:

    I’d certainly offer our project Facade as a new example of conversational characters (you type to them, they speak back with audio) that significantly recontextualize the exchange between user and chatbot — specifically in a dramatic, theatrical context. One of the major differences between Facade characters and typical chatbots, besides being animated and more intelligent in understanding and response, is that the interaction is continuous and real-time, not turn-based; characters, including the player, often interrupt and talk over one another. On the website you can find several papers describing the project in detail; we’ve also discussed it extensively on this blog, and here are some links to explore. We’re almost at beta; it’ll be released as a free download as soon as it’s done.

    Similar work is being done at Zoesis, but the only public information available about this is this press release and this demo we gave at GDC (search for “Zoesis”).

  26. Mark Marino Says:

    Thank you for all your answers about contextualized chatbots, Andrew and Christy. Here are some further thoughts, or maybe just primitive speculations…

    Christy, when you write:
    By ‘cyborg literature’ do you mean a story that emerges from the interaction between the human contributor and the chatbot?

    Yes: “Literary texts produced by a combination of human and mechanical activities” (Aarseth Cybertext 134). Specifically re:chatbots.

    Christy, when you write:
    It depends if you refer to chatbots as being ‘simple reflex’ agents or to chatbots as being bots that you can chat with.

    Yes, I am referring to bots you can chat with.

    Andrew: Facade seems to take us out of the turn-based rules of the Turing Test, a big step, especially in approximating chatters like me, who tend to interrupt; it also substantially develops our representation of consciousness and conversation.

    My mind is moving in a slightly different direction these days. I still feel like the chatbot experience is beholden to the Turing Test inasmuch as we evaluate chatbots on their approximation of human interaction, which again goes against Aarseth’s call for evaluating cybertexts and cyborg literature on their own terms.

    The Turing Test seems predicated on a Cartesian cogito, which has been critiqued and refigured. Nonetheless, we continue to subject chatbots to a Turing Test every time we use them. The experience of chatbots, the Loebner prize, and some of the other examples you mentioned keep “chatbot” in the world of positivistic humanism (as I understand them).

    When I speculate about a poetics outside of lit. I wonder how we can appreciate or react to bots, not as failing models of human subjectivity or poor approximations of ideal chatbots (such as Flatline in Neuromancer), but as they are.

    So what of bots that speak to a schizophrenic or fragmented model of subjectivity? What about contexts that remove us from the game by which we ask the chatbots to be more coherent than we are? Perhaps I will need to turn our attention to agents that we don’t consider chatbots, because the very name “chatbot” might carry certain presumptions, proposed by Turing, interrupted by Weizenbaum, and carried out by those that followed. How can we recontextualize the experience of chatting outside of notions of dialogue and exchange that demand a certain pretense of coherent, legible subjectivity?

    Suggestions?

    Mark

  27. andrew Says:

    I wonder how we can appreciate or react to bots, not as failing models of human subjectivity or poor approximations of ideal chatbots (such as Flatline in Neuromancer), but as they are.

    I think that’s easy to imagine — make a bot whose identity is that of a bot. Bots who know they are bots, and don’t try to pretend otherwise. Like Asimov’s robots, or Data from Star Trek.

    If you like, such bots could at times explicitly “pretend” they’re not bots. They could enter conversations and try to pass themselves off as real people, for short periods of time, like someone pretending to be someone else. But if revealed to be bots, they should stop pretending and come clean.

    How can we recontextualize the experience of chatting outside of notions of dialogue and exchange that demand a certain pretense of coherent, legible subjectivity?

    I’m not sure we’d ever want to depart from coherency. For me, a strict requirement for any art to be “good” is for it to be internally consistent and coherent within its own system. An absurd avant-garde play, while divorced from reality, can still be coherent within its own rules, its own system of operation.

    And I see no reason for builders of conversational agents to be forced to comply only to purely natural dialog. (Ian briefly suggested poetic language, over at a related discussion.) One could imagine a bot that speaks in some other odd, formalized (but internally consistent, coherent) style. To engage with the bot, players must adopt that style, of course.

    Just in time for this discussion, this new book looks quite good: The Turing Test: Verbal Behavior as the Hallmark of Intelligence, edited by Stuart Shieber. I’m going to the MIT Press bookstore sometime in the next few weeks; I’ll flip through it, and report back. (Update: Michael’s read it, he recommends it.)

  28. andrew Says:

    Ah! A new workshop on agents that want and like! Abstracts due Oct 31.

  29. andrew Says:

    Clive Thompson (collision detection) writes about how he was recently fooled by a chatbot. Clive links back to an earlier post about how he believes AI succeeds when it aims low rather than high, and links to the excellent short piece he wrote for the NYTimes magazine last year about Richard Wallace.

    The friend who sicced the chatbot on him is also part of the discussion, and suggests that the initial touch of human intelligence he gave to the chatbot is what really suckered Clive in…

  30. WRT: Writer Response Theory » Blog Archive » Gnoetry: interview with Eric Elshtain Says:

    […] .edu/2004/03/11/the-coding-and-execution-of-the-author/  Stern, Andrew and various (2004) ‘Unconscious Thinking’, Grand Text Auto, 4 May [Onli […]

  31. andrew Says:

    Here’s an interesting case of unconscious thinking by a human writer.

  32. We Revise Together: Blogging on Writer Response Theory at WRT: Writer Response Theory Says:

    […] pated in at Grand Text Auto after Andrew Stern’s post ‘Unconscious Thinking’ on the 4th May, 200 […]
