February 1, 2008

EP 2.3: Revisiting the Eliza effect

by Noah Wardrip-Fruin · 6:00 am

Being a teenager, after my initial interaction with Eliza/Doctor, I did the natural thing: I started to play with it. (Software, of course, doesn’t need to be a game to inspire play.) I don’t remember the exact exchange, but I know that I soon discovered the reason Murray doesn’t go far into talking about ongoing interaction with Eliza/Doctor, just as Suchman and Weizenbaum don’t.5 If they did, it would complicate their discussions. This is because — while the initial experience of Eliza/Doctor can create the surface impression of an incredibly complex internal system — sustained interaction with the system, the verbal back-and-forth, invites play . . . and linguistic play with Eliza/Doctor quickly begins to destroy the illusion. In other words, precisely the open-ended textual interaction that helped foster the illusion of internal complexity and intelligence enables play that draws attention to the system’s rote simplicity, its distance from human interaction. Before long, many exchanges begin to look like this:

You are going to repeat what I say in the form of a question
WHAT MAKES YOU THINK I AM GOING TO REPEAT WHAT YOU SAY IN THE FORM OF A QUESTION
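The mechanism behind this kind of reply can be sketched in a few lines. The sketch below is a deliberately simplified illustration, not Weizenbaum's original (which used ranked keywords and decomposition rules in MAD-SLIP); the rule, the reflection table, and the fallback response are all hypothetical, but the single hard-coded pattern is enough to reproduce the exchange above:

```python
import re

# Illustrative sketch of one Eliza-style transformation rule.
# (Hypothetical simplification; not Weizenbaum's actual rule set.)

# First- to second-person reflection, applied word by word.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    """Swap pronouns so the input can be echoed back at the speaker."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    """Match one hard-coded pattern and echo it back as a question."""
    m = re.match(r"you are (.*)", statement, re.IGNORECASE)
    if m:
        return f"WHAT MAKES YOU THINK I AM {reflect(m.group(1)).upper()}"
    return "TELL ME MORE"

print(respond("You are going to repeat what I say in the form of a question"))
# prints "WHAT MAKES YOU THINK I AM GOING TO REPEAT WHAT YOU SAY
#         IN THE FORM OF A QUESTION"
```

The entire "intelligence" of the response is a regular-expression match, a pronoun swap, and a template, which is precisely the rote simplicity that sustained play exposes.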

Sherry Turkle, in The Second Self (1984), notes this phenomenon. She writes, “As one becomes experienced with the ways of Eliza, one can direct one’s remarks either to ‘help’ the program make seemingly pertinent responses or to provoke nonsense” (39). Turkle’s book is based on interviews and observations — and she found that some of her subjects worked to keep the illusion going, while others sought to maximize Eliza’s nonsense.6 Both strategies, however, depended on coming to understand something of the internal processes at work and shaping surface interaction accordingly. Even working to maintain the illusion required a type of seeing past it, something which those who discuss the Eliza effect rarely acknowledge.

The Eliza breakdown

From my point of view, what Turkle describes above points toward a further lesson of Garfinkel’s yes/no therapy experiment. For Suchman, this experiment demonstrates the importance of ethnomethodology, and the documentary method of interpretation, for understanding Eliza/Doctor and human-computer interaction.7 And certainly it is essential to understand that Eliza/Doctor succeeds, to the extent it does, because it plays on the interpretive expectations brought to each interaction by audience members. But for my purposes here Garfinkel’s experiment also serves to demonstrate something rather different: the Eliza effect can be shielded from breakdown by severely restricting interaction. The experiment allowed the subjects to maintain the illusion that something much more complex was going on inside the system (a human considering their problems seriously, and answering questions thoughtfully, rather than random yes/no answers) because the scope of possible responses was so limited. If it had been expanded only slightly — say, to random choice between the responses available in a “magic 8-ball” — almost any period of sustained interaction would have shattered the illusion through too many inappropriate responses.
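The contrast can be caricatured in a few lines of code. This is a hypothetical sketch (the function names and the 8-ball phrases are illustrative, not Garfinkel's protocol): with only two possible answers, no reply to a yes/no question is ever grammatically out of place, while even a modestly larger response set starts producing visible non sequiturs:

```python
import random

# Hypothetical sketch: Garfinkel's two-answer "counselor" versus a
# magic 8-ball style responder with a slightly larger repertoire.

YES_NO = ["Yes", "No"]

EIGHT_BALL = [
    "Yes", "No", "Ask again later", "Outlook not so good",
    "Signs point to yes", "Concentrate and ask again",
]

def restricted_counselor(question, rng=random):
    # Either answer fits any yes/no question, so the illusion of a
    # thoughtful interlocutor is hard to break.
    return rng.choice(YES_NO)

def eight_ball(question, rng=random):
    # Replies like "Outlook not so good" to "Should I drop the course
    # or not?" quickly read as inappropriate, revealing the mechanism.
    return rng.choice(EIGHT_BALL)
```

The point is not the code but the ratio: the smaller the response set relative to the interpretive work the audience is willing to do, the longer the illusion survives.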

When breakdown in the Eliza effect occurs, its shape is often determined by that of the underlying processes. If the output is of a legible form, the audience can then begin to develop a model of the processes. This is what Turkle notes in those interacting with Eliza/Doctor: from the shape of the breakdown they begin to understand something of the processes of the system — and then employ that knowledge to help maintain or further compromise the illusion.

In this context it is interesting to note that most systems of control that are meant to appear intelligent have extremely restricted methods of interaction. In some cases the reasons for this are quite obvious. If the public were allowed playful interaction with software that identifies possible targets for financial surveillance, the shape of the underlying system would become relatively apparent, making it possible to “game” the system. At the same time, this restricted interaction also serves to maintain the Eliza effect for software that is not nearly as intelligent as the public has been asked to believe.

Further, within a rather different community, this choice — between severely restricted interaction and the boom/bust of illusion followed by breakdown — presents no good options to those with an interest in creating digital fictions.8 So while some have argued that it is best to capitalize on the Eliza effect, depending on temporary illusion and the willful suspension of disbelief to carry the day, most digital fiction authors employ a different approach: exposing important elements of the structures of their processes to the audience from the outset. This allows for interaction that matches the process employed and avoids the Eliza illusion and breakdown. However, as I will discuss next, the most common of these approaches suffer from limitations of their own.

Finally, I should note that some authors — such as Jeremy Douglass (2007) — argue that breakdown can be an interesting mode for digital fictions. And certainly breakdowns can be fascinating. On a linguistic level, for example, we’re drawn to study every form of breakdown from occasional slips of the tongue to hemorrhage-induced aphasia.

But what breakdown can do — in the case of Eliza/Doctor, linguistic slips, and neurological problems alike — is give us some insight into the shape of the underlying system processes. This points to the reason that I still talk with people online (even if I no longer dial in to a BBS to do so) but I no longer play with Eliza in my spare time: a system prone to breakdown is only as interesting as the shape of the processes that the breakdowns partially reveal. And, as demonstrated earlier in this chapter, the Eliza system processes are mostly a relatively uninteresting set of substitutions. We can do better.

Notes

5. To be fair, at the time of Weizenbaum’s initial observations, almost no one could experience ongoing interaction with Eliza/Doctor, due to the limited availability of computing resources. As Weizenbaum notes, “since the subject cannot probe the true limits of Eliza’s capabilities (he has, after all, only a limited time to play with it, and it is constantly getting new material from him), he cannot help but attribute more power to it than it actually has” (1976, 191).

6. Turkle notes that “Some people embark on an all-out effort to ‘psych out’ the program, to understand its structure in order to trick it and expose it as a ‘mere machine.’ Many more do the opposite…. They didn’t ask questions that they knew would ‘confuse’ the program, that would make it ‘talk nonsense’ ” (40). Turkle attributes this to a desire to “maintain the illusion that Eliza was able to respond to them.” However, it is also entirely in line with Murray’s interpretation of Eliza as a media experience, with the audience shaping their interaction to help maintain the willful suspension of disbelief.

7. Suchman argues that Garfinkel’s experiment lends support to Weizenbaum’s view that the feeling of intelligence in conversations with Eliza/Doctor comes from the work of the audience. Further, she argues that the strongly situated understandings of the students (they interpreted the random series of yes/no answers based on assumed context) are a challenge not only to the strong structure-oriented assumptions of the social sciences but also to those of cognitive science.

8. Except for that limited number of fictions which might want to explore one of these effects.