May 31, 2008
Provocation by Program: Imagining a Next-Revolution Eliza
By Nick Montfort and Andrew Stern
(This is the text of the talk we gave at the ELO Visionary Landscapes conference just now. Mark Marino already has a reply online.)
Introduction
In the 1960s, Eliza, and specifically that system running the famous Doctor script to impersonate a psychotherapist, prompted conversations and controversies about anthropomorphic concepts of the computer, artificial intelligence and natural language understanding, user interfaces, and even psychotherapy. Decades later, Janet Murray hailed the system as the first electronic literature work, saying it was at that point still the most important one. All this was the result of a rather small amount of code that lacked multimedia elements, contained very little pre-written text, and was developed by a single person, Joseph Weizenbaum.
We begin by assuming that computation and literary art are inherently very powerful. That is, we assume it is not essential to have recourse to networked communication, massive knowledge bases, or even graphics capabilities to develop a provocative, affecting project that inquires about important issues. In thinking about such a project, we are seeking an antidote to today’s ever larger and more complex computer applications — sixty-hour game quests within expansive virtual worlds, mashups of intricate Web technologies, and massively feature-bloated operating systems. A small yet powerful and surprising computer program would be both pleasurable and provocative because of its simplicity and clean concept. So we simply assume, rather than trying to prove, that while more elaborate systems may be interesting in some ways, a new system on the scale of Eliza can still have the sort of broad impact today that Weizenbaum’s computer character did more than forty years ago. Given that, we ask: what specific qualities would this system have?
Of course, there are plenty of programs that are more or less directly descended from Eliza. These include conversational characters (such as A.L.I.C.E.) as well as task-oriented dialogue systems (such as the automated Amtrak agent Julie that you can call and speak to right now at 1-800-USA-RAIL). There are also digital artworks that specifically refer to and rework Weizenbaum’s concept, such as Adrianne Wortzel’s Eliza Redux. We are not imagining programs along these lines as we think about a possible Eliza for the 2000s or 2010s. Instead, we will focus on the qualities that made Eliza a provocative and influential piece of literary art in its time and context. We are not thinking of a new chatterbot; we are interested in imagining a system that would introduce a new form, like that of the chatterbot, and that would inspire reworking and reimagining by artists, as Eliza did.
Six Important Aspects of Eliza
While different writers, artists, and programmers might identify different aspects of Eliza as central, we believe that we have identified six that would be shared by a similarly high-impact program today. Some of these may be fairly obvious; others seem to us to be much less evident and much less frequently discussed, if they have been discussed at all. The important properties of Eliza that we have identified are:
- Engaging deeply with language.
- Dealing with a fundamental issue, concern, anxiety, or question about computing and technology.
- Being interactive and immediate — impressing the interactor in an instant.
- Being understandable after more effort is applied and the program is explored further.
- Being general to different computer platforms and easily ported.
- Being process-intensive — driven by computation rather than data.
Next, we discuss each of these aspects of Eliza in a bit more detail, explaining why we think each of these was important to Eliza’s effect:
- Engaging deeply with language: Email communication, instant messaging, and blogging have become pervasive, so language has hardly been left behind on today’s networked computer. Yet there are few programs that perform process-intensive understanding, manipulation, or generation of language. Words are certainly part of contemporary computing, but almost invariably, they are human-authored ones that computers simply transport to each other through a series of tubes. Eliza did something more by using computation to model the way some types of people in certain roles might engage in dialogue. Eliza was a computational model of the way a person produces and understands language in conversation. It was not a complete or perfect model, of course, but it was a model, and so it could be understood, modified, critiqued, and argued over — and all of these things have indeed happened. We believe that it was important that Eliza modeled linguistic behavior rather than something else, such as the way people walk around cities. Language is socially engaged (as is walking around in a city), but it is also expressive of and implicated in thought, with semantic, syntactic, and pragmatic dimensions. While we don’t think an Eliza-like program today would have to be a chatterbot, we do think that it should work on language and provoke us to think through and about language.
- Dealing with a fundamental concern about computing and technology: The computation done by the program will offer itself as a conversation piece, allowing discussion of what computers are essentially like or what they can and will achieve. Or, that computation will be discussed as being a proper use of computing power or an abuse of it, given our particular views about the role of computing in our culture, just as Eliza was discussed by Weizenbaum himself as a negative example of how people and computers can relate. Today’s concerns about computing are not the same as those in the 1960s, of course, so a program as interesting as Eliza would engage a different set of issues and questions. In the 1960s, there was anxiety about computers replacing people and taking away jobs and questions about how intelligent and human-like computers could become. Today, thanks to the nature of the information technology industry and the capabilities of the global information infrastructure, there are more vocal concerns about outsourcing than about automation. We can add to the register of fears fairly recent worries about privacy, security, fraud, and identity management; the anxiety about video games and violence; and the fear of Internet addiction.
- Having an effect on the interactor in an instant: This means that an Eliza-like system would invite people to interact in a mode that is familiar, without requiring tutorials, in-game training, or initiation into specific conventions. (That means, painful as it is to admit, that the next Eliza will not be a command-line interactive fiction program where you type in compass directions to move around.) Extensive sessions will also not be required to understand what is going on, although an Eliza-like program should be able to sustain them. As Weizenbaum wrote: “DOCTOR, as ELIZA playing psychiatrist came to be known, soon became famous around the Massachusetts Institute of Technology, where it first came into existence, mainly because it was an easy program to demonstrate. Most other programs could not vividly demonstrate the information-processing power of a computer to visitors who did not already have some specialized knowledge, say, of some branch of mathematics. DOCTOR, on the other hand, could be appreciated on some level by anyone. Its power as a demonstration vehicle was further enhanced by the fact that the visitor could actually participate in its operation.” What follows from this straightforwardness of concept and ease of interaction is that the immediate effect of interacting with the program is itself easy to discuss and describe. People should be able to understand the basics of what an Eliza-like program does from things like screenshots, transcripts, and conversational explanations, just as people can understand the basic idea behind Eliza in this way.
- Yielding to understanding: The most famous transcripts of interaction with Eliza, besides the canonical transcript that Weizenbaum included in his paper and book, are ones in which some rube attempts to hit on the bot, making a fool out of himself because he cannot understand that he is talking to a computer program. The prevalence of such transcripts suggests that Eliza is only good for being fooled or making fun of those who are fooled. But, even though certain very dense people do not figure out how Eliza works, people can learn about how the program models a psychotherapist by interacting with it. Code is available, detailed documentation of the program is available, but a great deal can be understood simply by interacting with the program and discerning how it reflects questions back and how it fails to reply appropriately. As Noah Wardrip-Fruin noted, “what breakdown can do — in the case of Eliza/Doctor, linguistic slips, and neurological problems alike — is give us some insight into the shape of the underlying system processes.” Having understood something about how Eliza works as a model of an English speaker, we can then dismiss the model as an uninteresting trick, if we like, or we can critique it, elaborate it, modify it, and further our understanding of human conversation by doing so.
- Being easily ported and even capable of being implemented based on detailed documentation: Eliza was a phenomenon in the late 1960s because Weizenbaum made it work at MIT; it remained a phenomenon into the 1970s, 1980s, 1990s, and 2000s only because people were able to implement their own versions in LISP, BASIC, C, Java, on the Web, in Emacs, and in many other contexts. Its popularity beyond MIT in the 1960s was not based on some proto-open-source sort of code distribution, but was because it was possible to reimplement Eliza based on the academic paper about it, as Weizenbaum explained: “Soon copies of DOCTOR, constructed on the basis of my published description of it, began appearing at other institutions in the United States. The program became nationally known and even, in certain circles, a national plaything.” The quality of portability, based either on code or on specification and the just-mentioned quality of understandability, is one that many famous computer programs of different sorts have had, including Hammurabi, Adventure, Tetris, and Doom — the same sort of programs that have been identified as benchmarks by Christy Dena, Jeremy Douglass, and Mark Marino.
- Being process-intensive, that is, relying on computation rather than art assets, pre-written texts, and other stores of information: As Chris Crawford explained, “Process intensity is the degree to which a program emphasizes processes instead of data.” For instance, imagine a chess program that can play against a user and that displays a two-dimensional representation of a chess board as its interface. If you were to improve the system’s ability to play chess by causing it to think harder about each move, the program would be made more process intensive. If, instead, you were to add new chessboards and chess pieces of different designs so that the game can be skinned in different ways, the balance would shift toward data and the program would become less process intensive. In a process intensive electronic literature program, the function of the program, not the “content” that it displays, will be at the core of its literary effect and the provocation it makes. If someone changes the content slightly without changing the core functioning of the program, the effect of this modified program will remain largely the same. This aspect relates to the previous one: a program that is highly dependent upon maintaining text and typography perfectly will not survive being ported and reimplemented.
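The kind of reflection behavior described above can be suggested in miniature. What follows is our own minimal sketch in the spirit of Weizenbaum's keyword-and-transformation approach; the particular patterns and response templates are invented for illustration and are not taken from the original program:

```python
import re

# Map first-person forms to second-person forms, so that the program
# can reflect the interactor's words back (illustrative, not complete).
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few (pattern, response template) rules; {0} receives the
# reflected fragment captured by the pattern.
RULES = [
    (re.compile(r".*\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r".*\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns and verb forms in a captured fragment."""
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def respond(sentence: str) -> str:
    """Answer with the first matching rule, or a content-free default."""
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # neutral Rogerian fallback
```

Even a toy like this exhibits the quality we care about: the responses are produced by process (matching and reflecting) rather than retrieved from a store of pre-written dialogue, and its failures are as legible as its successes.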
Other Important Systems and the Qualities They Share
We now look to several other computer systems, small-scale and large-scale, that have become part of the zeitgeist, and we consider which of these qualities they have and which they lack. These high-impact systems are not restricted to literary ones; in fact, we did not select any system that was mainly framed as literary.
- Conway’s Game of Life. This is a simple set of rules that define the behavior of a cellular automaton and that can easily be implemented on the computer, even by a beginning programmer. The Game of Life demonstrates how highly complex behavior can arise from a simple, deterministic, and completely understandable system. This system shares five of the six important aspects of Eliza — the only one it lacks is the first, engagement with language. (2,3,4,5,6)
- Tetris and Doom. These two famous computer games succeed in three of the six ways that Eliza did, but for different reasons. Tetris was not visually spectacular; it was just easy to learn and lots of fun to play. Doom, on the other hand, immediately impressed with its interactive three-dimensional world, so that even people who didn’t want to play it could appreciate it. Both were very portable, but Tetris was simple enough to be reimplemented while Doom’s appearance on different platforms was due, in part, to the source code being made available. (3,5,6)
- SimCity. Will Wright’s game offered a model of a city where policies could be set by the player and development would happen according to rules that could be understood but were not explicitly stated. The game showed that computer simulations, even those that ran on home computers and were sold as entertainment software, could teach us about complexity and engage with issues of public policy and urban planning. Although proprietary and complex, it is to some extent portable, as Maxis’s own reissues of the game and an open source clone demonstrate. (2,4,~5,6)
- Deep Blue. While Deep Blue was technically an interactive computer system, and while it did make an impression, only the world chess champion was allowed to interact with it. The esoteric system harkened back to the mainframe days and was deployed in a context that suggested Cold War polarities in addition to the opposition of man against machine. Deep Blue actually dealt mostly with the concerns and anxieties that people had about computing in the 1960s rather than those facing us in the late 1990s, but it dealt very powerfully with concerns that had persisted and continually been articulated over decades. (2,6)
- The Google search engine. Finally we arrive at a system that engages language, showing how something like topicality and similarity of documents can be determined by applying computation to a massive store of data. The impression that effective search makes on the Web surfer is almost instantaneous, even if it takes a while for a complete novice to get any sense of the extent of the Web as it is represented in Google’s index. Even more than Deep Blue, the Google search engine is a massive apparatus sutured together from trade secrets — the company depends on it being non-portable to outsiders. Because this search engine is the most data-driven and because it would not be the same if it worked on something other than the Web, we consider it to be the least process-intensive of the examples we have considered. (1,3,~6)
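The Game of Life, mentioned above as implementable even by a beginning programmer, bears this out: one generation of its rules fits in a few lines. This sketch of ours represents the board as a set of live cell coordinates rather than a fixed grid, which is one of several reasonable choices:

```python
from itertools import product

def step(live: set) -> set:
    """Advance a Game of Life board, given as a set of (x, y) live cells."""
    # Count live neighbors for every cell adjacent to some live cell.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # A cell is born with exactly 3 neighbors; it survives with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}
```

That the entire system is this small, while its behavior remains inexhaustible, is exactly the combination of understandability, portability, and process intensity we have been describing.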
Of course, these systems had some important characteristics that Eliza didn’t — in certain cases they could be sold as computer games, or connected to advertising to form the basis of a profitable business, and so on. It does seem, however, that elements of their success and influence were shared with Eliza. If we were looking to stage a Cold War demonstration or deploy a hit video game, we might consider a different set of attributes, but we believe Eliza’s qualities are particularly worth considering for those of us who are involved with digital writing, with imaginative and poetic uses of language.
Directions for Impactful E-Lit
To conclude, we will identify the types of electronic literary practice that we believe most likely to have Eliza-like impact. All sorts of literary practices are well situated to engage language, and many practices also deal with questions about computing technologies and their situation in society. The remaining four aspects are not shared equally by works coming from different electronic literature practices.
Electronic literature authors often try to invite deep reading and to reward lengthy exploration; perhaps because the involvement of a committed reader is valued over an initial glance, impressing the interactor right away is seldom an important goal. But literary art on the computer can make an immediate impression: While the hypertexts of the Eastgate school offer a great deal to the careful reader, William Gillespie, Scott Rettberg, and Dirk Stratton’s The Unknown grabs and slaps the reader more or less immediately with its humor and metafiction, no matter what page is selected.
In terms of yielding to understanding, some of the art and literary works that are initially the least understandable actually display this quality the most clearly. Jodi’s maze of broken websites can increasingly be seen to display the logic of the Web; Ben Benjamin’s Superbad can be understood as an ecstasy of mid-1990s Web design; the malfunctions of Dan Shiovitz’s Bad Machine give information about the virtual situation as the interactor learns more about them.
Generality and portability are very seldom valued in electronic literature practice, but there are some signs that this is changing. Rob Kendall’s X-Literature prototype is an important step toward abstracting the functioning of hypertext-like works, and, aside from offering preservation benefits, should urge electronic literature authors of all sorts to think about the essentials of how their works function.
The least process intensive electronic literature works are often held up as paradigms, but Eliza is not alone in being computational and working in literary ways. With regard to emphasizing computation over data, computational poetry, interactive fiction, interactive drama, and creative text generation practices are more process intensive, more Eliza-like, and most likely to connect computing and culture in the way that Weizenbaum’s program did.
Of course, many electronic literature authors seek to express an imaginative world or poetic concept in other ways, without modeling aspects of language in an interactive program. Eliza-scale provocation is not a goal shared by everyone. But we hope that our analysis of Eliza, performed from our standpoint as computer literary artists, is nevertheless of interest to certain provocateurs.
May 31st, 2008 at 2:23 pm
> While the hypertexts of the Eastgate school offer a great deal to the careful reader, William Gillespie, Scott Rettberg,
> and Dirk Stratton’s The Unknown grabs and slaps the reader more or less immediately with its humor and metafiction,
> no matter what page is selected.
Though this is perhaps an archetypical ELO conference provocation, I wonder how far we’d want to extend the argument. Let’s face it: humor and metafiction are an odd couple! And do we really want to honor primarily, or exclusively, the grab and slap? If grabbing and slapping on every page is the aesthetic goal of The Unknown, why is The Unknown so long?
It might also bear consideration that the interpretation of Weizenbaum’s literary work you offer in the opening section, while shared by a number of writers including Murray, is directly contrary to the interpretation Weizenbaum himself offered. I mention this not to revive the intentional fallacy (though in this specific case it might have some bearing) but because this circumstance seems as pertinent to the reception and success of ELIZA as do the others you adduce.
Finally, might the true explanation for the ready dissemination of this program lie not so much in ease of portability and documentation as in the trivial nature of the underlying code — and the ubiquitous foundation narrative of the discursive computer, for which ELIZA users had been prepared by decades of literature?
May 31st, 2008 at 3:34 pm
I agree. Many pages of the Unknown have no grabbing, slapping, or even heavy petting. And some of them aren’t even funny. I weep nearly every time I read of Dirk’s death.
May 31st, 2008 at 4:51 pm
Mark raises a good point. Disowning our works once we’re done might be a good move toward pushing the conversation along. I’ve been having some misgivings about the humanistic implications for some of the work we’ve been doing over at Bunk Magazine lately. The Los Wikiless Timespedia seems to have caused people to misunderstand the nature of newspapers.
I think your last point is right on and speaks to Nick and Andrew’s second point of criteria. Of course, I can only imagine that at least some of the literary works we are discussing, from Blade Runner to Galatea 2.2 are in direct dialogue with ELIZA.
Mark, come over to WRT when you are done here. I’ve got some assertions about Storyspace I’d like to run past you.
June 1st, 2008 at 8:10 am
Check out the Chatbot Game, a Web 2.0 approach to building a chatbot: http://chatbotgame.com.
June 1st, 2008 at 10:48 am
Mashups and the network strike back.
June 2nd, 2008 at 12:43 pm
Very interesting article.
There is one other property of Eliza which was also important to its success: the simulated environment was carefully chosen to mask the weaknesses of the simulation. The Rogerian therapist (or a caricature thereof) doesn’t really need to understand what the patient is saying — he just performs a merely syntactic manipulation to turn an assertion into a question.
Facade is another example where the simulated environment was carefully chosen to cover up the simulation’s problems. The characters are neurotic and self-absorbed; they are acutely aware of the modernist worry about a lack of meaning. These aspects of their personalities are perfect for covering up some of the cases where the parser doesn’t quite understand what you are saying!
(NB I do not mean this as a criticism – it is part of the success of Facade that the authors anticipated in advance the inevitable limitations of the technology, and designed a dramatic situation which de-emphasized them).
June 2nd, 2008 at 1:14 pm
Hi Richard, you make great points, as usual!
What you say is certainly true about Eliza (and hopefully Facade too, for that matter).
In terms of this discussion, though, identifying general traits that made Eliza important and high-impact, if we were to name another high-level characteristic, how would we describe the one you mention?
We could call it something very general like “good design” or “cleverly covers technical weaknesses”, but those are universal design maxims, the kinds of traits one would want in any production.
I think the traits we have identified are perhaps a bit higher-level than the particular feature of Eliza you point out?
June 2nd, 2008 at 4:18 pm
Um… good question… perhaps something like “The computational weaknesses of the simulation are mirrored in the psychological weaknesses of the simulated protagonist” ?
June 3rd, 2008 at 10:19 am
Another striking example of this is Emily Short’s Galatea. The NPC character is an animate statue that has only been alive for a few days. These two aspects of the fiction perfectly excuse the fact that computer implementations of natural language understanding lack emotional understanding and real-world knowledge: she lacks emotional understanding because she is a statue, and she lacks real-world knowledge because she was only born yesterday!
In all these cases (Eliza, Facade, Galatea), the inadequacy is both highlighted and also excused. It is made explicit so that we can forget about it.
June 4th, 2008 at 11:13 am
Apropos of little, did anyone catch the Eliza reference on an episode of Terminator: Sarah Connor Chronicles?
June 4th, 2008 at 2:12 pm
I didn’t, but it’s not off topic at all. If you let me in on what the reference is, and if Andrew and I do some sort of for-real publication based on this work, we might even cite this as an example of Eliza’s influence.
June 5th, 2008 at 8:28 am
I do watch and enjoy the show, and I too remember a reference to Eliza in one of the episodes. Part of me thought, okay, yeah, the writers are just trying to show us they’re all knowledgeable-and-shit about computing and AI. But actually you’re right, it probably is a notable sign of Eliza’s influence on today’s popular consciousness.
Of course, one of the things I like most about Terminator: The Sarah Connor Chronicles is the fact that it has a handsome young male character who almost single-handedly invents the most powerful AI in the world in his bedroom, and happens to be named Andrew.
June 5th, 2008 at 9:42 am
Richard, Andrew: I’m not sure I understand how Facade does something similar to Eliza and Galatea in terms of modulating the player’s expectations of the agents’ inadequacies. In fact, isn’t one of the common critiques of the aesthetic effects of the NLP+ABL infrastructure precisely that it fails to do this?
June 5th, 2008 at 10:40 am
As Richard describes, it’s in Grace and Trip’s character design — their neurotic and self-absorbed nature helps players suspend disbelief when the NLP fails. (And of course we write dialog responses to try to cover up, as best we can, the times when the NLP knows it’s failing.)
Sure, there is plenty of complaint that the NLP fails — but there would perhaps have been even more complaint if the characters were supposed to be good listeners, say, friends trying to be supportive of the player’s own problems.
June 5th, 2008 at 11:31 am
The Eliza ref in Terminator was in one of the last episodes — the second last, maybe? — where she and John have a conversation in his room (sorry I can’t be more specific). Basically, she talks to him as though she were an Eliza program — inverting things he says to her, and saying things like “And how do you feel about that?” It was subtle, but there was one key phrase that was such an Eliza mainstay it clinched it for me. (Alas, I’ve forgotten what the phrase was!)
June 5th, 2008 at 1:10 pm
Those of us who have experienced the “hot coffee” sequence in Facade know that it already kicks the shit out of ELIZA. Thanks for that easter egg. It keeps me coming back for more.
June 5th, 2008 at 1:30 pm
Ah, Scott, that explains why you were “banging air” in the AR Facade exhibit at the Beall show.
June 5th, 2008 at 1:40 pm
Teledildonics rock!
June 5th, 2008 at 9:11 pm
So the question remains:
Are these features of context part of the data or part of the processes? I think these comments prove that you can’t have one at the expense of the other. In my book project, I develop a reading of Michael Mateas’ addressing the context as the realm of formal affordances and the code logic as the realm of material affordances. The latter is the set of possible input. The former is the set of input that will make sense given the developing scenario. (You can see some of this in my dissertation).
Again, for me the literary nature of this context and data is one of the barriers to entry into my category of electronic literature. Once that criterion is met (and in light of that context), I can consider the role of the processes. But again, I feel like my position is just an inverted set of values of the one you presented in your paper.
June 6th, 2008 at 2:58 pm
Mark, this seems like an important issue, but I’m not sure what “features of context” you’re talking about. Terminator references? Help me out here?
June 7th, 2008 at 4:12 pm
The features of context would be any aspects of the bot that signal or establish its context. So we’ve been talking a lot about the ways in which bots situate themselves (and their failings). ELIZA as Doctor offers a conversational context of an exchange with a Rogerian therapist. The Prayer bot presents the exchange as prayers offered to a deity. Facade, and my chapter goes into more detail here, lays out a very particular constraint of an encounter with two friends from college at the apartment 10 years after college within the confines of a 15-20 minute two-act tragedy. As you can see with my description — that context is signaled by data, logic, and presentation layers together.
My guess (and my experience from trying to create bots) is that the conceit of this narrative context — scripted into all the data and built into the presentation layer (to give meaning to the logic or processes of the bot) is what’s key to making a satisfying bot. Satisfying to whom? the reception theorist might ask. Satisfying as literature? we might ask. But for now, I’ll just say satisfying to me.
June 8th, 2008 at 8:05 am
I see – you mean that Eliza enacts/impersonates/plays/parodies a Rogerian psychotherapist. I think I would probably call that Eliza’s role or script rather than its context. That seems to me to be part of the “text” of Eliza/Doctor, broadly speaking, not what surrounds and situates the text. On a higher level, Eliza is an interactive computer program rather than, for instance, a character in a play who is a psychotherapist. But I think even at this point we are talking about the form and not about the context.
The context seems to me to be the overall cultural and individual situation in which a person has some idea of what a psychotherapist is. This could come from being in therapy, from popular representations of therapy, or perhaps from being a therapist. And, at the level of form rather than a particular role, there is a context as well. We also have expectations about and understandings of what a computer program is and what a play is.
As for the question, What makes Eliza impersonate a psychotherapist – data or process?, both are clearly important. By making point 6 about process intensity, we certainly didn’t mean to say that data is irrelevant, only that process is more relevant than has been acknowledged and more relevant than it is in most e-lit. Eliza/Doctor has a particular role because the program reflects the interactor’s language back (mainly due to process) but also because the program produces neutral, gentle language (mainly due to data). In contrast to most electronic literature, “content” is not king with Eliza. Modified versions of the program with different response texts, not preserving the original wording of Weizenbaum’s first program, are still “Elizas.” But that isn’t to say that data is irrelevant. If you were to change just the data and have Eliza hurl obscenities or obsessively mention cheese, that would scuttle the program as a psychotherapist impersonator and make it into something else.
June 8th, 2008 at 10:37 am
One important aspect of Eliza, which informs both data and process, is its material context. The teletype interface that Eliza uses already sets a whole range of expectations from the user, including the idea that he or she might be talking to a human agent via a timeshare computing system. I think Nick also once observed (somewhere; where, Nick?) that the program’s slow output via teletype encouraged even more gap-filling than do the modern, immediate screen versions.
What modulates the user’s expectations here is not just the design of the system, but the tight coupling between that design, its material context, and its social/cultural situation. Thus the (possibly apocryphal) story of the VP who mistakenly thought Eliza was a truculent employee, or that of Weizenbaum’s secretary, who eventually asked him to leave the room while she “spoke” to the program, despite knowing full well it was a program.
June 9th, 2008 at 7:25 am
Ian, that’s in my paper “Continuous Paper,” which is online in a few different versions … here’s the ISEA “Continuous Paper,” which Scott graciously presented for me.
Interestingly, a slower pace of response, more imitative of human conversation than modern-day computer reply, can also be found in a system like the Apple II running some interactive fiction program that has to go to disk all the time. But it’s certainly a noticeable feature of systems with a teletype interface.