December 9, 2005


by Andrew Stern · 3:33 am

The ALICE bot developers (Richard Wallace and co.) are making an interesting resource available for purchase: based on 10 years of conversation logs between their award-winning chatterbot and thousands of users, they have compiled a list of the 10,000 most common user inputs to ALICE. (Their bot in fact comprises short responses to 4x that number of inputs.) Further, they’ve abstracted this raw data into the top 10,000 patterns of input, which I’d guess are drawn from the top 30,000 inputs or more. This “Superbot” data, in the form of Excel spreadsheets, can be yours for $999.
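For readers unfamiliar with how ALICE-style bots use such input patterns, here’s a minimal sketch in Python of AIML-style wildcard matching. The patterns and responses below are made up for illustration — this is not ALICE’s actual code or data, just the general pattern-to-response technique the Superbot spreadsheets are meant to feed:

```python
import re

# Hypothetical pattern-to-response table, in the spirit of AIML categories.
# "*" stands for one or more words, as in AIML's wildcard syntax.
patterns = {
    "WHAT IS YOUR NAME": "My name is ALICE.",
    "DO YOU LIKE *": "I don't know if I like it.",
    "MY NAME IS *": "Nice to meet you!",
}

def respond(user_input):
    """Return the response for the first pattern matching the input."""
    # Normalize the way AIML engines typically do: uppercase, no punctuation.
    text = user_input.upper().strip(" ?!.")
    for pattern, response in patterns.items():
        # Turn the wildcard pattern into a regex: "*" matches one or more words.
        regex = "^" + re.escape(pattern).replace(r"\*", r"(.+)") + "$"
        if re.match(regex, text):
            return response
    return "I don't understand."
```

With a table of 10,000 abstracted patterns in place of the three above, you get the broad-but-shallow coverage ALICE is known for — which is exactly why a ranked list of real user inputs is valuable for prioritizing what to cover.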

I actually think that’s a decent value for such data, even if it’s somewhat tied to the general design and interface of ALICE. That is, once you make a conversational agent that can hold deeper conversations than the very broad but shallow ALICE, or an agent with a more focused, less generic domain, such as Grace and Trip in Façade, there’d be all kinds of new user inputs that aren’t on the list. Still, I’m sure many items on this list would be said to most any bot, at least in this early era of overall bot intelligence.

Speaking of bots and what people say to them, I came across the webpage for an intriguing symposium held at Interact 2005, as well as a sequel to be held at CHI 2006: Agent Abuse, the dark side of human computer interaction. Here’s the symposium’s abstract:

The goal of this workshop is to address the darker side of HCI by examining how computers sometimes bring about the expression of negative emotions. In particular, we are interested in the phenomena of human beings abusing computers. Such behavior can take many forms, ranging from the verbal abuse of conversational agents to physically attacking the hardware. In some cases, particularly in the case of embodied conversational agents, there are questions about how the machine should respond to verbal assaults. This workshop is also interested in understanding the psychological underpinnings of negative behavior involving computers. In this regard, we are interested in exploring how HCI factors influence human-to-human abuse in computer mediated communication. The overarching objective of this workshop is to sketch a research agenda on the topic of the misuse and abuse of interactive technologies that will lead to design solutions capable of protecting users and restraining disinhibited behaviors.

Papers from 2005 can be found here, and the 2006 cfp here.

We’ve discussed the abuse of agents on GTxA a couple of times (1, 2); also ALICE several times, including this discussion.