I’ve been thinking about chatterbots, as well as the recent discussion about poetry generation using statistical methods: what these systems do, and what they don’t.
I recently played with and read up on ALICE, a state-of-the-art text-based chatterbot. Primarily authored by Richard Wallace, ALICE has twice won an annual Turing-test-like competition called the Loebner Prize. To create ALICE, Wallace developed AIML, a publicly available language for implementing text-based chatterbots.
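To give a sense of what AIML looks like: a bot's knowledge is a set of categories, each pairing a pattern (matched against normalized user input) with a template (the response). The example below is a generic illustration, not drawn from ALICE's actual knowledge base:

```xml
<category>
  <pattern>WHAT IS YOUR NAME</pattern>
  <template>My name is ALICE.</template>
</category>
```

A conversation is essentially a long series of such stimulus-response lookups, which is part of what makes AIML bots easy to author and extend.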
Gnoetry has been discussed several times here on GTxA, most recently here. From its website, “Gnoetry synthesizes language randomly based on its analysis of existing texts. Any machine-readable text or texts, in any language, can serve as the basis of the Gnoetic process. Gnoetry generates sentences that mimic the local statistical properties of the source texts. This language is filtered subject to additional constraints (syllable counts, rhyming, etc.) to produce a poem.”
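Gnoetry's own code isn't quoted here, but the core idea the description names, generating text that mimics the local statistical properties of a source, can be sketched with a simple bigram model. The function names below are my own, illustrative ones, and real systems add the constraint filtering (syllable counts, rhyme) on top of this:

```python
import random

def build_bigrams(text):
    """Map each word to the list of words that follow it in the source text."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, length, seed=None):
    """Random walk over the bigram model, mimicking local word statistics."""
    word = seed if seed in model else random.choice(list(model))
    line = [word]
    for _ in range(length - 1):
        # Fall back to a random starting word if the current word has no successors.
        word = random.choice(model.get(word, list(model)))
        line.append(word)
    return " ".join(line)

source = "the sun rose and the moon set and the stars wheeled overhead"
model = build_bigrams(source)
print(generate(model, 5))
```

Each generated word is plausible given the one before it, but nothing constrains the line as a whole, which is roughly where the "entertaining but hard to care about" quality discussed below comes from.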
In my experience with them, ALICE and Gnoetry are entertaining at times, sometimes even surprising. They clearly exhibit some intelligence.
But something feels conspicuously missing from these artificial minds. I decided to try to understand: why do I have trouble caring about what they have to say? What precisely would they need to do, beyond or instead of what they currently do, to make me care? (Or is it just me? :-)