February 14, 2008
EP 4.4: AI, Neat and Scruffy
A name that does appear in Weizenbaum’s book, however, is that of Roger Schank, Abelson’s most famous collaborator. When Schank arrived from Stanford to join Abelson at Yale, together they represented the most identifiable center for a particular approach to artificial intelligence: what would later (in the early 1980s) come to be known as the “scruffy” approach.7 Meanwhile, perhaps the most identifiable proponent of what would later be called the “neat” approach, John McCarthy, remained at Stanford.
McCarthy had coined the term “artificial intelligence” in the application for the field-defining workshop he organized at Dartmouth in 1956. Howard Gardner, in his influential reflection on the field, The Mind’s New Science (1985), characterized McCarthy’s neat approach this way: “McCarthy believes that the route to making machines intelligent is through a rigorous formal approach in which the acts that make up intelligence are reduced to a set of logical relationships or axioms that can be expressed precisely in mathematical terms” (154).
This sort of approach lent itself well to problems easily cast in formal and mathematical terms. But the scruffy branch of AI, growing out of fields such as linguistics and psychology, wanted to tackle problems of a different nature. Scruffy AI built systems for tasks as diverse as rephrasing newspaper reports, generating fictions, translating between languages, and (as we have seen) modeling ideological reasoning. To accomplish this, Abelson, Schank, and their collaborators developed an approach quite unlike formal reasoning from first principles. One foundation for their work was Schank’s “conceptual dependency” structure for language-independent semantic representation. Another was the notion of “scripts” (later “cases”), an embryonic form of which could be seen in the calling sequence of the ideology machine’s executive. Both of these will be considered in more detail in the next chapter.
Scruffy AI got attention because it achieved results in areas that seemed much more “real world” than those of other approaches. For comparison’s sake, consider that the MIT AI lab, at the time of Schank’s move to Yale, was celebrating success at building systems that could understand the relationships in stacks of children’s wooden blocks. But scruffy AI was also critiqued — both within and outside the AI field — for its “unscientific” ad-hoc approach. Weizenbaum was unimpressed, in particular, with the conceptual dependency structures underlying many of the projects, writing, “Schank provides no demonstration that his scheme is more than a collection of heuristics that happen to work on specific classes of examples” (199). Whichever side one took in the debate, there can be no doubt that scruffy projects depended on coding large amounts of human knowledge into AI systems — often more than the authors acknowledged, and perhaps much more than they realized.
Ideology revisited
The signs of unacknowledged over-encoding are present at the very roots of scruffy work, as we can see by returning to Abelson’s ideology machine. The system was presented as a structure and set of processes for modeling human ideology generally. It could then be populated with data (concept-predicate pairs, evaluations of the elements, and connections between them) to represent a particular ideology. If Abelson and his collaborators had succeeded in building such a system, the machine’s processes would be ideologically neutral — only the data would carry a particular position.
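To make the data/process distinction concrete, here is a minimal sketch, in Python, of how such ideology data might be represented. The names, types, and structure here are my own illustration for this discussion, not Abelson’s actual encoding:

```python
# Hypothetical sketch of the ideology machine's data layer; all names and
# structures are illustrative inventions, not Abelson's actual encoding.
from dataclasses import dataclass, field

@dataclass
class Element:
    """A concept-predicate pair, plus an evaluation of how the ideology views it."""
    concept: str
    predicate: str
    evaluation: float  # -1.0 (strongly negative) through +1.0 (strongly positive)

@dataclass
class Ideology:
    """Pure data: if the system's processes were neutral, only this would carry a position."""
    elements: dict[str, Element] = field(default_factory=dict)
    # Connections between elements, referenced by id, e.g. ("e1", "leads-to", "e2").
    connections: list[tuple[str, str, str]] = field(default_factory=list)

# In the spirit of the Cold War data discussed earlier (the example is mine):
goldwater = Ideology()
goldwater.elements["e1"] = Element("the Communists", "seek world domination", -0.9)
```

On this picture, swapping in a different ideology should require nothing more than a different set of elements and connections. The next paragraphs show why that picture does not hold.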
As examples cited in the earlier discussion revealed, there certainly is a strong ideological position encoded in the data of what some simply called the “Goldwater machine.” But, returning to the specifics of the system’s operations, one can also see that the same ideology is encoded in its processes. This begins at the center of its operations, with a calling sequence “motivated to dismiss” any statement from a negatively-viewed source, even a statement with which the system’s data agrees. It is also found in the design of the processes for denial and rationalization, which are predicated on a world divided into “good actors” and “bad actors.” Further, in addition to being designed to operate in terms of good and bad, the primary processes for interaction are dedicated to finding routes to deny even the smallest positive action by the bad guys and to seeking means to rationalize away even minimally negative actions by the good guys, on the basis of paranoid fantasies, apologetic reinterpretations, and Pollyannaish misdirections.
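A rough paraphrase of this control flow, again with invented names and drastically simplified logic, might look like the following. Only the behaviors it encodes (source-based dismissal, denial, rationalization) come from the description above; everything else is an assumption for illustration:

```python
# Hypothetical paraphrase of the executive's calling sequence; the names and
# control flow are invented, but each branch corresponds to a behavior
# described above (source-based dismissal, denial, rationalization).

def respond(source_evaluation: float, actor_is_good: bool,
            action_is_positive: bool) -> str:
    # Any statement from a negatively-viewed source is dismissed outright,
    # even one the system's own data would otherwise agree with.
    if source_evaluation < 0:
        return "dismiss"
    # Even the smallest positive action by the "bad actors" is denied.
    if not actor_is_good and action_is_positive:
        return "deny"
    # Even minimally negative actions by the "good actors" are rationalized away.
    if actor_is_good and not action_is_positive:
        return "rationalize"
    return "accept"
```

Notice that the good/bad split is a parameter of the process itself, not something read out of the ideology data — which is precisely the point of the argument that follows.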
This is not a general model of ideology. It is a parody of one very particular type of ideology, one that depends on fear to gain power. As we have seen in more recent U.S. politics, the idea that “you are either with us or against us” is not a feature of every ideology, but rather a view held by a small number of extreme groups. These groups, in fact, tend to demean alternative ideologies as naïve specifically because they do not see the world in terms of good guys and bad guys.
This was just as true at the time of Abelson’s work.8 We can imagine Stevenson being critiqued exactly because his ideology operated by processes rather different from those encoded in the Goldwater machine. As a result of this difference, it would be impossible to create a “Stevenson machine” simply by providing a different set of concept-predicate pairs.
Of course, even Goldwater was a more subtle thinker than Abelson’s system would allow. The system is, like Eliza/Doctor, a caricature, a parody. And just as Eliza’s model of conversation (transforming the most recent statement to form a response) only found significant use through the Doctor script (transforming in the manner of a Rogerian therapist), so the ideology machine was closely wedded to the particular type of ideology used in its “demonstrational system.”
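To recall how thin Eliza’s underlying conversational model is, a toy version of such statement-transformation might look like this. The two patterns are my own examples, far simpler than Weizenbaum’s actual script, which had many rules and handled pronoun reversal among other things:

```python
import re

# Toy illustration, not Weizenbaum's actual script: the most recent statement
# is transformed by pattern rules to form a Rogerian-therapist-style response.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]

def doctor_reply(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1).rstrip("."))
    return "Please go on."  # default when no rule matches

print(doctor_reply("I am unhappy."))  # -> How long have you been unhappy?
```

Stripped of the Doctor script’s therapist persona, such transformations would read as non sequiturs; the “intelligence” lives in the pairing of process with one carefully chosen domain.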
Another way of putting this is that both Eliza and Abelson’s system, if perhaps unintentionally, provide a critique of their subjects (Rogerian therapy and right-wing Cold War ideology) through a combination of their data and processes. At the foundations of AI in the U.S., we find systems that, upon examination of their components, are nearly impossible to view as straight-faced science. Rather, they are expressive media. And this is a primary reason that they, and the systems that followed in the scruffy branch of AI, remain of interest today.
Encoding large amounts of human knowledge into the design of a system’s data and processes may not have been good cognitive science, but it was a powerful authoring technique. One result was compelling interactive characters, such as Eliza/Doctor and the Goldwater-infused ideology machine. Another result was a set of the most important early experiments in story generation, which the coming chapters will explore.
Notes
7. The terms “neat” and “scruffy” were introduced into AI and cognitive science discourse by Abelson’s 1981 essay, in which he attributes the coinage to “an unnamed but easily guessable colleague”: Schank.
8. A fact that, by the time of his 1973 publication on the system, Abelson would acknowledge: casting it as a system for modeling the ideological reasoning of the “true believer” rather than such reasoning in general. Of course, many true believers may believe in something other than a world divided into good guys and bad guys.