February 13, 2008
EP 4.3: Abelson’s Ideology Machine
Abelson and Carroll’s paper — “Computer Simulation of Individual Belief Systems” (1965) — describes work that Abelson and his students had pursued since the late 1950s, and would continue to pursue into the 1970s. As of their 1965 paper, the “ideology machine” consisted of an approach to belief structures and a number of operations that could be performed on such structures. Sample belief structures from the paper range from common cold war views (“Russia controls Cuba’s subversion of Latin America”) to absurd statements (“Barry Goldwater believes in socialism”) and also include simple facts (“Stevenson ran for President”).
As these examples foreground, this is a system built in the midst of the cold war. The Cuban Missile Crisis, President Kennedy’s assassination, and the Gulf of Tonkin Resolution were all recent events. The world seemed polarized to many and, within the U.S., names like those of Adlai Stevenson and Barry Goldwater did not simply indicate prominent politicians with occasionally differing philosophies. Goldwater, 1964’s Republican nominee for President of the United States, was an emblematic believer in the idea that the world’s polarization was an inevitable result of a struggle between good and evil2 — a position that would be echoed by his ideological descendants, as in Ronald Reagan’s “evil empire” and George W. Bush’s “axis of evil.” Stevenson, the Democratic candidate for President in 1952 and 1956, on the other hand, was emblematic of those with a more nuanced view of world affairs and a belief in the potential of international cooperation — for which he was publicly derided by those with more extreme views.3
In such an environment, the example data that Abelson and Carroll use to illustrate the functioning of their system is clearly highly charged.4 Perhaps for this reason, they remained coy about the exact identity of the individual portrayed in their “demonstrational system,” referring to this person only as “a well-known right-winger.” But by the time of his 1973 publication on the system, Abelson was willing to say directly what was already well-known within the field: Goldwater himself was the model for the ideology used in developing the system.
Interaction with the system consisted of offering the assertion that a particular source (e.g., an individual) has made the claim that a concept (e.g., a particular nation) has the stated relation to a predicate (generally a verb and object). For example, “Stevenson claims Cuba threatens Latin America.” The statement is evaluated, a response is generated, and in some cases the state of the internal system data is altered.
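In modern terms, each such input is a triple. A minimal sketch of the encoding in Python (the tuple layout here is my convention, not the paper’s):

# One input assertion: a source, a concept, and a predicate.
assertion = ("Stevenson", "Cuba", "threatens Latin America")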
Data and credibility
In order to understand how the ideology machine’s system responds to new statements it is necessary to consider two aspects of its construction: first, the way its data is structured; second, the process of determining “credibility” that employs this data.
The basic data of the ideology machine is a set of beliefs stored as concept-predicate pairs. Beyond this, there are two primary ways that data in the system can be structured. One is horizontal, in which a particular concept-predicate pair can become part of a compound predicate for another belief. For example, “Cuba” and “subverts Latin America” are a concept-predicate pair that make up a complete belief. But they can also be joined, in horizontal structuring, to become part of a compound predicate such as “controls Cuba’s subversion of Latin America” which can be joined with the concept “Russia.”
The second structuring mechanism is vertical, in which concepts or predicates can serve as more-specific instances or more-abstract qualities of others. In the example data, the concepts “Stevenson” and “Earl Warren” both have the quality “Liberals” (at a higher level of abstraction), which is an instance of “Left-wingers” (at a yet-higher level), which also has the instance “Administration theorists” (at the same abstraction level as Liberals). The predicates with which the concepts “Liberals” and “Administration theorists” are paired in the assertions “Liberals support anti-colonial policies” and “Administration theorists coddle left-leaning neutral nations” are instances of the more abstract predicate “mistreat U.S. friends abroad.”
Each element, concept or predicate, also carries a number representing the belief system’s evaluation of that element. Certain individuals or actions, for example, may be viewed very positively or negatively — while others are viewed relatively neutrally. These evaluations come into play in combination with the processes of credibility testing.
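To make this concrete, here is a minimal sketch of the data structures in Python. The class and helper names, the numeric evaluation scale, and the string encoding of beliefs and compound predicates are all my own assumptions, not Abelson and Carroll’s notation:

from dataclasses import dataclass, field

@dataclass
class Element:
    """A concept or predicate, with its evaluation and vertical links."""
    name: str
    evaluation: float = 0.0                        # positive, negative, or near-neutral
    instances: list = field(default_factory=list)  # more-specific Elements
    qualities: list = field(default_factory=list)  # more-abstract Elements

    def add_instance(self, inst):
        # Vertical structuring: inst is a more-specific instance of self.
        self.instances.append(inst)
        inst.qualities.append(self)

elements = {}

def element(name, evaluation=0.0):
    elements[name] = Element(name, evaluation)
    return elements[name]

# Vertical structure from the paper's example data (the evaluations are invented).
left_wingers = element("Left-wingers", -0.8)
liberals = element("Liberals", -0.5)
left_wingers.add_instance(liberals)
liberals.add_instance(element("Stevenson", -0.4))
liberals.add_instance(element("Earl Warren", -0.4))

# Beliefs are concept-predicate pairs of names; horizontal structuring lets
# a whole belief become part of a compound predicate for another belief.
beliefs = {
    ("Cuba", "subverts Latin America"),
    ("Russia", "controls (Cuba subverts Latin America)"),
}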
The ideology machine’s credibility testing begins when it is presented with a concept-predicate pair to evaluate. If that pair is already believed — that is, already present in its data — it is automatically credible. Similarly, if the opposite is already believed (the concept paired with the negation of the predicate) the pair is incredible. Assuming neither of these is the case, evaluating credibility is accomplished by movement through the system’s horizontal and vertical memory.
For example, given the pair “Liberals” and “support anti-colonial policies” (with neither it nor its opposite already in memory) the ideology machine will look at all instances of the concept Liberals (such as Stevenson and Warren) to see if they are paired with predicates that are instances of “support anti-colonial policies.” Abelson and Carroll give “Stevenson opposes Portugal on Angola” as an example pair that would lend support to the credibility of “Liberals support anti-colonial policies.” However, this does not establish credibility on its own. For example, other instances of Liberals (e.g., Warren) may not be found connected to instances of the predicate. Using this approach, the ideology machine only finds a pair credible if at least half the instances of the concept are found linked with instances of the predicate.
If this fails, other options are available. For example, the same sort of search may be performed to attempt to establish the credibility of the opposite of the given pair (e.g., “Liberals oppose anti-colonial policies”). More complexly, a search may be performed that also moves up the levels of abstraction, looking at the qualities (rather than just instances) of the concept and predicate given. For example, a more-abstract quality of “Liberals” is “Left-wingers” and a more-abstract quality of “support anti-colonial policies” is “mistreat U.S. friends abroad.” Given that “Administration theorists” is a more-specific instance of “Left-wingers” and “coddle left-leaning neutral nations” is a more-specific instance of “mistreat U.S. friends abroad,” finding this pair in the system’s data would lend support to “Liberals support anti-colonial policies.” In this kind of search, at least half of the more-abstract qualities of the concept must be found credibly related to at least one of the more-abstract qualities of the predicate. As with the instance-only credibility test, if this method fails, a test for its opposite may be performed.
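Continuing the sketch above, the instance-level credibility test might be rendered as follows. The “at least half” threshold is from the paper, but the negation convention and helper names are mine, and the abstraction-climbing search is omitted:

def negate(predicate):
    # Toy negation convention: "P" <-> "not P".
    return predicate[4:] if predicate.startswith("not ") else "not " + predicate

def instances_of(name, elements):
    # More-specific instances of an element, or the element itself if none.
    el = elements.get(name)
    return [e.name for e in el.instances] if el and el.instances else [name]

def credible(concept, predicate, beliefs, elements):
    if (concept, predicate) in beliefs:
        return True                              # already believed
    if (concept, negate(predicate)) in beliefs:
        return False                             # opposite already believed
    # Instance search: at least half the instances of the concept must be
    # paired with at least one instance of the predicate.
    c_insts = instances_of(concept, elements)
    p_insts = instances_of(predicate, elements)
    hits = sum(1 for c in c_insts if any((c, p) in beliefs for p in p_insts))
    return 2 * hits >= len(c_insts)

On failure, the system would then go on to the fallbacks described above: testing the opposite pair, and searching at higher levels of abstraction.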
Denial and rationalization
The foundation of the ideology machine is created by the combination of interconnected structured pairs, evaluations of the elements of those pairs, and the credibility testing processes. Interaction is particularly strongly shaped by the evaluations of elements. Recall that interaction begins with a pair input into the system, made up of a source and a compound predicate (itself constructed as a familiar concept-predicate pair). If a source with a positive evaluation (a “favorable source”) makes the claim that a concept with a positive evaluation (a “good actor”) is connected to a predicate with a positive evaluation (a “good action”) then, as Abelson and Carroll put it, “There is not much left for the system to do . . . except to express joy” (28). This is not entirely true, in that the system also stores this assertion in its belief data. Similarly, when a favorable source claims that a bad actor is engaged in a bad action, the system simply expresses regret and stores the data. But in other cases, the ideology machine engages one of the two primary types of processes that it carries out in response to interaction: denial and rationalization.
If the source of the assertion is viewed negatively, the ideology machine’s executive “calling sequence” is, as Abelson and Carroll put it, “motivated to dismiss the source’s assertion” via denial or rationalization. If the source is viewed positively, the calling sequence has two further options. If a good actor is asserted to be engaged in a bad action, the system will attempt to deny the alleged fact or deny that the assertion was made. On the other hand, if a bad actor is asserted to be engaged in a good action, the system will attempt to rationalize the alleged fact or rationalize the making of the assertion.
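The resulting dispatch can be sketched as a single function over the signs of the three evaluations; the response strings are placeholders of mine for the behavior the paper describes:

def react(source_eval, actor_eval, action_eval):
    """Executive 'calling sequence' for 'source claims actor does action.'"""
    if source_eval < 0:
        return "dismiss the assertion (by denial or rationalization)"
    if actor_eval > 0 and action_eval > 0:
        return "express joy and store the belief"
    if actor_eval < 0 and action_eval < 0:
        return "express regret and store the belief"
    if actor_eval > 0 and action_eval < 0:
        return "deny the alleged fact, or deny that the assertion was made"
    return "rationalize the alleged fact, or rationalize the making of the assertion"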
Of the two approaches, denial is simpler to explain. The ideology machine takes the concept-predicate pair it wishes to deny (“C-P”), constructs its opposite (“C–not P”), and “enters the Credibility Test with the injunction to find C–not P credible if at all possible. Success in this procedure enables the system to deny C-P by means of contrary evidence” (27).
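In the terms of the sketch, denial is simply a biased re-entry into the credibility test, reusing the credible() and negate() helpers above:

def deny(concept, predicate, beliefs, elements):
    # Construct the opposite, C-not P, and try to find it credible; success
    # lets the system deny C-P "by means of contrary evidence."
    return credible(concept, negate(predicate), beliefs, elements)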
The rationalization mechanism, on the other hand, has three methods of dealing with upsetting statements — each of which represents a different way of denying the psychological responsibility of the actor for the action. They are:
1) by assigning prime responsibility for the actor to another actor who controls the original actor; 2) by assuming the original action was an unintended consequence of some other action truly intended by the actor; 3) by assuming that the original action will set other events in motion ultimately leading to a more appropriate outcome. (27)
Like the credibility test, these strategies are implemented as search processes on the belief data. The first, “Find the Prime Mover,” looks for a pair in the data of the form “B controls C” (as in “Red China controls Cambodia”) in which a bad actor controls the good one. The second, “Accidental By-product,” looks for a predicate that can be interposed between the original concept and predicate, of the form “Q can accidentally cause P” (Abelson, 1963, 296, 298). Finally, “Reinterpret Final Goal” takes apart the predicate, looking for an instance in which its result serves as a concept for a pair in which the predicate has the opposite evaluation, compactly expressed as “P may lead to R” (1963, 295–296).5
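These searches might be sketched as follows, again over the string-pair beliefs. The compact patterns follow Abelson, but the helper names, string matching, and evaluation lookups are my simplifications:

def evaluation(name, elements):
    el = elements.get(name)
    return el.evaluation if el else 0.0

def find_prime_mover(concept, beliefs, elements):
    # (1) "B controls C": a negatively evaluated actor controls the original.
    return [b for (b, p) in beliefs
            if p == f"controls {concept}" and evaluation(b, elements) < 0]

def accidental_byproduct(concept, predicate, beliefs):
    # (2) "Q can accidentally cause P": find an action Q the actor truly
    # intended, of which the upsetting action P was an unintended consequence.
    return [q for (c, q) in beliefs
            if c == concept
            and (q, f"can accidentally cause {predicate}") in beliefs]

def reinterpret_final_goal(result, beliefs, elements):
    # (3) "P may lead to R": take the result of the action as a concept
    # (e.g. "silly results") and look for a pair whose predicate carries
    # the opposite, positive evaluation ("enrich my understanding").
    return [p for (c, p) in beliefs
            if c == result and evaluation(p, elements) > 0]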
As the system employs these mechanisms successfully, the results of the successes are stored and can become resources for future rationalizations. The next time that rationalization is needed, a search is undertaken of “nearby” pairs (moving up and down the network of concepts and predicates) looking to see what rationalization sub-processes have been used with their elements so that a similar approach can be tried. The result is that the system can develop a style of rationalization, which Abelson and Carroll describe as “paranoid,” “apologetic,” “Polyannic,” or a blend of the three (29).
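A sketch of this reuse, assuming a simple log keyed by element name; the “nearby” search here is a one-step neighborhood of instances and qualities, a simplification of the paper’s process:

from collections import defaultdict

strategy_log = defaultdict(list)   # element name -> strategies that succeeded

def record_success(concept, predicate, strategy):
    strategy_log[concept].append(strategy)
    strategy_log[predicate].append(strategy)

def preferred_strategies(concept, predicate, elements):
    # Gather strategies that have worked for these elements, or for elements
    # one vertical step away; trying these first is what lets a distinctive
    # rationalization style accumulate over time.
    nearby = [concept, predicate]
    for name in (concept, predicate):
        el = elements.get(name)
        if el:
            nearby += [e.name for e in el.instances + el.qualities]
    return [s for n in nearby for s in strategy_log[n]]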
Eliza, the ideology machine, and the evaluation of models
As mentioned earlier, Weizenbaum was clearly impressed by Abelson and Carroll’s paper. Writing of future work on Eliza he outlined an “intermediate goal” strongly informed by their work:
Eliza should be given the power to slowly build a model of the subject conversing with it. If the subject mentions that he is not married, for example, and later speaks of his wife, then Eliza should be able to make the tentative inference that he is either a widower or divorced. Of course, he could simply be confused. In the long run, Eliza should be able to build up a belief structure (to use Abelson’s phrase) of the subject and on that basis detect the subject’s rationalizations, contradictions, etc. Conversations with such an Eliza would often turn into arguments. Important steps in the realization of these goals have already been taken. Most notable among these is Abelson’s and Carroll’s work on simulation of belief structures. (1966, 43)
Potentially, an Eliza that could accept scripts of this imagined sort would embody a more powerful model — not just transforming the audience’s most recent statement based on keywords, but able to draw on the history of the conversation and a developing structure of information (presumably stored in concept-predicate pairs) to offer a wider variety of responses. Of course, the authoring effort involved in creating such a script would also be greater, but so might be the opportunities for insightful construction (as we see in the original Eliza script, which can transform a statement such as “Everybody hates me” into the reply “Can you think of anyone in particular?”).
In other words, such a system could be seen as an important, if limited, step beyond the simple transformation logic of Eliza and the directed graph logic of dialogue trees. Like Eliza, it would generate things to say by following rules. Like a dialogue tree, it would maintain a state of the conversation. But, crucially, the state would not be one of a set of pre-determined possible points — rather, it would be an interconnected, evolving data structure — and the rules would employ this structure, rather than just pre-written sentence templates for decomposition and reassembly. This points toward the power of employing models when making media.
But an Eliza embodying these goals was never built. More strikingly, despite Weizenbaum’s evidently positive view of Abelson’s work in his 1966 paper, I was able to find no mention of Abelson in Weizenbaum’s 1976 book. This is particularly notable given that, among its other contents, Computer Power and Human Reason contains something of a history and survey of the artificial intelligence field, including a chapter on “Computer Models in Psychology” — an area with which Abelson is strongly associated.
It may help us understand this shift to consider the rift already visible in their mid-1960s publications. In his 1966 paper one of Weizenbaum’s stated goals was to “rob Eliza of the aura of magic to which its application to psychological subject matter has to some extent contributed” (43). Abelson and Carroll, on the other hand, describe a rather different goal:
By an individual belief system we refer to an interrelated set of affect-laden cognitions concerning some aspects of the psychological world of a single individual. . . . Our use of the technique of computer simulation is intended to maximize the explicitness with which we state our assumptions and the vividness with which the consequences of these assumptions are made apparent. The operation of simulated belief systems can be played out on the computer and the details scrutinized in order to refine our level of approximation to real systems. (24)
In pursuing “approximation to real systems” (actual human belief systems) Abelson and Carroll also report that Gilson and Abelson have begun empirical study of the credibility test. Specifically, their study focused on the system’s requirement that, for a statement to be found credible via search, at least half the instances of a concept be found connected to at least one instance of the predicate. This means that one predicate can be taken as representative of the entire class to which it is connected, but not so for one concept. In the study, questions were constructed in an attempt to see if human experimental subjects would show the same propensity and, as Abelson and Carroll report, “Although the difference was not of substantial magnitude . . . the subjects were more willing to generalize over predicates than over concepts, as predicted” (30).
This approach — constructing a theory of the functioning of human cognition in the form of a model, implementing it as a computer program, and evaluating it via comparison with human behavior — has a history as long as that of the term “artificial intelligence” itself, stretching back at least to Allen Newell and Herbert Simon’s mid-1950s work on General Problem Solver (Edwards, 1997, 251). But it is also one that Weizenbaum came to reject with scorn. He wrote a particularly tongue-in-cheek response to Kenneth Colby’s work of this sort on paranoia, in a letter to the ACM Forum — in which he reported some supposed new results of his own: “The contribution here reported should lead to a full understanding of one of man’s most troublesome disorders: infantile autism. Surely once we have a faithful and utterly reliable simulation of the behavioral aspects of this, or any other mental disorder, we understand it” (1974, 543). The PL/1 program accompanying the letter reads:
AUTISM: PROCEDURE OPTIONS (MAIN);
DECLARE C CHAR (1000) VARYING;
DO WHILE TRUE;
GET LIST(C);
PUT LIST(' ');
END;
END AUTISM;
(543)
Weizenbaum explains that his program “responds exactly as does an autistic patient — that is, not at all. I have validated this model following the procedure first used in commercial advertising by Carter’s Little Liver Pills (‘Seven New York doctors say . . .’) and later used so brilliantly by Dr. K. M. Colby in his simulation of paranoia.”6 Weizenbaum’s point, of course, is that machines that act like humans don’t necessarily tell us anything about the humans, a conclusion in stark contrast with much early work at the boundary of artificial intelligence and cognitive science, including Abelson’s. Suffice it to say, Abelson was likely relieved to notice his absence from Weizenbaum’s index.
Notes
2Goldwater wrote, in his most famous book, a sentence that captures his viewpoint’s paranoia, “We are confronted by a revolutionary world movement that possesses not only the will to dominate absolutely every square mile of the globe, but increasingly the capacity to do so: a military power that rivals our own, political warfare and propaganda skills that are superior to ours, an international fifth column that operates conspiratorially in the heart of our defenses…” (2007, 82). As for his use of the word “evil,” he commented, just a few pages later, “some Americans fail to grasp how evil the Soviet system really is” (99).
3Some of these were smears by association. David Greenberg, writing in Slate (2000), offers two examples from well-known figures: McCarthy (“Alger—I mean, Adlai”) and Richard Nixon (“Adlai the Appeaser . . . who got a Ph.D. from Dean Acheson’s College of Cowardly Communist Containment”). The McCarthy here is Joe, famous for his red scare witch hunts, who is linking Stevenson with the accused Russian spy Alger Hiss. Acheson, the other person linked to Stevenson above, was a major figure in the U.S. State Department during the construction of post-WWII policies such as the Marshall Plan, the same period when McCarthy made his wild claims that the department was “thoroughly infested with communists.” Interestingly, these smears are perfectly constructed for implementation in the hierarchical data connections of Abelson’s system — positioning Stevenson as a more-specific instance of a concept for which the other man (Hiss or Acheson) is also an instance.
4Even more than they had planned, for many readers, given that Stevenson died unexpectedly only weeks after the article’s publication.
5Abelson and Carroll, amusingly, give the following example of “Reinterpret Final Goal.” The pair “My simulation produced silly results” must be rationalized, because the concept (“my simulation”) is evaluated positively while the predicate (“produced silly results”) is evaluated negatively. First, the predicate is taken apart, leaving “silly results.” Next, in a search, the system finds the pair, “Silly results enrich my understanding.” Since the predicate “enrich my understanding” is positive (and has a stronger evaluation than “produced silly results”), the search succeeds. The simulation may have produced silly results, but these may lead to enriched understanding.
6Howard Gardner (1985) reports that Weizenbaum and Colby were collaborators before an acrimonious split.
February 13th, 2008 at 3:24 pm
Assume “he is married” was meant to be “he is not married”?
February 13th, 2008 at 8:50 pm
The journalism teacher in me suggests that, rather than “complex,” here something like “nuanced” might be more appropriate. Since Stevenson’s ambitions were never put into action, his idealism might just as easily be characterized as simplistic. If your summary is intended to echo the opinions of Abelson and Carroll, then of course take my suggestion with a grain of SALT (I or II).
February 14th, 2008 at 9:52 am
Right — thanks for the help! Maybe I should get someone to read the quotes aloud with me. Nick and I did that for The New Media Reader and caught a few errors.
February 14th, 2008 at 10:06 am
Dennis, I think you’re right that “nuanced” is a better word. At the same time, I’m not sure we should say that Stevenson’s ideas about international relations weren’t put into action. While he may not have become president, as the US ambassador to the United Nations he was, as Alexander Leitch puts it, “probably the best known and most popular representative at the United Nations.” During that tumultuous period, I have the impression his personal popularity and open mind were helpful in US diplomatic efforts.
February 14th, 2008 at 12:35 pm
This is fascinating stuff.
February 14th, 2008 at 2:53 pm
Fair enough. The references to Reagan and Bush had got me thinking along presidential terms, so I was perhaps looking for a discussion of Stevenson’s ideological successors. Carter’s troubles in Iran and Afghanistan come to mind, along with his successes in SALT and the Camp David Accords, which is why I think “nuanced” is such a useful term.
February 15th, 2008 at 4:04 pm
Glad you think so. It was a pleasure doing research for this book, digging into the history of these systems.