March 4, 2004

Turkle on Emotional Agents

by Andrew Stern, 10:07 am

The Boston Globe ran an article last Sunday about Sherry Turkle, who is hosting an “Evocative Objects” symposium tomorrow, Friday, March 5, at MIT.

The article touches on emotional robotic and virtual characters, including robots in nursing homes.

What has become increasingly clear to her is that, counterintuitively, we become attached to sophisticated machines not for their smarts but their emotional reach. “They seduce us by asking for human nurturance, not intelligence,” she says. “We’re suckers not for realism but for relationships.” … “We need a national conversation on this whole area of machines like the one we’re having on cloning,” Turkle says. “It shouldn’t be just at AI conventions and among AI developers selling to nursing homes.”

We’ve discussed emotion in the context of virtual characters and games several times here on GTxA (1 2 3 4).

15 Responses to “Turkle on Emotional Agents”


  1. greglas Says:

    Andrew,

    Fwiw, when we were kicking around the AIBO love topic last time, I found this interesting study on kids and AIBOs:

    http://www.idemployee.id.tue.nl/c.bartneck/chi2004/Kahn.pdf

  2. ian wilson Says:

    Reading that article left me thoroughly depressed. Yet again, depressingly uninformed journalism (with the obligatory “machines are bad / humans are special” ending) paired with a depressingly uninformed (or badly quoted) but famous scientist.

    Maybe if I become famous I can become the “go-to guy” for interactive story (even though it is not my specialty). I will simply read the articles and work on this site and then quote the obvious to bad science / technology journalists.

    Sorry for the negative tone, not usual for me. Am I off track here?

  3. gregolas Says:

    No, I think you’re on track. That’s a pretty justifiable reaction. The only thing to bear in mind is that the article is journalism, not scholarship or literature. You can’t hold it to the same standards.

    So if you picked up a few valuable facts (e.g. Sherry Turkle is doing a symposium on this topic), the article served its purpose.

  4. Malcolm Ryan Says:

    [I’m a new reader here, but I’ll just wade right in. What the heck.]

    Ian, what do you think is incorrect about the article? Is it untrue to say that we are building machines which provoke an emotional reaction? Is it wrong to question the appropriateness of that?

    Sure, we’re not going to build anything even remotely as emotionally rich as another human being yet (or ever?), but it seems to me that the phenomenon is happening and we should be asking questions about it now, rather than waiting until it becomes an issue, shouldn’t we?

    Then again, the phenomenon is hardly new. People have always grown attached to their possessions. I was very fond of my first car. Psychologists call it “cathexis”, I think. It’s generally okay, I suppose, but it can go too far. There’s an old saying that people love what they should use, and use those they should love.

    Malcolm

  5. andrew Says:

    Malcolm, welcome! (I’d love to join in the discussion right now, but I’m currently on the road…)

  6. ian wilson Says:

    Malcolm, thanks for joining in.

    To review, I did not say the article was incorrect, I said it was uninformed, which is not the same thing. My point was that the very fact of a science journalist interviewing an expert leads the lay reader to take those comments as representing the “state of the art” [as it seems from your comments you did too, perhaps]? However, this is not the case at all.

    You mentioned that “the phenomenon is hardly new”, which is absolutely spot on, and this was one of the points I wanted to convey: the article simply states the patently obvious but wraps it in prose that seems designed to scare the reader, which is depressingly familiar to many of us in this field.

    It also ends with a comment that you echo to some degree, “Sure, we’re not going to build anything even remotely as emotionally rich as another human being yet (or ever?)” — actually, we are. So the article misleads the lay reader. Why? Well, to say that machines will never be able to simulate the behavior of humans (well) would be to say that at some point very soon all advances in hardware and software will stop. This is obviously absurd. For if hardware and software continue to advance in power and sophistication (not to mention research) as they are now, then it would be illogical to suggest that “machines” would not at some point resemble, equal and eventually surpass human behavioral capabilities. Such is evolution.

    Your post did make me stop and think a while (which I always appreciate). About how the general public views this kind of technology [wait, scratch that, I have just viewed your profile and see that your field is AI]. Anyway, besides that, it is definitely not wrong to question anything. But what are these questions, especially about emotional machines? I would be interested to hear any specifics. As I personally have been living with this technology for 10 years, I have probably thought about the implications a great deal, and it is easy for me now to be a bit “blasé”. Such is my personality.

    Emotional machines (and software) already are an issue, but one that can substantially enhance our lives (see the work being done by many of the guys here). The power that understanding our own systems brings (and that understanding is a prerequisite of simulating them) is enormous; the question is whether controls are in place to prevent the abuse of this technology. That is a useful debate.

    In the meantime, nature and evolution (of which we as scientists are a fundamental and integral part) go their merry way without cause for discussion. Such is life.

    Ian

  7. B. Rickman Says:

    Aside from love and comfort, [and perhaps obsession,] Turkle doesn’t sound particularly interested in the broader scope of emotions. Of what use, after all, would an angry robot nurse be? The late Douglas Adams gave us a far more interesting collection of actual emotional robots to talk about. When trashy science fiction has more interesting things to say about a subject than an MIT professor, I tend towards science fiction.

  8. Malcolm Says:

    Well, I may as well introduce myself properly. As for my credentials, I have a PhD in AI, studying reinforcement learning. I’m currently doing research in robotics at the University of New South Wales. I’ve played many text adventures (ahem – “works of interactive fiction”) since I was a kid, and MUDded since the early 90s. I’ve even built a fair bit of stuff on LambdaMOO.

    Interactive storytelling and its connection to AI research is a recent hobby of mine. I’ve been reading Michael Mateas’s thesis, and some of the other stuff from the Oz Project. We have a similar research group starting here at UNSW, combining people from our College of Fine Arts with researchers from Computer Science. I’m kind of on the fringes of that group, interested in what they’re doing, but without the time to properly contribute at the moment.

    With that out of the way, on with the argument…

    Ian, I disagree with your opinion about the eventual progress of AI. I see it as by no means inevitable that we will create machines even remotely as intelligent as human beings. In many ways we have hardly even scratched the surface of the problem.

    But clearly that isn’t necessary. People have emotional reactions to the things we already build. Is this good or bad? We are building things to encourage this, make the bond stronger, more “real”. And it seems to be working. Is this a good thing? What are proper and fitting uses for this technology, and what is improper?

    Do we want to encourage people to build emotional relationships with machines? In what ways could this be detrimental to individuals, and to society?

    Are you really confident that it is all perfectly harmless? Just a game, maybe? I’m not so sure.

    Or perhaps you think a relationship with a computer is no different from a relationship with another human being? I feel very uneasy about that.

    These are quite serious questions. And very few people are asking them. I’m not saying that I have good answers yet. I just find the silence on these issues (and ethics in AI in general) to be disquieting.

    Malcolm

  9. Malcolm Says:

    I don’t want to sound all doom and gloom. I realise that stories, theatre, film and television have been pushing our emotional buttons for years, millennia even. Most people don’t seem to have any problems distinguishing fiction from reality – although celebrity-stalking is an obvious sign that it is not universally so. (And even the general “cult of celebrity” does not appear to me to be a very healthy pastime.)

    Perhaps the things we are building are no different from these. In which case, sure, I’d happily say they were harmless. I love losing myself in a good novel as much as the next geek.

    Malcolm

  10. ian wilson Says:

    Some very interesting and necessary questions for society in general here. I would like to hear your thoughts on the answers.

    To correct one important point, I absolutely do not think this technology is harmless. I am very well aware of its power to fundamentally alter society and individuals in maladaptive ways [rather like substance abuse].

    For me an important question is how do we delineate between humans and machines? Does that make any sense? To me it does not, conceptually; it is just that we are too fond of thinking of ourselves as something “special” [i.e. animals and humans]. We are not, we are just very complex.

    For many people the shock of this new technology and research will be in understanding more about what we really are. Moving from the poetic to the concrete is disconcerting to many people, and the press have a great time playing with emotions and scaring readers with predictions of new monsters. But, as with cloning, while it is necessary to put in place controls, the reality just becomes a mundane fact of life once it is absorbed and understood for what it really is.

    Is it bad to have an emotional relationship with a dog, for example? They are not human, and they have a nasty side, but we still do. I think this is a useful analogy and reference for this discussion. The difference here, of course, is that we are creating these entities from scratch based on our own designs, but I am not sure if that is too relevant a factor.

    As for the progress of AI [or more specifically computational neuroscience]: while we are still just below the surface [I think we can say we are beyond scratching it now], the progress made in research and development in only the past 25 years is quite substantial, though these are only the first steps. To say we will not learn more about the brain, and hence build more elaborate, complex and sophisticated systems that get ever closer to human behavioral performance, would, again, be to say that this progress will at some point stop. This is not logical. Where do you see the “glass ceiling” that will stop this increase in performance?

    For now, I will be adding a few extra CPU cycles to considering the social aspects of my work.

    Sorry if this is too much off topic [i.e. auto text]

  11. greglas Says:

    >Is it bad to have an emotional relationship with a dog for example?

    This is a great question. I haven’t brought it up too much because I think there is a very significant category difference (for ethical purposes) between human/animal and human/robot relationships. But the interesting point is that in all human/animal relationships, we find (to some degree) the use of the animal to create a largely fictional construct of anthropomorphic identity. For most pet-owners, that construct generally forms a significant part of the value of the relationship.

    Anyone disagree with that?

  12. B. Rickman Says:

    greglas – Are you trying to emphasize the “fictional” aspect of human/animal relationships in your comment? If so, why? And if the human anthropomorphizes the animal, how do you suppose the animal views the human?

  13. greglas Says:

    Well the animal, unlike the bot, actually does view the human — and that’s the point.

    Animals and humans have relationships, but they’re often structured (by humans) as if the animal were the equivalent of a mute human — which the animal is not.

    To some extent, this is a form of play by the human, but when we mourn the death of the real animal, I think (to some extent) we mourn the loss of something that was, in part, fictional.

    Just some thoughts — I’m not holding myself out as an expert on this…

  14. B. Rickman Says:

    I think when we mourn the loss of anyone or anything, human or animal, you could say the same thing about the loss of something fictional. We anthropomorphize people as well as pets, movie characters, and artificial agents. I’m not sure how the fictional aspect of these relationships is of much significance.

  15. gregolas Says:

    BR>I’m not sure how the fictional aspect of these relationships is of much significance.

    Well, actually, it is of significance in some cases.

    See, e.g.

    http://www.animallaw.info/articles/arus47villlrev423.htm
