August 28, 2004

ISEA 2004: art/sci and Penny’s paper

by Michael Mateas · 12:06 pm

This post was jointly written by Michael and Noah.

ISEA was a remarkable concentration of people bringing together technology and the arts. But there was an odd fixation, in many of the discussions, on the notion of “art/sci collaborations.” It seems that many who spoke at ISEA think of “artist” and “scientist” as exclusive categories — inhabitants of each unable to even glimpse far within the culture of the other, much less participate in both cultures. Significant work is needed, we were told, to find better ways for these vastly different beings to communicate and collaborate, so that the work of art/sci can move forward.

What makes this puzzling is that much of the foundational work for the ISEA community was created by people like Myron Krueger — people who worked on both the scientific and artistic aspects of their projects. People who saw these aspects as inextricably entwined (or “deeply intertwingled”) rather than as the separate territory of deeply different types of people.

And, of course, a look around the audience made it seem even stranger. ISEA was full of people who had identities in both scientific and artistic communities, and who do technical and artistic work simultaneously. GTxA representatives had some heated hallway discussions of these issues with ISEA attendees like Simon Penny, Ken Perlin, and Todd Winkler, whose hybrid identities were basically left out of many of the discussions.

(This isn’t to disparage collaboration in any way, of course. Neither Michael nor Andrew could have created Facade by himself — but both worked on both the intertwined artistic and technical aspects. And even collaborations that include those identified solely as artists or scientists are fine. This is just an argument in favor of recognizing the possibility of hybridity.)

In any case, on our second day in Tallinn the conversation finally moved from the hallway into the auditorium. Simon Penny gave a paper entitled “After Interdisciplinarity: New Pedagogy, New Identities.” Michael and Noah both found it quite thought-provoking, so we’re writing a collaborative post about it. (Incidentally, we tried using SubEthaEdit, used previously for collaborative posts on GTxA, to do a collaborative coast-to-coast edit. Sadly, this didn’t work (we kept getting disconnected, the document wouldn’t stay synced, etc.), though we’ll try it again with SubEthaEdit 2.0 (we were using 1.5) after a needed OS upgrade is complete.)

Noah:
Simon talked about a phase change in our field, as we move from being solely a research area into also being a vibrant field of cultural production. As this happens, we have to realize that problems of “interdisciplinarity” and “collaboration” (which consumed much of our early focus) are historically contingent, rather than necessary to this type of undertaking. They may exist as strongly today simply because none of us have had an appropriate education.

Michael:
Yeah, the notion of new media artists needing a new education, one that combines technical, theoretical and artistic concerns, resonates strongly with my own thinking. New Media practice isn’t the simple “application” of existing technologies to art, but rather a research discipline that simultaneously pushes forward conceptual and technical concerns. The issue of science/art collaboration is one of those perennial discussions at venues like ISEA. And I think those discussions never get anywhere precisely because asking “how should scientists, technologists and artists communicate across their disciplinary boundaries?” is the wrong question. What was refreshing about Simon’s talk is that he’s rephrasing the question as “how do we create scientist/technologist/artists?” Simon is exploring this issue through the creation of ACE (Arts, Culture, Engineering), an interdisciplinary master’s and Ph.D. program at UC Irvine. At Georgia Tech, we’re trying to figure this out through our new Ph.D. program in Digital Media, our master’s program in Information Design and Technology, and our new undergraduate degree program in Computational Media.

Noah:
Next Simon outlined what he viewed as the two major types of artifacts considered at ISEA: (1) new media, and (2) machine-based systems which use computation in their presentation (not just production). Note that this is an inversion of how Nick and I defined “new media” for The New Media Reader.

Michael:
He gave the acronym CACA (Computer-Aided Cultural Artifact) to the second type of new media artifact, and proceeded to use this acronym through the rest of his talk. Amusing, though I don’t think anyone is going to adopt it. In this vein, Ken Perlin suggested that my work is Simulated Heuristic Interactive Technology (and that it was, indeed, a “good” or “hot” instance of it). In the terms of Simon’s argument, it would have made more sense for him to call the type 1 media artifacts (traditional media artifacts produced using a computer) CACA :).

Noah:
Yes, but Simon did a good job of explaining why we should stay away from the term media when we mean something like “CACA” (though he didn’t manage to convert me). Put simply, Simon’s argument was that we don’t use the term “media” for things that behave. We use it, instead, to talk about “the media,” or blank CDs, or handmade paper, or… Of course, someone like Simon can’t help but start taking apart his new terminology as soon as he’s introduced it. Not just with the fact that “CACA” is obviously a joke of an acronym, but also pointing out that “artifact” sounds awfully material, and the necessity for distancing his focus on behavior from “behaviorism,” and so on. The real forward movement here, it seems to me, isn’t the debate about whether to call what’s under discussion “media” — but rather Simon’s focus on the “dynamics and poetics of behavior.” It’s to grapple with these that we need the new education he advocates.

Michael:
Simon referred to type 2 artifacts as “machines for generating a media residue.” Media objects are produced as a by-product of interaction with the machine. In my class we just finished looking at the excerpts from Burnham’s Software show catalog in the New Media Reader, and I can’t help but notice the relationship between “machines for generating media residue” and Les Levine’s piece System Burn-off, with its investigation of art, information about art, information about information about art, etc. In some of my writings where I’ve used structuralist semiotic theory to get a handle on the relationship between program code as a sign system, and the sign systems experienced by audience members, I’ve used the phrase “syntagmatic generativity” to refer to the media residue produced by procedural art (in my own case, I’m specifically interested in AI-based art).
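
One crude way to picture “machines for generating a media residue”: the procedural system is the work, and each run of it leaves behind a linear trace. Here is a toy sketch in JavaScript; the character, its actions, and the behavior model are all my own hypothetical illustration, not anyone’s actual system:

```javascript
// Toy sketch: an autonomous "character" whose behavior, when run,
// leaves a linear trace -- the "media residue" of one interaction.
// All names and the toy behavior model are hypothetical illustration.

function runCharacter(steps, seed) {
  const actions = ["wander", "wave", "speak", "pause"];
  const trace = [];
  let state = seed;
  for (let i = 0; i < steps; i++) {
    // simple linear congruential generator stands in for behavior selection
    state = (state * 16807 + 12345) % 2147483648;
    trace.push(actions[state % actions.length]);
  }
  // the residue: a fixed, linear record of one run of the machine
  return trace;
}

const residue = runCharacter(5, 42);
console.log(residue.join(" -> "));
```

The point of the sketch is only the distinction it enacts: `runCharacter` is the type 2 artifact, while `residue` is the by-product media object a given interaction leaves behind.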

Noah:
Speaking of the work of that earlier era, one of the moves I appreciated in Simon’s talk was the way he drew in our field’s history. Or, really, the way he practiced his version of assembling a history for the kind of work he was discussing. He made a nice series of moves through inventors like Leon Theremin; early computational media (the love letter generator for the Manchester Baby); kinetic sculpture (as described by Software curator Jack Burnham); those who brought computation to these sculptures; Weizenbaum’s Eliza; Myron Krueger; 80s telematic art experiments (which presaged later network culture); alife and synthetic character work (e.g., Craig Reynolds); and so on. From his point of view, it’s necessary that we begin to have students learn this history (which means, as a field, we’re going to have to investigate it further and create more resources).

Michael:
I wrote a related capsule treatment of AI-based artwork in this earlier GTxA post. I gave a short presentation of some of these ideas on Noah’s panel at ISEA.

Simon referred to the inherently destabilizing nature of this kind of interdisciplinary practice. This is a nice antidote to the “interdisciplinary love-fest” model of collaboration. When technical disciplines (e.g. artificial intelligence), theoretical engagements (e.g. science studies) and art practice are brought together, all three must change. You don’t get to continue doing business-as-usual in any of the disciplines. Radical rethinking, and the development of new practices and agendas, are required in all the disciplines brought together. The instabilities and crises of identity that must be navigated are a far cry from the “and the lion shall lie down with the lamb” Pollyanna school of interdisciplinary practice (has such a practice actually ever worked?).

Noah:
And, to start things out, Simon went into a pretty biting discussion of how existing media theory is inadequate — how it must change in order to take into account the fact that we’re not talking about images, but rather about machines for making images. In fact, he did a capsule version of his essay from First Person. Unfortunately, he went on to talk about how CACA “isn’t narrative” and made a rather naive and dated anti-hypertext argument. (When I was complaining about this in a taxi later, Michael pointed out that most people think literary hypertext is a static structure of nodes and links. Which is kind of amazing. Because I don’t know of much widely-discussed literary hypertext that has this structure. Starting with Afternoon and the other famous Eastgate titles, the structure has changed based on reader activity — something Storyspace titles usually implemented with guard fields, and other projects have done with JavaScript (e.g., with the Connection Muse) or other technologies.) Simon also appears to have missed what Nick so nicely points out, which is that we have things like narrative poetry. Anyway, enough about that. Simon was out to raise some hackles, and for important reasons. He gained points by citing Facade as a counterexample toward the end of his “not narrative” discussion. He also talked about how CACA “isn’t primarily visual” — a position I won’t rant against.
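
For readers unfamiliar with guard fields: a guard field attaches a condition to a link, so the link only becomes available once the reader’s history satisfies it — which is why these hypertexts aren’t static node-and-link structures. A minimal sketch of the idea in JavaScript (node names and the API are hypothetical, not Storyspace’s actual implementation):

```javascript
// Minimal sketch of guard-field-style conditional linking:
// a link fires only when its guard, a predicate over the set of
// previously visited nodes, is satisfied. Names are hypothetical.

const visited = new Set();

const links = [
  { from: "begin", to: "clue", guard: () => true },
  { from: "begin", to: "revelation", guard: (v) => v.has("clue") },
];

function visit(node) {
  visited.add(node);
}

function availableLinks(node) {
  // the structure the reader sees depends on where they have been
  return links.filter((l) => l.from === node && l.guard(visited));
}

visit("begin");
console.log(availableLinks("begin").map((l) => l.to)); // "revelation" still hidden
visit("clue");
console.log(availableLinks("begin").map((l) => l.to)); // now both links available
```

The same effect — link structure changing with reader activity — is what web-based pieces have done in JavaScript directly.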

Michael:
In defining CACA, Simon situated the interaction of human and machine as a subset; he opens the field to arbitrary ecosystems of interacting entities. This is fundamentally a cybernetic move, subverting the human/non-human subject/object boundary. The first half of Katherine Hayles’s How We Became Posthuman does a nice job describing how cybernetics played with these human/non-human boundaries.

Noah:
And, in that human-machine vein, and having gone after theory earlier (“the difference between theory and practice is greater in practice than it is in theory”), Simon next turned his sights on Human-Computer Interaction. He said that HCI is fundamentally Fordist and Taylorist — whereas CACA is concerned with poetics. So traditional HCI is as limited in usefulness as traditional media theory. The idea that computers are general-purpose is clearly false, if we look at the form in which we actually use them. Why do we sit at desks to use computers? Our boxes are enhanced typewriters, and what most HCI studies is office efficiency.

Michael:
Yes, where HCI is interested in accomplishing tasks, in productivity, Simon said art practice “doesn’t give a bugger about productivity”, but is rather concerned with the poetics of experience. There are of course those in the HCI community who are concerned with experience rather than tasks, people like Bill Gaver, Phoebe Sengers, Kia Höök. Folk like these are trying to do within HCI practice what I’m trying to do within AI practice: establish a self-reflective critical technical practice. This move from productivity to poetics describes my own career trajectory. In a prior life I was an HCI professional; CHI was my primary conference. At Intel Labs I co-founded with Tony Salvador a group devoted to open-ended ethnographic inquiry and advanced prototyping of interactive technologies suggested by that inquiry. We called the group Garage Ethnography and Applications Research (GEAR); the group has continued to thrive since my departure from Intel (back in 1996) and is now called People and Practices. In the studies that I participated in, Families with Young Children and 17/18 Year Old Teens, it was clear from the qualitative spatial/temporal/social models of these micro-cultures that the opportunities for computational intervention weren’t task-based, but were fundamentally about media, culture, and social, emotional, and aesthetic experience. I had already been moving away from a concern with “productivity” for a number of years, but these studies pretty much sealed it for me. I haven’t done ethnographic work for years, but I’m starting to think about how to bring it back into my practice as a form of site-specific art practice.

Incidentally, Phil Agre’s Computation and Human Experience is an amazing book that defines and describes through detailed theoretical and technical examples the notion of a critical technical practice.

Noah:
At ISEA Ken Perlin started talking about organizing a gathering, in the near future, specifically for those who work across these boundaries, who try to have identities in both communities. If we can make sure that people like Simon and Phil Agre are there, then such a gathering would be certain to dig deep and get at the problems inherent in such practice. Though we also need to simply affirm that such people exist, and share positive stories, strategies, and outlooks.

Michael:
Sounds like a great idea to me. Phoebe and I have been talking about making such a thing happen for several years. Maybe now is the time to finally do it.

4 Responses to “ISEA 2004: art/sci and Penny’s paper”


  1. Christy Dena Says:

    Just thought I’d add a couple of tongue-in-cheek names I gave to the artist/technician/scientist hybrid in a uni paper: ‘artech’ or ‘artest’.

    :)

  2. andrew Says:

    Wow, sounds like Simon gave a really useful and entertaining talk!

    Simon referred to type 2 artifacts as “machines for generating a media residue.” Media objects are produced as a by-product of interaction with the machine.

I’m a little confused — how is it that systems that behave create residue? Does a performance create residue? Plays, dance, performance art? They don’t seem to. Do machines like computer games or interactive drama create a residue? Even though the experience comes to you via the screen, it’s as ephemeral as a play or performance art, unless recorded. Is it that the mere fact that they “existed” on a screen (the phosphors on the monitor lit up) during the performance is considered the residue, versus being “real” (light bounced off actors on stage into our retinas)?

    Simon referred to the inherently destabilizing nature of this kind of interdisciplinary practice. This is a nice antidote to the “interdisciplinary love-fest” model of collaboration. When technical disciplines (e.g. artificial intelligence), theoretical engagements (e.g. science studies) and art practice are brought together, all three must change.

    I like that a lot, that makes a lot of sense. The honeymoon is over, baby.

    “the difference between theory and practice is greater in practice than it is in theory”

    excellent!

There are of course those in the HCI community who are concerned with experience rather than tasks, people like Bill Gaver, Phoebe Sengers, Kia Höök. Folk like these are trying to do within HCI practice what I’m trying to do within AI practice

    Ah, I hadn’t exactly yet thought of Phoebe et al.’s work in that way, very helpful.

    Great joint write up, dudes, thanks.

  3. noah Says:

    Hmm… A thought just occurred to me, reading the comments here. Simon talked about how art, theory, and technical disciplines must change when brought together. Then he gave examples: we can’t just apply existing media theory, and we can’t just work within the existing mode of HCI. But — I can’t remember — did he talk with any specifics about how we can’t just employ existing art practices? Or would that have risked a riot in the auditorium?

  4. michael Says:

    I’m a little confused — how is it that systems that behave create residue?

    I think what he means is that the linear trace of the behavior elicited by some particular interaction (e.g. recording the actions of an autonomous character) is the media residue.

    Put simply, Simon’s argument was that we don’t use the term “media” for things that behave. We use it, instead, to talk about “the media,” or blank CDs, or handmade paper, or…

I’m fine calling procedural systems a medium. I sometimes describe Expressive AI as making the move of considering AI as a medium. Central to the concept of a medium is the concept of representation. And behaving systems are representations, they’re just procedural representations. This also avoids the confusion that Andrew highlights above.

    did he talk with any specifics about how we can’t just employ existing art practices?

    I don’t remember him talking specifically about this. Perhaps distancing CACA from media was part of this; if artists are fundamentally about creating media objects (but, then, what about conceptual art?), and behaving systems aren’t media, then something pretty fundamental is changing. One of the ways art practice changes is that it becomes very much a research practice, operating on longer time scales, and requiring the adoption of some of the engineering practices necessary to build complex systems.
