October 4, 2004

N. Katherine Hayles — Living in Computational Spaces: Means and Metaphors

by Noah Wardrip-Fruin · 9:59 pm

I’m in So Cal for Alt+Ctrl (where I’m giving a lunchtime talk tomorrow, and attending the opening on Thursday evening) which gave me a chance to attend the first talk in UC Riverside’s “Global Interface” series. N. Katherine Hayles gave the opening talk: “Living in Computational Spaces: Means and Metaphors.” The alternative title, drawn from her forthcoming book, was My Mother was a Computer. Below are my notes.

Should computational “spaces” be considered metaphorically or literally? It’s worth thinking about, but it’s more important to think about how computation as means and metaphor work together.

What is computation? It’s a process, not a medium. Not just what happens inside a digital computer — also Hillis’s example of the Tinkertoy computer, von Neumann’s vision of doing it with I-beams, and so on.

What is computation?
– simple elements
– small set of logical operations (parsimonious)
– using this limited base, accomplishing much (well illustrated in something like Wolfram’s argument in A New Kind of Science)
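A minimal sketch of that parsimony (my example, not Hayles’s): the NAND operation alone is functionally complete, so NOT, AND, OR, XOR — and from them binary arithmetic — can all be derived from a single simple element.

```python
# Everything from NAND: "simple elements, a small set of logical operations,
# accomplishing much." NAND is functionally complete; derive the other gates
# from it, then a one-bit full adder, the building block of binary arithmetic.

def nand(a, b):
    return 1 - (a & b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

# 1 + 1 + 0 = binary 10
print(full_adder(1, 1, 0))  # (0, 1)
```

Chain enough of these adders together and you have an arithmetic unit — much accomplished from one primitive.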

Wolfram goes further than many in claims for simple computational systems — as in his “Principle of Computational Equivalence.”
– All complex behavior can be simulated computationally
– Simulations are “computationally irreducible” (very different from traditional mathematical modeling)
– “Natural” behaviors are themselves generated by computation (Wolfram waffles a bit — are they modeled by simulations, or do they actually operate that way?)

Computational universe in relation to classical metaphysics:
– Ontological requirements: a bare minimum
– Elementary distinction between one and zero
– Small set of logical operations
– Emergence of unpredictable phenomena as simulation progresses
– Complexity that comes from the bottom-up emergent processes, not from the starting point (e.g., God)

Cellular automata as universal Turing machine — Wolfram’s “Rule 110.” The surprise is not that universality is possible for high-dimensional cellular automata, but that it can be achieved by a one-dimensional, two-state one. The research was done by Wolfram’s employee Matthew Cook. Wolfram sued to keep Cook from presenting it at a scientific conference, because Wolfram wanted to break the story in his book.
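For readers who haven’t seen it, Rule 110 is easy to run: each cell’s next value depends only on itself and its two neighbors, with the eight cases encoded in the bits of the number 110. A minimal sketch (fixed-width row, dead boundary cells assumed):

```python
# Rule 110: new cell = f(left, self, right), the rule table encoded
# in the binary digits of 110 (0b01101110).
RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        idx = (left << 2) | (cells[i] << 1) | right  # neighborhood as 0..7
        out.append((RULE >> idx) & 1)
    return out

row = [0] * 31 + [1] + [0] * 31  # start from a single live cell
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Running it from a single live cell already shows the irregular, non-repeating structures that carry Cook’s universality proof.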

Ed Fredkin, in Digital Philosophy, wants to show that particle physics can emerge from cellular automata. The universe is digital, time and space are not continuous but discrete. He goes farther than the “computational universe” — we are software running on a universal computer. A metaphysical claim.

Criticisms of Wolfram:
– Cellular automata do not evolve above one level of complexity. The cells can create complex patterns, but those patterns can’t become the primitives for a next level of complexity. Luc Steels claims to have made this happen, but people aren’t yet convinced.
– Downplays role of evolution.
– Does not show actual physical mechanisms at work.
– Ambiguity about whether real systems use same mechanisms.

Hayles thinks that Wolfram and Fredkin don’t pay enough attention to the connection between digital and analog processes. To stretch digital and analog a bit, it’s like the DNA code (digital) needing protein folding (analog) to get anywhere. Like Wolfram’s analog (or mixed analog/digital) consciousness looking at the digital patterns his automata created.

Morowitz on The Emergence of Everything, and its implications:
– Very large possibility space — how to deal with “trans-computable” problems?
– Pruning algorithms and selection rules
– Constraints function as explanation
– Simulations rather than explicit mathematical equations

Once you do this:
– Observer folded into system; not based on assumptions of “objectivity”
– Everything emerges from underlying computation
– Code as the discourse system of nature (a big promotion!)

Metaphor or means?
– As a metaphor, like seeing the universe as a clockwork mechanism in the 18th century
– As means, a literal picture of how reality works
– What does the metaphor/means binary not capture?

Military’s “network-centric warfare” using the “ontology of code.” A vast challenge to the chain of command. Turning the military into self-organizing swarms that carry out the commander’s program. So the military has taken an idea like the computational universe, and now is reorganizing the material world in its image. So the means are reformulated in terms of the metaphor. We need to understand code as means and metaphor together.

Further, without computational technology the metaphor would not have traction.

“What we make and what (we think) we are co-evolve together.” Bipedalism made it easier to carry tools, and tool use made bipedalism an evolutionary advantage.

What we make
What we are
What we think we are

A turn to Greg Egan’s fiction to demonstrate the potential of the “computational universe” as a shift in understanding at the level of the Copernican revolution.

But first a brief connection to Zizek’s “symptom as explanation” (and Niklas Luhmann, “We do not see what we do not see”). The letter always arrives because its destination is wherever it arrives. We could see the computational universe as a symptom — reasoning back from a cultural situation. A New Kind of Science as a letter addressed to Wolfram’s beliefs. Zizek: the letter that follows us is our own death. Gap with Egan: embodiment as optional.

Permutation City: Letter arrives three times:
– Durham plus Paul the Copy
– Paul tries to “bale out” — letter delayed
– Experiments in consciousness
– Time delay
– Familiar from computation

Some of this is familiar from Moravec’s Robot. Moravec claims that computations are the same even if run backwards — but computations use previous results as they move forward. For Egan this difference isn’t a problem; in fact it’s his central element. Where you start an intermittent computation, and the direction you run it, can produce runs in different directions that overlap at particular states — which allows the original and the copy to intersect but also take different paths.

Something like Conway’s “Game of Life” determines a possibility space. The grid, the rules. For some limited spaces (a 9×9 Conway grid) we’ve computed all the possible permutations. Any one state is a particular epiphenomenon that we can see as related to all the other known states. Something larger, of course, we can’t compute all of. But we can understand our lives, for example, as epiphenomena in a gigantic possibility space. This is the shift in perception of the computational universe.
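The “possibility space” idea can be made concrete on a board small enough to enumerate. A sketch (my illustration, not from the talk): on a 3×3 toroidal Life grid there are only 2^9 = 512 states, so we can map every state to its successor and see each individual configuration as one point in the complete state graph. A real 9×9 grid already has 2^81 states — far beyond exhaustive enumeration.

```python
from itertools import product

N = 3  # 3x3 toroidal board: 2^9 = 512 total states

def life_step(state):
    """state: tuple of N*N cells (0/1), row-major, wrap-around neighbors."""
    def cell(r, c):
        return state[(r % N) * N + (c % N)]
    nxt = []
    for r in range(N):
        for c in range(N):
            live = sum(cell(r + dr, c + dc)
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            # Conway's rules: birth on 3 neighbors, survival on 2 or 3
            nxt.append(1 if live == 3 or (cell(r, c) and live == 2) else 0)
    return tuple(nxt)

# The whole possibility space: every state mapped to its successor.
successor = {s: life_step(s) for s in product((0, 1), repeat=N * N)}
reachable = set(successor.values())
print(len(successor), "states;", len(successor) - len(reachable),
      "have no predecessor (Garden-of-Eden states)")
```

Any one state is then exactly what the notes describe: an epiphenomenon located within the full space of permutations the grid and rules determine.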

4 Responses to “N. Katherine Hayles — Living in Computational Spaces: Means and Metaphors”


  1. WRT: Writer Response Theory - Mondo 2000 Says:
    […] idea being articulated here resembles N. Katherine Hayles’ very recent description of the Computational Universe. But in the language, there’s […]

  2. Keith Says:

    To make any sense of this you should try the Postmodernism Generator!

    http://www.elsewhere.org/cgi-bin/postmodern

  3. WRT: Writer Response Theory » Blog Archive » i.plot therefore i.write Says:

    […] ecking. i.plot is a combination search engine and semantic web with a bit of I Ching-like possibility space thrown in. The work builds on a previous p […]

  4. Joe Says:

    This is wonderful work.

    If we take the cellular automata idea a little farther, and apply the rules not only to a grid but to a graph — no more than a series of points with connecting lines — a new ‘type’ of rule/behavior becomes available (transforming the connections between points) and with it a corresponding new kind of complexity. Despite the simplicity of the axioms governing evolution, the resulting behavior rapidly becomes so chaotic as to be actually unpredictable. It’s because the graph is interfering with itself, a short circuit.

    Yet even the simple versions of these automata can be considered, in some sense anyway, as isomorphic to the ‘system’ of computation itself. With a little skill in deciding the rules for interpretation, you can use the automata as little calculators for doing any computation you can formalize. Perhaps a little crack opens up between P and NP, that in using the machine to ‘theorize’ itself, we manage to ‘think’ the boundary of computability — and so perhaps cross it…? Perhaps. This is the paradox of AI at its heart: as soon as we discover a new algorithm, we understand it, we ‘work it out’ and so it is no longer ‘truly’ AI: we will push this boundary back until we discover either something so simple it cannot be explained further, or something so complex it cannot be explained at all (I’m thinking here of quantum mechanics, for instance.)

    Thanks for this!

    Joe

Powered by WordPress