May 31, 2003

(Sharing) Control

by Andrew Stern · 2:22 pm

In the comment thread of Narrative as Virtual Reality, Lisa asks, “Why would an author *want* to yield the authorial control of a piece to some sort of AI engine?”

That’s a really good and interesting question. To me, the issue of who has control (or, similarly, agency) over an interactive artwork is the primary one. There seem to be at least three parties who could be in (or share) control of an interactive artwork: the original author, the user, and the work itself (e.g., the AI).

The original author can of course exert control over the experience by establishing what content is there in the first place, and the means to access that content (e.g., the interface, the user actions offered). The user has control over the experience within the limits circumscribed by the actions and interface the author has designed. That is, when the user plays a game or reads a hypertext, the user is obviously exerting control over what content occurs in that session, effectively participating in defining the experience. Additionally, if the original author decides to do so, the system itself can be programmed with enough autonomy and generativity to make some, or many, of the decisions about when content occurs, how content is accessed, or even what the content is in the first place. In other words, the system (the AI) can become a co-author. Or, in the case of extremely generative artwork, the AI itself could be considered the author, and the human who programmed the AI becomes some sort of meta-author.

So, to your question, “Why would an author *want* to yield the authorial control of a piece to some sort of AI engine?” There’s a certain joy in being a co-author with an AI of your own creation. It’s a bit like collaborating with someone else, except in this case you’ve created your collaborator. (Let me say, although the AIs I’ve worked on are meant to behave in lifelike ways, I by no means view those AIs as more than complex machines. However, I aspire to create an AI that is more intelligent, that does more to surprise me with what it generates.)

So, it’s a partnership, really. Perhaps you’re worried about a scenario where the human author somehow relinquishes *all* control to the AI, losing their own authorship. Well, even if the AI is very generative, the human is still a meta-author of the experience; *someone* had to program the AI in the first place. More typically, though, especially in the case of digital fiction in the near and not-so-near future, a human author will supply much of the content themselves (ideally in pieces of very small grain size), and allow the AI to sequence, combine, and recombine that content, and perhaps generate some new content too.
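To give a rough sense of that last idea, here is a purely hypothetical sketch, not code from any actual system: imagine the author writes many small beats of content, each with a precondition and an effect, and a very simple engine decides which beat occurs next. All of the names below (Beat, StoryEngine, precondition, effect) are invented for illustration.

```python
# A hypothetical, minimal sketch: the human author writes small grain-size
# "beats" of content; a simple engine sequences and recombines them.
# All names here (Beat, StoryEngine, etc.) are illustrative only.

import random
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Beat:
    """One small, author-written piece of content."""
    text: str
    # May this beat occur, given the current story state?
    precondition: Callable[[Dict], bool] = lambda state: True
    # How does performing this beat change the story state?
    effect: Callable[[Dict], None] = lambda state: None


@dataclass
class StoryEngine:
    """The 'co-author': decides which authored beat occurs next."""
    beats: List[Beat]
    state: Dict = field(default_factory=dict)

    def next_beat(self) -> Optional[Beat]:
        # Gather the beats whose preconditions currently hold...
        candidates = [b for b in self.beats if b.precondition(self.state)]
        if not candidates:
            return None
        # ...and let the engine (here, simple chance) choose among them.
        choice = random.choice(candidates)
        choice.effect(self.state)
        self.beats.remove(choice)
        return choice


# Author-supplied content, in small pieces the engine is free to recombine.
beats = [
    Beat("The host pours a drink and changes the subject.",
         effect=lambda s: s.update(tension=s.get("tension", 0) + 1)),
    Beat("A guest brings up an old photograph.",
         effect=lambda s: s.update(tension=s.get("tension", 0) + 2)),
    Beat("An argument finally erupts.",
         precondition=lambda s: s.get("tension", 0) >= 2),
]

engine = StoryEngine(beats)
beat = engine.next_beat()
while beat is not None:
    print(beat.text)
    beat = engine.next_beat()
```

In that little sketch the human authored every line of text; the “AI” (trivially simple here) only decided ordering and co-occurrence. The same division of labor holds as the engine gets smarter: the human remains an author, and the system becomes more of a co-author.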