March 23, 2008
Link Madness, Part 1: the Hyperbolic
I occasionally make posts composed of link dumps, to help GTxA readers find articles they might enjoy and may have missed. This time I need to split the dump into two parts, the first part being a set of articles ranging from the slightly over-the-top to the truly hyperbolic. I will gently attempt to challenge, refute or debunk each as I go. :-)
- Hypertext boring? That’s the assertion Ben Vershbow made in a post that leads with a commentary on Hypertextopia, spawned from an earlier GTxA post. I’ve certainly been one to vent my issues with hypertext as a form for fiction, but “boring”, hypertext isn’t. Like Nick’s Portal v. Passage post, though, Ben’s post spawned a good discussion, including reactions elsewhere (1 2 3); in the discussion, Ben admits to being deliberately provocative. (As a side note, Ben is a developer of CommentPress, used to implement Noah’s Expressive Processing blog-review project here on GTxA.)
- In the annual GDC rant session, Clint Hocking asked: why don’t game developers take more risks towards making more meaningful games? Showing screenshots of Passage and The Marriage, Clint reportedly said, “Two guys tinkering in their spare time have moved things forward more than the rest of the industry.” While I agree with 99.9% of what Clint says in general, that statement seemed a bit of a stretch. As much as I appreciate Passage as a good game analog of a poem, and The Marriage as a short experiment in modelling the dynamics of human relationships, to me they are each small steps forward, no more important than the accomplishments of a variety of commercial games, such as The Sims, the Half-Life series, The Last Express, and virtual pets.
- Jesper links to and gives commentary on a physical avatar product, ConnectR, an iRobot product about to go on the market. Wow, that’s pretty cool, though I find it difficult to believe people will like it. Then again, Roomba made it into the pilot of Knight Rider, so…
- Games are Art links to video clips of some conversational robots from Japan, including Qrio. I have my doubts the robot really performs as well as it does in the video.
- The Escapist has an article on how to build a holodeck. The crazy thing to me is, the article only focuses on the hologram side of things, completely ignoring the AI and game design requirements… which ultimately may be harder to implement than the holograms. (Speaking of the holodeck, Tale-of-Tales recently presented a picture tour through some of the interactive games that have appeared in various Star Trek TV series.)
- There is a small movement of AI developers and enthusiasts towards artificial general intelligence. That in itself, while extremely ambitious, seems like a reasonable research direction; there was recently a conference on the topic with over 120 people in attendance, blogged by Ben Goertzel. Goertzel, who has his own startup called Novamente to build AGI-related products, has the idea to tap into MMOs as a resource for training AGIs. Seems like a good idea — akin to Jeff Orkin’s research, perhaps. However, he starts going a bit off the rails with statements like this:
It seems possible to harness the “wisdom of crowds” phenomenon underlying these Internet phenomena [such as Google, Wikipedia] for AGI, enabling AGI systems to learn from vast numbers of appropriately interacting human teachers. There are no proofs or guarantees about this sort of thing, but it does seem at least plausible that this sort of mechanism could lead to a dramatic acceleration in the intelligence of virtually-embodied AGI systems, and maybe even on a time-scale faster than the pathway to Singularity-enabling AGI that Ray Kurzweil has envisioned, which brain-scanning and hardware advances lead to human-brain emulation, which then leads on to more general and powerful transhuman AGIs.
Too much handwaving going on there for my taste.
- As you probably know, Ray Kurzweil keynoted GDC this year, predicting a Full Intelligence Sim by 2029.
- However, my list of hyperbolic links doesn’t peak there… that designation goes to Selmer Bringsjord and his latest project, Rascals, a collaboration with Steve Nerbetski. You may recall Noah’s deconstruction of Bringsjord and David Ferrucci’s Brutus system, a story generator whose claims of narrative intelligence garnered significant press. So far, this time around, only EETimes, Slashdot and a few technophile sites couldn’t resist reporting on Rascals, an “AI program that thinks like a four year old”, rolling out this Fall. Don’t believe cynical old me though: see for yourself by downloading videos of the AI in action, performing within their Second Life testbed.
(Note, it’s not the research itself that troubles me, it’s the claims that accompany it.)
Let’s come down to earth in Part 2 of my link dump…
March 24th, 2008 at 7:14 pm
You may be interested in this exchange. My argument is, whatever the limitations, AGI and smoke and mirrors is more interesting than mere AI and smoke and mirrors; the problem comes when one assumes the AGI means more than it does in that instance.
Also, when you say “hand waving”, are you trying to play down speculation at the expense of near-term pragmatic discussion? Or are you emotionally perturbed by the idea that reality-as-you-know-it could cease to exist in the very near future?
March 24th, 2008 at 10:27 pm
By handwaving, I mean it’s quite a leap from an agent observing player behavior in MMOs — for example see my mention of Neil Madden’s work here — to a “dramatic acceleration” that results in highly intelligent agents. How does one get from A to B? There’s a lot of ongoing research in the field of machine learning; it’s a really hard problem.
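To make the gap concrete, here is a toy sketch (my own illustration, not from Goertzel’s or Madden’s work; all names and the log data are hypothetical) of what “an agent observing player behavior” might mean at its very simplest — tallying state/action pairs from player logs and imitating the majority choice. Everything between this and “highly intelligent agents” is the part left unexplained:

```python
from collections import Counter, defaultdict

def learn_policy(observations):
    """observations: iterable of (state, action) pairs logged from players.

    Tallies which action players take in each game state, then returns a
    policy that imitates the most frequently observed action per state.
    """
    counts = defaultdict(Counter)
    for state, action in observations:
        counts[state][action] += 1
    return {state: actions.most_common(1)[0][0]
            for state, actions in counts.items()}

# Hypothetical log of observed player behavior in a game world.
log = [
    ("low_health", "flee"), ("low_health", "flee"), ("low_health", "attack"),
    ("enemy_near", "attack"), ("enemy_near", "attack"),
]
policy = learn_policy(log)
print(policy["low_health"])   # imitates the majority behavior: flee
print(policy["enemy_near"])   # attack
```

Simple frequency imitation like this is easy; generalizing from it to open-ended, transferable intelligence is the hard (or wicked) part.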
When it comes to worrying about reality-as-I-know-it ceasing to exist due to advances in AI in my lifetime — I sleep like a baby.
Not that I don’t expect reality to be different in significant ways, say, 20 years from now. 20 years ago (I was a freshman in college), I couldn’t hold a small wireless device in my hand that allows me to get almost any tidbit of information / human knowledge I wanted at any time.
March 25th, 2008 at 6:37 am
I don’t understand what this AGI is supposed to be. I think of ‘generalization’ as a process, and ‘generalized’ as a comparative adjective. So is generalized AI pretty much just the integration of multiple mere-AI? Does it need to use an algorithm/process which is more singular/cohesive/elegant than simply a sum of mere-AI?
Phrased another way, if I’m working on an AGI which is incomplete (or otherwise limited) and it can only currently do a subset of things I want it to do, how is that different from somebody else working on mere-AI that can only do that same set of things?
The Wikipedia page on Strong AI (which is what Google returns for AGI) seems to imply that AGI/Strong AI is an anthropomorphic specialization of the more generalized mere-AI.
March 25th, 2008 at 3:15 pm
Andrew: “By handwaving, I mean it’s quite a leap from an agent observing player behavior in MMOs — for example see my mention of Neil Madden’s work here — to a “dramatic acceleration” that results in highly intelligent agents. How does one get from A to B? There’s a lot of ongoing research in the field of machine learning; it’s a really hard problem.”
Is it a hard problem or a wicked problem? I suspect it’s a wicked problem with various hard problems embedded in it, and the reason Ben is being so assertive with his proverbial hand is that he suspects he’s got a wicked solution that handles sufficiently at least one hard problem. I must admit, when it comes to the Singularity, I want to believe, though recognizing that desire keeps me from falling into the trap of belief. If I can wring some good interactivity out of Novamente, or similar architectures, then that’s wicked.
I suspect there will be, for at least the next two or three years, relative strengths to narrow and general AI applied to games, like employing marionettes versus a Deus ex Machina: the latter is less focused, less goal-oriented, while the former can be wielded to more subtle craft. There’s definitely an Uncanny Valley situation that limits AGI-powered agents from inhabiting specific dramatic roles until a major qualitative leap has been made.
“I don’t understand what this AGI is supposed to be. I think of ‘generalization’ as a process, and ‘generalized’ as a comparative adjective. So is generalized AI pretty much just the integration of multiple mere-AI? Does it need to use an algorithm/process which is more singular/cohesive/elegant then simply a sum of mere-AI?”
As I understand it, narrow-AI is designed around a specific problem or application, while a general AI is an architecture that can be applied to whatever problem or activity its underlying dynamics provide suitable results for. I think it does need to be more than a Frankenstein. Novamente uses the MOSES algorithm along with probabilistic learning networks and a variety of other components, so you could say there is a degree of stitched-togetherness under the hood, in the sense that bones and sinew and a bloodstream stitch together limbs and organs. I’m really not an expert on AI architecture though; you should visit their site and look at some of the lit they have. If you want the real nuts and bolts, the architecture is documented in some books Dr. Goertzel has written.
“Phrased another way, if I’m working on an AGI which is incomplete (or otherwise limited) and it can only currently do a subset of things I want it do, how is that different from somebody else working on mere-AI that can only do that same set of things?”
The AGI aggregates pattern data that can, at least in Novamente, be converted to new processes, so the function of the system is outside the scope of its applications.
March 26th, 2008 at 12:29 am
It’s wickedly hard, and not hardly wicked.
March 30th, 2008 at 2:19 pm
Just two days ago, New Scientist reported on Goertzel’s research.
Bruce Blumberg, anyone? Geez, research even 5 years old seems to go unmentioned.