June 28, 2005

commonsense

by Mary Flanagan · 3:40 pm

In my research on a new collaborative project, meme.garden (with d. howe), I happened to re-explore some ‘commonsense’ databases, including the commonsense project at MIT, “Open Mind.” Of the various approaches to making effective search engines, this one, with its reliance on real people’s knowledge aggregated over time, seems promising.

Yet such a system is rife with problems, as one can imagine. Some net researchers and bloggers have criticised it for containing too many ‘garbage’ entries to be effective, as well as plain factual errors from contributors who mean well.

Liu, Lieberman, and Selker at MIT are engaged in the mission of making better searches. In their 2002 article on a proposed search engine, GOOSE, they discuss the act of problem solving in searches; among other threads, they research the ways that people move from goals to actual keywords in their searches, tracking inference chains. The authors note that most search engines do one of the following (a rough sketch of the first approach appears after the list):
1) use thesaurus-style tools to expand topics,
2) ask users for relevance feedback, or
3) use question templates (such as those used in Ask Jeeves).
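
As a loose illustration of the first approach (this is not code from the GOOSE paper, and the tiny thesaurus below is invented for the example), a thesaurus-style query expander might look something like this in Python:

    # Toy sketch of thesaurus-style query expansion (approach 1 above).
    # TOY_THESAURUS is a made-up, hand-written table, not a real resource.
    TOY_THESAURUS = {
        "car": ["automobile", "vehicle"],
        "cheap": ["inexpensive", "affordable", "budget"],
        "fix": ["repair", "mend"],
    }

    def expand_query(query):
        """Return the original query terms plus any thesaurus synonyms."""
        expanded = []
        for term in query.lower().split():
            expanded.append(term)
            expanded.extend(TOY_THESAURUS.get(term, []))
        return expanded

    # expand_query("cheap car fix") ->
    # ['cheap', 'inexpensive', 'affordable', 'budget',
    #  'car', 'automobile', 'vehicle', 'fix', 'repair', 'mend']

A real system would draw on something like WordNet rather than a hand-made table, but the basic move of broadening a query with related terms is the same.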

I’ll be posting other related search engine material while researching…

4 Responses to “commonsense”


  1. andrew Says:

    When I was working at Zoesis last year, Push Singh came to talk to us about Open Mind, as we were developing our own ontology and knowledge representation system for creating better conversational characters. I agree that Open Mind did seem to have some potential in general, although we couldn’t easily find a way to apply it to our particular research problems at the time.

    Separately, I’ll go ahead and link back to a post from last April that pertains to this, in case it’s of help.

  2. mary Says:

    thx andrew!

  3. Rob Says:

    I’m very skeptical of automated ontology acquisition techniques, not because they contain so much garbage, but because it’s impossible to feed them anything that won’t eventually decay into garbage. The problem I see is usage context: automatic tools can’t comprehend the makeshift human ontologies that we make up for practical purposes all the time, which can only lead them to erroneous conclusions.

    “Revenge is a dish best served cold,” for example. It’s not exactly “false.” But if taken as a context-free fact, it might very easily lead the machine to infer that revenge is probably no bigger than a pizza box, that it should be stored in a refrigerator, and that it might last a few weeks in the freezer. :)
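
    A toy sketch of that kind of naive chaining (every ‘fact’ and property name below is invented for the example):

        # Naive inheritance over context-free "facts"; every assertion is made up.
        FACTS = {
            "revenge": {"is_a": "dish"},
            "dish": {
                "fits_in": "pizza box",
                "stored_in": "refrigerator",
                "lasts_weeks_in": "freezer",
            },
        }

        def infer(entity):
            """Blindly inherit every property along is_a links, with no usage context."""
            props = dict(FACTS.get(entity, {}))
            parent = props.pop("is_a", None)
            while parent is not None:
                parent_props = dict(FACTS.get(parent, {}))
                parent = parent_props.pop("is_a", None)
                for key, value in parent_props.items():
                    props.setdefault(key, value)  # more specific facts win
            return props

        # infer("revenge") -> {'fits_in': 'pizza box', 'stored_in': 'refrigerator',
        #                      'lasts_weeks_in': 'freezer'}

    The machine isn’t wrong about dishes in general; it just has no way to know that this particular “dish” never sees a refrigerator.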

    I only used a metaphorical example because it’s an easy target, :) but the problem also extends to non-metaphorical usage. For example, what is the ontological status of a “bottle”? Certainly a bottle is a container, but in some contexts it isn’t, such as decorative bottles that are not containers at all. A bottle can also be any of a myriad of other objects: a doorstop, or a weapon, or an instrument, or a vase… And vice versa, can we categorize anything context-free as a doorstop? That would be difficult too, since whether something actually works as a doorstop is a function of the object being used, the door, the floor material, and everything else that goes into the task of keeping the door open. Doorstops are defined by the task, not vice versa.

    Objective definitions are impossible. How about that for an objective definition? :)

  4. Grand Text Auto » New AI Links: Books, Code Releases, Articles and a TV Show Says:

    […] What Really Happened?“, about Push Singh and Chris McKinstry. It’s a sad story. (Here’s a GTxA post from 2005 touching on Push’s […]
