Thoughts on the frame problem and moral symbol grounding
(some thoughts on frames, grounding symbols, and Cyc)
The frame problem is a problem in AI having to do with all the variables not expressed within the logical formalism—what happens to them? To illustrate, consider the Yale Shooting Problem: a person is going to be shot with a gun at time 2. If that gun is loaded, the person dies. The gun will get loaded at time 1. Formally, the system is:
alive(0) (the person is alive to start with)
¬loaded(0) (the gun begins unloaded)
true → loaded(1) (the gun will get loaded at time 1)
loaded(2) → ¬alive(3) (the person will get killed if shot with a loaded gun)
So the question is, does the person actually die? It would seem blindingly obvious that they do, but that isn’t formally clear—we know the gun was loaded at time 1, but was it still loaded at time 2? Again, this seems blindingly obvious—but that’s because of the words, not the formalism. Ignore the descriptions in italics, and the suggestive names of the LISP tokens.
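To make the ambiguity concrete, here is a minimal sketch (a Python brute-force added for illustration, not part of the original formalism): enumerate every truth assignment for loaded(0..2) and alive(0..3), keep the ones consistent with the four statements above, and check whether they agree about alive(3).

from itertools import product

# enumerate every assignment to loaded(0..2) and alive(0..3)
models = []
for loaded in product([False, True], repeat=3):
    for alive in product([False, True], repeat=4):
        holds = (alive[0]                                  # alive(0)
                 and not loaded[0]                         # ¬loaded(0)
                 and loaded[1]                             # true → loaded(1)
                 and ((not loaded[2]) or (not alive[3])))  # loaded(2) → ¬alive(3)
        if holds:
            models.append((loaded, alive))

print({alive[3] for loaded, alive in models})   # {True, False}

Both values of alive(3) survive: the four statements are also satisfied by models in which the gun quietly becomes unloaded between times 1 and 2.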
Since that’s hard to do, consider the following example. Alicorn, for instance, hates surprises—they make her feel unhappy. Let’s say that we decompose time into days, and that a surprise one day will ruin her next day. Then we have a system:
happy(0) (Alicorn starts out happy)
¬surprise(0) (nobody is going to surprise her on day 0)
true → surprise(1) (somebody is going to surprise her on day 1)
surprise(2) → ¬happy(3) (if someone surprises her on day 2, she’ll be unhappy the next day)
So here, is Alicorn unhappy on day 3? Well, it seems unlikely—unless someone coincidentally surprised her on day 2. And there’s no reason to think that would happen! So, “obviously”, she’s not unhappy on day 3.
Except… the two problems are formally identical. Replace “alive” with “happy” and “loaded” with “surprise”. And though our semantic understanding tells us both that “(loaded(1) → loaded(2))” (guns don’t just unload themselves) and that “¬(surprise(1) → surprise(2))” (being surprised one day doesn’t mean you’ll be surprised the next), we can’t tell this from the symbols.
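Indeed, the identity is a pure renaming; a small sketch (again for illustration, not from the original post):

# renaming the predicates in the shooting axioms yields the surprise axioms, token for token
shooting = ["alive(0)", "¬loaded(0)", "true → loaded(1)", "loaded(2) → ¬alive(3)"]
surprise = [ax.replace("alive", "happy").replace("loaded", "surprise") for ax in shooting]
print(surprise)
# ['happy(0)', '¬surprise(0)', 'true → surprise(1)', 'surprise(2) → ¬happy(3)']

Any reasoner that works only on the symbols has to give both axiom sets exactly the same answers.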
And we haven’t touched on all the other problems with the symbolic setup. For instance, what happens to “alive” at any time other than 0 and 3? Does it change from moment to moment? If we want the words to do what we want, we need to put in a lot of extra logical conditions, so that all our intuitions are captured.
This shows that there’s a connection between the frame problem and symbol grounding. If we and the AI both understand what the symbols mean, then we don’t need to specify all the conditionals—we can simply deduce them, if asked (“yes, if the person is dead at 3, they’re also dead at 4”). But conversely, if we need a huge number of logical conditions, then there is less and less that the symbols could actually mean. The more structure we put into our logic, the fewer structures there are in the real world that fit it (“X(i) → X(i+1)” is something that can apply to being dead, not to being happy, for instance).
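As a toy illustration of that last point (the trajectories below are made up for the example, not data from anywhere): a persistence constraint like X(i) → X(i+1) already filters the candidate interpretations of X.

def persistent(xs):
    # does the trajectory satisfy X(i) → X(i+1) at every step?
    return all((not xs[i]) or xs[i + 1] for i in range(len(xs) - 1))

dead = [False, False, True, True, True]    # once true, stays true
happy = [True, False, True, False, True]   # flips back and forth

print(persistent(dead))    # True: "dead" remains a possible reading of X
print(persistent(happy))   # False: "happy" is ruled out by the constraint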
This suggests a possible use for the Cyc project—the quixotic attempt to build an AI by formalising all of common sense (“Bill Clinton belongs to the collection of U.S. presidents” and “all trees are plants”). You’re very unlikely to get an AI through that approach—but it might be possible to train an already-existing AI with it. Especially if the AI had some symbol grounding, there might not be all that many structures in the real world that could correspond to that mass of logical relations. Some symbol grounding + Cyc + the internet—and suddenly there aren’t that many possible interpretations for “Bill Clinton was stuck up a tree”. The main question, of course, is whether there is a similarly restricted meaning for “this human is enjoying a worthwhile life”.
Do I think that’s likely to work? No. But it’s maybe worth investigating. And it might be a way of getting across ontological crises: you reconstruct a model as close as you can to your old one, in the new formalism.
It seems to me that your system is incomplete: it does not fully model what you are claiming it models. To have the system completely describe what you want it to, you must add additional statements, such as:
loaded(2) → loaded(3)
and
(alive(2) ∧ ¬loaded(2)) → alive(3)
These are just as vital parts of the system as the statements that you included; you just arbitrarily left them out. You could just as easily have left out “alive(0)” and claimed that your system does not tell you whether the person is alive at time 0. Of course it doesn’t! You need to include all the parts of the system if you want it to work as a model.
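To illustrate (a sketch extending the brute-force above, not part of the original comment): add persistence statements loaded(t) → loaded(t+1) for the gun, and alive(3) is pinned down in every remaining model.

from itertools import product

outcomes = set()
for loaded in product([False, True], repeat=4):     # loaded(0..3)
    for alive in product([False, True], repeat=4):  # alive(0..3)
        holds = (alive[0] and not loaded[0] and loaded[1]
                 and ((not loaded[2]) or (not alive[3]))                       # loaded(2) → ¬alive(3)
                 and all((not loaded[t]) or loaded[t + 1] for t in range(3)))  # loaded(t) → loaded(t+1)
        if holds:
            outcomes.add(alive[3])

print(outcomes)   # {False}: with the persistence statements added, the person dies in every model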
Yeah, that’s kinda the point :-)
You have to include all the parts, but in the real world, you can’t.
The frame problem has been solved for 20 years (AIMA recommends the successor state axiom approach). It was an issue in planning, but planning guys don’t even mention it anymore; they just get work done.
The solutions are all mentioned in the Wikipedia article you linked to.
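For reference, a successor-state axiom gathers everything that can change a fluent into a single biconditional per fluent, so persistence no longer needs separate frame axioms. A rough sketch of the pattern for the example above (a paraphrase of the general form, using hypothetical action symbols load, unload and shoot that the original formalism doesn’t contain):

loaded(t+1) ↔ load(t) ∨ (loaded(t) ∧ ¬unload(t))

alive(t+1) ↔ alive(t) ∧ ¬(shoot(t) ∧ loaded(t))

Anything not mentioned on the right-hand side leaves the fluent as it was, which is exactly the persistence the original four statements fail to express.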
Yes, the formal problem is solved. The informal problem—how to deal with non-specified transitions in a complicated real system when you can’t specify all the transitions—isn’t solved, and may be AI-complete (and possibly equivalent to an AI mastering semantics).
Ok, but that’s not about frames anymore; it’s just a special instance of the general ontology/knowledge representation problem. The frame problem is algorithmic: people didn’t know how to efficiently update states as time passes. We know now.
You can’t reach that kind of certainty with high-level human concepts. We could bring the person back to life. We could copy their body and then both bring them back to life and keep them dead. We could change the person so much that your named-entity recognition predicate broke down.
There is nothing I have seen in Cyc that looks any more adequate as a pre-chewed, easily digestible knowledge base than Wikipedia. A grammar of hashes and arrows and statements of troponymy makes Cyc easily parsed, not easily understood.
Yes, so there’s the additional problem of errors: how do you account for formal logical systems that seem to track real concepts, but only do so approximately (or only within the usual bounds of human experience)?
Try Cyc + Wikipedia + the rest of the internet. The advantage Cyc has is that people have been writing down blatantly obvious statements, in a way that would just be assumed elsewhere. Maybe language-learning tools would have similar “over-obvious” statements explicitly in them?
The fewer symbols you have, the more meanings they can have.
Interestingly, in human language, the more a particular symbol is used, the more meanings it ends up having. (Pinker 2007)
It might be the case that even when the plethora of symbols is very large, they still don’t ‘touch’, ‘reach’, or ‘track’ the world the right way. So instead of keeping in mind the one world, and seeing whether a more complex and fuller map is better or worse at representing it, it could be useful to keep in mind, for each particular map structure, the infinitely many different worlds it could represent. Just as a heuristic.
As an exercise in humility, perhaps—but neither that point of view nor the single-world view is any good for the question of “how well is this tracking reality—will the decisions be wonky?”
We need maths of some sort...