Thanks for your reply!

What we’re actually doing here is defining “automated ontology identification”
(Flagging that I didn’t understand this part of the reply, but don’t have time to reload context and clarify my confusion right now)
If you deny the existence of a true decision boundary, then you’re saying that there is just no fact of the matter about the questions we’re asking automated ontology identification to answer. How then would we get any kind of safety guarantee (conservativeness or anything else)?
When you assume a true decision boundary, you’re assuming a label-completion of our intuitions about e.g. diamonds. That’s the whole ball game, no?
But I don’t see why the platonic “true” function has to be total. The solution does not have to be able to answer ambiguous cases like “the diamond is molecularly disassembled and reassembled”; we can leave those unresolved and let the reporter say “ambiguous.” I might not be able to test for ambiguity-membership, but as long as the ELK solution can:
1. Know when the instance is easy,
2. Solve some unambiguous hard instances, and
3. Say “ambiguous” to the rest,
Then a planner—searching for a “Yes, the diamond is safe” plan—can reasonably still end up executing plans that keep the diamond safe. If we want to end up in realities where we’re sure no one is burning in a volcano, that’s fine, even if we can’t label every possible configuration of molecules as a person or not. The planner can just steer into a reality where the question resolves unambiguously, without worrying about undefined edge cases.
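Here is a minimal toy sketch of that picture, just to make the structure concrete. Everything in it is made up for illustration and isn’t from the ELK report: the world-state strings, the lookup-table reporter, and the hypothetical `choose_plan` helper are all assumptions. The point is only that the reporter is deliberately partial, and the planner executes a plan only when the reporter unambiguously certifies its predicted outcome as safe.

```python
from enum import Enum
from typing import Optional


class Verdict(Enum):
    SAFE = "safe"
    UNSAFE = "unsafe"
    AMBIGUOUS = "ambiguous"


# Toy stand-ins for world states; real ELK instances are nothing this simple.
EASY_CASES = {
    "diamond sitting in vault": Verdict.SAFE,
    "diamond carried off by robber": Verdict.UNSAFE,
}
UNAMBIGUOUS_HARD_CASES = {
    "camera feed spoofed, vault empty": Verdict.UNSAFE,
}


def reporter(world_state: str) -> Verdict:
    """A *partial* reporter: confident on easy cases and on some
    unambiguous hard cases, explicitly ambiguous on everything else."""
    if world_state in EASY_CASES:
        return EASY_CASES[world_state]
    if world_state in UNAMBIGUOUS_HARD_CASES:
        return UNAMBIGUOUS_HARD_CASES[world_state]
    # e.g. "the diamond is molecularly disassembled and reassembled"
    return Verdict.AMBIGUOUS


def choose_plan(plans_to_outcomes: dict) -> Optional[str]:
    """Conservative planner: execute a plan only if the reporter
    unambiguously says its predicted outcome keeps the diamond safe.
    Ambiguous outcomes are skipped, never gambled on."""
    for plan, outcome in plans_to_outcomes.items():
        if reporter(outcome) == Verdict.SAFE:
            return plan
    return None  # nothing certifiable; do nothing rather than risk it


if __name__ == "__main__":
    candidate_plans = {
        "teleport the diamond atom by atom": "diamond molecularly disassembled and reassembled",
        "lock the vault door": "diamond sitting in vault",
    }
    # Prints "lock the vault door": the ambiguous plan is never selected.
    print(choose_plan(candidate_plans))
```

The structural claim is just that nothing forces the planner into the region where the reporter’s answer is undefined; it can always prefer plans whose outcomes it can certify.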