You do realize that people are working on logical uncertainty under limited time, and that this could tell an AI how to re-examine its assumptions? I admit that Gaifman at Columbia deals only with the case where we know the possibilities beforehand (at least in the part I read). But if the right answer has a description in the language we’re using, then it seems like E.T. Jaynes addresses this, at least in principle, when he recommends keeping an explicit probability for ‘other hypotheses.’
Then again, if this approach didn’t come up when the authors of “Tiling Agents” discussed utility maximization, perhaps I’m overestimating the promise of formalized logical uncertainty.
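To be concrete about the Jaynes point: here’s a toy sketch (my own illustration, not anything from Gaifman or the “Tiling Agents” paper) of keeping an explicit probability on a catch-all ‘other hypotheses’ bucket. The coin-flip setup and the flat likelihood for the catch-all are assumptions for the example; the idea is just that when the data fit none of the named models, mass flows to the catch-all, which is roughly the signal to go back and re-examine one’s assumptions.

```python
# Two named hypotheses plus a Jaynes-style catch-all with a deliberately
# vague (flat) likelihood standing in for "some hypothesis I haven't written down".

def lik_heads_biased(flip):   # P(flip | coin with 90% heads)
    return 0.9 if flip == "H" else 0.1

def lik_tails_biased(flip):   # P(flip | coin with 90% tails)
    return 0.1 if flip == "H" else 0.9

def lik_other(flip):          # flat stand-in for the unarticulated alternatives
    return 0.5

priors = {"heads-biased": 0.45, "tails-biased": 0.45, "OTHER": 0.10}
liks = {"heads-biased": lik_heads_biased,
        "tails-biased": lik_tails_biased,
        "OTHER": lik_other}

def update(posterior, flip):
    """One Bayesian update over the named hypotheses plus the catch-all."""
    unnorm = {h: p * liks[h](flip) for h, p in posterior.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

posterior = dict(priors)
for flip in "HTHTTHHTHT":     # roughly 50/50 data: fits neither named model well
    posterior = update(posterior, flip)
print(posterior)              # most of the mass ends up on OTHER
```

Running this leaves roughly 95% of the posterior on OTHER, which is exactly the “time to articulate a new hypothesis” flag, as I read Jaynes’s recommendation.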