Ok, but that wasn’t the question I was trying to solve; at this point, I was merely trying to understand what expected utility maximization over logical uncertainty could even mean, in principle. Before the insight described in this post, I was able to think about scenarios like Vladimir Nesov’s Counterfactual Mugging and Logical Uncertainty verbally, or by making a model specific to the situation at hand, but I wasn’t able to give a definition of an agent that maximizes utility over logical possibilities in enough detail to prove something about how it would behave in a given situation.
(To formalize counterfactual mugging, I wouldn’t use the uniform prior from this post, though—I’d use the definition of impossible possible world and leave the prior as a black box satisfying the obvious assumption about high digits of pi.)
[ETA: Hmm, on second thoughts, counterfactual mugging still isn’t the easiest thing to actually formalize, though :-)]
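To make the kind of calculation I have in mind concrete, here's a rough sketch in Python. The payoffs and the 50/50 prior are just the usual illustrative numbers, not definitions from the post; the "obvious assumption" about high digits of pi is only that the black-box prior doesn't put nearly all its weight on one parity.

```python
# Rough sketch of counterfactual mugging over two "impossible possible worlds":
# the relevant high digit of pi is even, or it is odd. The prior and payoffs
# are illustrative placeholders, not definitions from the post.

def expected_utility(policy, prior_even=0.5):
    """Expected utility of committing to a policy before learning the digit."""
    # World 1: the digit is even -- Omega asks the agent to pay $100.
    u_even = -100 if policy == "pay" else 0
    # World 2: the digit is odd -- Omega pays $10000 iff the agent's policy
    # is to pay in the other (now counterfactual) world.
    u_odd = 10000 if policy == "pay" else 0
    return prior_even * u_even + (1 - prior_even) * u_odd

best = max(["pay", "refuse"], key=expected_utility)
print(best)  # 'pay', as long as the black-box prior isn't too lopsided
```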
Ah, I think I see what the insight was—you’re not actually trying to solve logical uncertainty, you’re trying to solve a specific problem of UDT with logical uncertainty, as outlined in your third link, and you’re doing this by replacing a search through proofs with an iteration over “logical worlds” where the proof would be true or false.
Am I close?
EDIT: well, I guess that’s not totally true—you are trying to solve logical uncertainty, in the sense that logical uncertainty is going to be defined relative to some bounded model of decision-making, so you’re proposing a model.
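To check my reading of "iterate over logical worlds instead of searching through proofs", here's a toy version of that contrast in Python. The worlds, prior, and payoff table are all made up for illustration; presumably the post pins these objects down carefully.

```python
# Toy contrast: instead of searching for proofs about what the agent's action
# implies, enumerate "logical worlds" (truth-value assignments to the statement
# S we can't settle), weight them by a prior, and maximize expected utility.
# All names and numbers here are hypothetical.

actions = ["a1", "a2"]
worlds = ["S_true", "S_false"]

def prior(world):
    # Black-box prior over logical worlds (uniform here for illustration).
    return 0.5

def utility(action, world):
    # Made-up payoff table.
    table = {("a1", "S_true"): 1, ("a1", "S_false"): 0,
             ("a2", "S_true"): 0, ("a2", "S_false"): 2}
    return table[(action, world)]

best = max(actions,
           key=lambda a: sum(prior(w) * utility(a, w) for w in worlds))
print(best)  # 'a2' under this made-up table
```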