That’s okay; there’s no formalized theory behind it. But for the sake of conversation:
It seems you once agreed that multiple agents in the same epistemic state in different possible worlds can define strategies over their future observations in a way that looks like trading utilities: http://lesswrong.com/lw/102/indexical_uncertainty_and_the_axiom_of/sht
When I treat priors as a kind of utility, that’s interpretation #4 from this Wei Dai post: http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/
Really, the only things that seem in any way novel here are the ideas that the space of possible worlds might include worlds that work by different mathematical rules, and that which worlds count as possible is contingent on the agent’s priors. I don’t know how to characterize how math works in a different world, other than by saying explicitly what the outcome of a given computation will be. You can think of that as forcing the structural equation that would normally compute “1+1” to output “5”, where the graph setup would somehow keep that logical fact from colliding with proofs that “3-1=2” (for worlds that don’t explode, i.e., where the contradiction can’t be used to derive everything), which is what I thought Eliezer meant by creating a factored DAG of mathematics here. That covers only a very limited case of illogical calculation, one where our reasoning process produces results close enough to their analogues in the target world that we’re even able to make some valid deductions. Maybe other worlds don’t have a big book of platonic truths at all (ambiguity or instability), and cross-world utility calculations just don’t work. In that case, I can’t think of any sensible course of action.
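To make the “forced structural equation” picture concrete, here is a minimal toy sketch (my own construction, purely illustrative; the class, node names, and graph layout are hypothetical, not anything from Eliezer’s post): arithmetic facts as nodes in a DAG, where a do-style intervention overrides one node’s equation while leaving unrelated factors intact.

```python
# Toy sketch of a factored DAG of arithmetic facts (hypothetical construction).
# An intervention forces one node's output; only its descendants inherit
# the counterfactual value, so unrelated facts like "3-1=2" are untouched.

class FactoredMathDAG:
    def __init__(self):
        self.equations = {}      # node -> (parent names, function of parent values)
        self.interventions = {}  # node -> forced value (a do-operator)

    def add(self, node, parents, fn):
        self.equations[node] = (parents, fn)

    def do(self, node, value):
        """Force a node's output, severing it from its own equation."""
        self.interventions[node] = value

    def eval(self, node):
        if node in self.interventions:  # the forced value wins
            return self.interventions[node]
        parents, fn = self.equations[node]
        return fn(*(self.eval(p) for p in parents))

g = FactoredMathDAG()
g.add("1+1", [], lambda: 1 + 1)
g.add("3-1", [], lambda: 3 - 1)             # a separate factor: no shared parents
g.add("(1+1)+2", ["1+1"], lambda x: x + 2)  # downstream of the forced node

g.do("1+1", 5)            # the counterfactual world's rule
print(g.eval("1+1"))      # 5 -- forced
print(g.eval("(1+1)+2"))  # 7 -- inherits the forced value
print(g.eval("3-1"))      # 2 -- untouched: the contradiction doesn't propagate
```

The point of the factoring shows up in the last line: because “3-1” shares no parents with the forced node, the forced falsehood never reaches it.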
I don’t think this is totally worthless speculation, even if you don’t agree that “a world with different math” makes sense, because an AI with faulty hardware or faulty reasoning will still need to reason about mathematics that works differently from its mistaken inferences. That probably requires a partial correspondence between how the agent reasons and how the world works, just as the partial correspondence between worlds with different mathematical rules allows some limited deductions with cross-world or other-world validity.
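As a toy illustration of that partial correspondence (again my own sketch; the single-input fault is hypothetical): an agent whose adder is wrong on one input can still detect the fault, because identities like associativity, which hold in the mathematics it is reasoning about, fail along the faulty computation path.

```python
# Toy sketch: a faulty adder that the agent can still partially trust,
# because re-deriving the same quantity along a different path exposes
# the mismatch. The specific fault is a hypothetical example.

def faulty_add(a, b):
    if (a, b) == (1, 1):  # a single hardware fault
        return 5
    return a + b

def associativity_check(a, b, c):
    """Associativity holds in real arithmetic; a violation flags the fault."""
    left = faulty_add(faulty_add(a, b), c)   # (a+b)+c
    right = faulty_add(a, faulty_add(b, c))  # a+(b+c)
    return left == right

print(associativity_check(1, 1, 2))  # False: the fault breaks the identity here
print(associativity_check(2, 3, 4))  # True: these inputs avoid the fault
```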