The “oracle” helps make the problem tractable: a) it prevents other, non-optimal programs from naively trying to simulate the world and going into infinite recursion; b) it makes the general solution algorithm implementable by unambiguously identifying the spots in the world program that are actually “oracle” invocations, which would be impossible otherwise (Rice’s theorem).
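For concreteness, here is a minimal sketch of what that looks like, assuming a toy Python rendering of Newcomb’s problem (all names here are hypothetical, not taken from the post). The point is that the oracle call is a syntactically marked black box, so a solver can locate it without having to decide arbitrary properties of the world program:

```python
# Minimal hypothetical sketch of Newcomb's problem as a world program.
# `predict` is the opaque "oracle": the world program never simulates
# the agent itself, it just invokes the oracle at one marked spot.

def world(agent, predict):
    prediction = predict(agent)               # explicit oracle invocation
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = agent()                          # the agent's actual decision
    return box_b if choice == "one-box" else box_b + 1_000

# A one-boxing agent, plus a stand-in "oracle" that cheats by running
# the agent. That is safe here only because this agent is a constant
# function; if the agent itself simulated `world`, a naive simulating
# predictor would recurse forever, which is exactly what the opaque
# oracle is meant to prevent.
one_boxer = lambda: "one-box"
print(world(one_boxer, lambda a: a()))        # -> 1000000
```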
I don’t really get the point of “decision theories”, so I try to reduce all similar problems to “algorithmic game theory” (is that an existing area?).
Edited to add: I couldn’t come up with a rigorous game-theoretic formulation without an oracle.
Why worry about non-optimal programs? We’re talking about a theory of how AIs should make decisions, right?
I think it’s impossible for an AI to avoid the need to determine non-trivial properties of other programs, even though Rice’s Theorem says there is no algorithm for doing this that’s guaranteed to work in general. It just has to use methods that sometimes return wrong answers. And to deal with that, it needs a way to handle mathematical uncertainty.
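As a toy illustration of such a fallible method (a sketch under assumed conventions, not anyone’s proposed algorithm): analyze the other program under a step budget, and return an explicit “unknown” verdict that the decision algorithm then has to treat as mathematical uncertainty.

```python
# Hypothetical sketch of a fallible property-checker. Rice's theorem
# rules out an always-correct decider, so this one can run out of
# budget and report "unknown"; a caller that guessed either way on
# "unknown" would sometimes be wrong.

def halts_within(program, arg, budget):
    """True if program(arg) halts within `budget` steps; None (unknown)
    if the budget runs out first. Programs are modeled as generators
    that yield once per computation step."""
    steps = 0
    for _ in program(arg):
        steps += 1
        if steps > budget:
            return None            # give up: mathematical uncertainty
    return True

def quick(_):
    yield                          # halts after one step

def looper(_):
    while True:
        yield                      # never halts

print(halts_within(quick, 0, budget=10))    # True
print(halts_within(looper, 0, budget=10))   # None
```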
ETA: If formalizing the problem is a non-trivial process, you might be solving most of the problem yourself during formalization, rather than letting the AI’s decision algorithm solve it. I don’t think you’d want that. In this case, for example, if your AI were to encounter Omega in real life, how would it know to model the situation using a world program that invokes a special kind of oracle?
Re ETA: in the comments to Formalizing Newcomb’s, Eliezer effectively said he prefers the “special kind of oracle” interpretation to the simulator interpretation. I’m not sure which one an AI should assume when Omega gives it a verbal description of the problem.
Wha?
If you mean my saying (3), that doesn’t mean “Oracle”, it means we reason about the program without doing a full simulation of it.
Yes, I meant that. Maybe I misinterpreted you; maybe the game needs to be restated with a probabilistic oracle :-) Because I’m a mental cripple and can’t go far without a mathy model.
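One way to make the “reason about the program without a full simulation” point above concrete (a hypothetical sketch, not necessarily what either commenter had in mind): inspect the program’s source for a recognizable pattern instead of running it, and admit an “unknown” answer outside that pattern.

```python
# Hypothetical sketch: deciding a property of a program by inspecting
# its source rather than executing it. This works only for the narrow
# class of programs the analyzer recognizes; per Rice's theorem it
# must sometimes answer "unknown".

import ast
import inspect

def always_one_boxes(agent):
    """True/False if this crude syntactic check can tell; None otherwise."""
    tree = ast.parse(inspect.getsource(agent))
    returns = [n for n in ast.walk(tree) if isinstance(n, ast.Return)]
    values = {getattr(n.value, "value", None) for n in returns}
    if values == {"one-box"}:
        return True                 # every return is literally "one-box"
    if values and "one-box" not in values and None not in values:
        return False                # only other constants are returned
    return None                     # mixed, opaque, or no returns: unknown

def simple_agent():
    return "one-box"

print(always_one_boxes(simple_agent))   # True, without ever calling it
```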