When reading about Transparent Newcomb’s problem: Isn’t this perfectly general? Suppose Omega says: “I give everyone who subscribes to decision theory A $1000, and everyone who subscribes to any other decision theory nothing.” Clearly everyone who subscribes to decision theory A “wins”.
It seems that if one lives in a world with many such Omegas, and if subscribing to decision theory A (rather than to decision theory B) would otherwise cost at most, say, $100 per day between two successive Omega encounters, then one would come out ahead overall by subscribing (or self-modifying to subscribe) to A.
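To make the arithmetic concrete, here is a minimal sketch in Python. The $1000 payout and the at-most-$100-per-day cost come from the comment above; the function name and the sample encounter frequencies are just illustrative.

```python
# Minimal sketch of the trade-off described above: subscribing to A earns
# $1000 per Omega encounter but costs up to $100 per day in between.
# All names and the sample frequencies are hypothetical.

def net_gain_per_day(days_between_encounters: float,
                     omega_payout: float = 1000.0,
                     daily_cost_of_a: float = 100.0) -> float:
    """Average daily gain from subscribing to A instead of B,
    given one Omega encounter every days_between_encounters days."""
    return omega_payout / days_between_encounters - daily_cost_of_a

for days in (1, 5, 10, 20):
    print(f"one encounter every {days:2d} days: "
          f"net {net_gain_per_day(days):+8.2f} $/day")

# Break-even: subscribing to A wins whenever encounters come more often
# than once every omega_payout / daily_cost_of_a = 10 days.
```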
In other words, if subscribing to a particular decision theory changes your subjective experience of the world (I’m not sure what the proper terminology for this is), then which decision theory wins depends on the world you live in. There would simply be no “universal” winning decision theory.
A similar thing happens with counterfactual mugging: if you expect to encounter the coin-tossing Omega many more times, then you should give up your $100, and if not, then not.
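Under the comment’s own repeated-game framing, this trade-off can be sketched as follows. Note two assumptions: the $10,000 heads payout is the canonical figure from the standard statement of counterfactual mugging, not something given in the comment, and the framing is deliberately the comment’s causal one, where paying the current mugging costs $100 outright and the only benefit is that future Omegas reward a pay-on-tails disposition.

```python
# Minimal sketch of the repeated counterfactual-mugging reasoning above.
# You have just lost the coin toss and must decide whether to pay $100;
# the only benefit of paying is that future coin-tossing Omegas reward
# the pay-on-tails disposition when their coins land heads.
# The $10,000 heads payout is an assumed canonical figure.

def ev_of_paying_now(n_future: int,
                     p_heads: float = 0.5,
                     tails_cost: float = 100.0,
                     heads_payout: float = 10_000.0) -> float:
    """Expected value of paying the current mugging, given n_future
    expected future encounters with the same kind of Omega."""
    per_future = p_heads * heads_payout - (1 - p_heads) * tails_cost
    return -tails_cost + n_future * per_future

# Refusing yields $0 in this framing, so pay iff the EV is positive.
for n in (0, 1, 10):
    print(f"{n:2d} future encounters: EV of paying = {ev_of_paying_now(n):+10.2f}")
```

With the canonical payoffs, even one expected future encounter tips the balance toward paying; “many times” only becomes necessary when the heads payout is much closer to the tails cost.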
How many times in a row will you be mugged before you realize that Omega was lying to you?
Really, you probably need to start imagining Omega as a trustworthy process, e.g. a mathematical proof that tells you ‘X’. Thinking of it as a person seems to trip you up if you are constantly bringing up the possibility that it’s lying when it says ‘X’...
Omega is, by definition, always truthful.