0.51 × $1,000,000 + 0.49 × $1,001,000 = $1,000,490
dsoodak
I figured that if Omega is required to try its best to predict you, and you are permitted to do something physically random in your decision-making process, then it will probably be able to work out that I am going to choose just one box with slightly higher probability than choosing two. Therefore, it will gain the most status on average (it MUST be after status, since it obviously has no interest in money) by guessing that I will go with one box.
Didn’t realize anyone watched the older threads, so I wasn’t expecting such a fast response...
I’ve already heard about the version where “intelligent alien” is replaced with “psychic” or “predictor”, but not the “human is required to be deterministic” or quantum version (which I’m pretty sure would require the ability to measure the complete wavefunction of something without affecting it). I didn’t think of the “halting problem” objection, though I’m pretty sure it’s already expected to do things even more difficult than that in order to get such a good success rate with something as complicated as a human CNS (does it just passively observe the player for a few days preceding the event, or is it allowed to do a complete brain scan?).
I still think my solution will work in any realistic case (where the alien isn’t magical, and the setup doesn’t require your thought processes to be both deterministic and computable while placing no such limits on the alien itself).
What I find particularly interesting, however, is that such a troublesome example explicitly states that the agents have vastly unequal intelligence, while most examples seem to assume “perfectly rational” agents (which seems to be interpreted as being intelligent and rational enough that further increases in intelligence and rationality will make no difference). Are there any other examples where causal decision theory fails that don’t involve unequal agents? If not, I wonder if you could construct a proof that it DEPENDS on this as an axiom.
Has anyone tried adding “relative ability of one agent to predict another agent” as a parameter in decision theory examples? It seems like this might be applicable in the prisoner’s dilemma as well. For example, a simple tit-for-tat bot modified so that it doesn’t defect unless it has received two negative feedbacks in a row might do reasonably well against other bots, but would do badly against a human player as soon as they figured out how it worked.
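Here is a minimal sketch of that modified bot (a “tit-for-two-tats” rule; the function name and move encoding are just illustrative), along with the alternating-defection pattern a human could use to exploit it once they figured out the rule:

```python
# Sketch of the bot described above: it only defects after seeing the
# opponent defect twice in a row ("two negative feedbacks").
# 'C' = cooperate, 'D' = defect (encoding is an illustrative assumption).

def tit_for_two_tats(opponent_history):
    """Return the bot's next move given the opponent's past moves."""
    if opponent_history[-2:] == ['D', 'D']:
        return 'D'
    return 'C'

# A human who has figured out the rule can defect every other round and
# never trigger retaliation:
history = []
for exploiter_move in ['D', 'C'] * 5:
    print(tit_for_two_tats(history), exploiter_move)  # bot cooperates every round
    history.append(exploiter_move)
```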
As I understand it, most types of decision theory (including game theory) assume that all agents have about the same intelligence and that this intelligence is effectively infinite (or at least large enough so everyone has a complete understanding of the mathematical implications of the relevant utility functions).
In Newcomb’s problem, one of the players is explicitly defined as vastly more intelligent than the other.
In any situation where someone might be really good at predicting your thought processes, it’s best to add some randomness to your actions. Therefore, my strategy would be to use a quantum random number generator to choose just box B with 51% probability. I should be able to win an average of $1,000,490.
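A quick sketch of the arithmetic behind that figure, under the simplifying assumption (spelled out from the reasoning above, not part of the original problem statement) that Omega fills box B whenever one-boxing is the more likely choice:

```python
# Expected winnings for a player who takes only box B with probability p,
# assuming Omega predicts the more likely choice: box B holds $1,000,000
# if p > 0.5, otherwise nothing. Box A always holds $1,000.

def expected_winnings(p):
    box_b = 1_000_000 if p > 0.5 else 0
    one_box = box_b            # take only box B
    two_box = box_b + 1_000    # take both boxes
    return p * one_box + (1 - p) * two_box

print(expected_winnings(0.51))  # 1000490.0, the $1,000,490 average above
print(expected_winnings(1.00))  # 1000000.0: under this assumption, pure one-boxing does slightly worse
```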
If there isn’t a problem with this argument and if it hasn’t been thought of before, I’ll call it “variable intelligence decision theory” or maybe “practical decision theory”.
Dustin Soodak
I believe that what you have proven is that it will probably not help your career to investigate fringe phenomena. Unfortunately, science needs the occasional martyr who is willing to be completely irrational in their life path (unless you assign a really large value to having “he was right after all” written on your tombstone) while maintaining very strict rationality in their subject of interest. For example, the theory that “falling stars” were caused by rocks falling out of the sky was once considered laughable, since the idea had already been lumped together with ghosts, etc.