Didn’t realize anyone watched the older threads so wasn’t expecting such a fast response...
I’ve already heard about the version where “intelligent alien” is replaced with “psychic” or “predictor”, but not the “human is required to be deterministic” version or the quantum version (which I’m pretty sure would require the ability to measure something’s complete wavefunction without affecting it). I didn’t think of the “halting problem” objection, though I’m pretty sure the predictor is already expected to do things even more difficult than that in order to get such a good success rate with something as complicated as a human CNS (does it just passively observe the player for a few days preceding the event, or is it allowed to do a complete brain scan?).
I still think my solution will work in any realistic case (where the alien isn’t magical, and doesn’t require your thought processes to be both deterministic and computable while not placing any such limits on itself).
What I find particularly interesting, however, is that such a troublesome example explicitly states that the agents have vastly unequal intelligence, while most examples seem to assume “perfectly rational” agents (which seems to be interpreted as being intelligent and rational enough that further increases in intelligence and rationality make no difference). Are there any other examples where causal decision theory fails that don’t involve unequal agents? If not, I wonder if you could construct a proof that the failure DEPENDS on this inequality as an axiom.
Has anyone tried adding “relative ability of one agent to predict another agent” as a parameter in decision theory examples? It seems like this might be applicable in the prisoner’s dilemma as well. For example, a simple tit-for-tat bot modified so it doesn’t defect unless it has received two defections in a row might do reasonably well against other bots, but would do badly against a human player as soon as they figured out how it worked.
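To make that concrete, here’s a rough Python sketch of the scenario I have in mind. The payoff numbers are just the standard iterated prisoner’s dilemma values, and the “alternating exploiter” strategy (standing in for the human who has figured the bot out) is my own illustrative assumption, not something from any particular paper or library:

```python
# Sketch: a forgiving tit-for-tat variant ("defect only after two defections
# in a row") against an opponent who has figured out exactly how it works.
# Payoff values and strategy names are illustrative assumptions.

COOPERATE, DEFECT = "C", "D"

# Standard iterated prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT, COOPERATE):    (5, 0),
    (DEFECT, DEFECT):       (1, 1),
}

def tit_for_two_tats(my_history, their_history):
    """Defect only after two consecutive defections by the opponent."""
    if len(their_history) >= 2 and their_history[-2:] == [DEFECT, DEFECT]:
        return DEFECT
    return COOPERATE

def alternating_exploiter(my_history, their_history):
    """A player who knows the bot's rule: defect every other round, so the
    bot never sees two defections in a row and keeps cooperating."""
    return DEFECT if len(my_history) % 2 == 0 else COOPERATE

def play(strategy_a, strategy_b, rounds=20):
    """Run an iterated game and return the two total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Over 20 rounds this prints (30, 80): the exploiter milks the bot
    # indefinitely because its predictive model of the bot is perfect,
    # while the bot has no model of the exploiter at all.
    print(play(tit_for_two_tats, alternating_exploiter, rounds=20))
```

The point of the sketch is just that the outcome is driven entirely by the asymmetry in predictive ability: the same bot does fine against opponents who can’t (or don’t) model it, and gets systematically exploited by one who can.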