Didn’t we have a thread about this really recently?
Anyhow, to crib from the previous thread: an important point is reflective equilibrium. I shouldn’t be able to predict that I’ll do badly. If I know that, and the problem is “fair” in that it’s decision-determined, I can just make the other decision. Or if I’m doing things a particular way, and I know that another way of doing things would be better, and the problem is “fair” in that I can choose how to do things, I can just do things the better way. To sit and stew and lose anyway is just crazy talk.
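To make the “decision-determined” point concrete, here’s a minimal Python sketch (a toy model of mine, not taken from any of the linked threads): when a Newcomb-style predictor tracks the decision itself, the expected payoff is a function of the decision alone, so an agent who can foresee doing badly with one option just takes the other.

```python
def newcomb_payoff(one_box: bool, predictor_accuracy: float = 1.0) -> float:
    """Expected payoff in a toy Newcomb problem.

    The predictor fills the opaque box with $1,000,000 iff it predicts
    one-boxing; `predictor_accuracy` is the chance the prediction matches
    the actual decision.  With accuracy 1.0 the payoff depends only on the
    decision, i.e. the problem is decision-determined.
    """
    p_opaque_filled = predictor_accuracy if one_box else 1.0 - predictor_accuracy
    expected_opaque = 1_000_000 * p_opaque_filled
    transparent = 0 if one_box else 1_000  # two-boxers also take the visible $1,000
    return expected_opaque + transparent


# If I can compute in advance that one option does badly, I'm not stuck with it:
# I just take the other one.
for one_box in (True, False):
    print("one-box" if one_box else "two-box", newcomb_payoff(one_box))
# one-box 1000000.0
# two-box 1000.0
```

Dial `predictor_accuracy` below 1.0 and the payoff is no longer purely decision-determined, but the comparison (and the “just pick the better option” move) works the same way in this toy model.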
A totally different and very good example problem where this shows up was covered by Wei Dai here.
Offhand I don’t see anything in this thread that hasn’t been covered by those, but I may be missing relevant subtleties; I don’t find this debate especially interesting past the first few rounds.
Yeah, CarlSchulman put up a couple of threads on Newcomb a couple weeks ago, here and here. The original Newcomb’s Problem and Regret of Rationality thread has also been getting some traffic recently.