I thought you made some excellent points about how many of these ideas are in the philosophical memespace but just haven’t gained dominance.
In Newcomb’s Problem and Regret of Rationality, Eliezer’s argument is pretty much: “I can’t provide a fully satisfactory solution, so let’s just forget about the theoretical argument, which we could never be certain about anyway, and use common sense.” While I agree that this is a good principle, philosophers who discuss the problem generally aren’t trying to figure out what they’d do if they were actually in the situation, but to discover what the problem tells us about the principles of decision theory. The pragmatic solution wouldn’t meet this aim. Further, the pragmatic principle would suggest not paying in Counterfactual Mugging.
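To make the tension concrete, here’s a minimal sketch of the ex-ante arithmetic (the $100/$10,000 stakes are the conventional ones, assumed here for illustration): the policy of paying wins before the coin is flipped, even though paying looks like a pure loss at the moment you’re actually asked.

```python
# Ex-ante expected value of a policy in Counterfactual Mugging.
# Assumed stakes (the conventional ones): pay $100 on tails,
# receive $10,000 on heads iff Omega predicts you're a payer.

PAYMENT = 100
REWARD = 10_000
P_HEADS = 0.5  # fair coin

def expected_value(pays: bool) -> float:
    heads_payout = REWARD if pays else 0    # reward tracks the predicted policy
    tails_payout = -PAYMENT if pays else 0  # on tails, paying is a pure cost
    return P_HEADS * heads_payout + (1 - P_HEADS) * tails_payout

print(expected_value(pays=True))   # 4950.0: the paying policy wins ex ante
print(expected_value(pays=False))  # 0.0: yet once tails has landed, refusing
                                   # is the locally "pragmatic" move
```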
I guess I have a somewhat interesting perspective on this, given that I don’t find the standard LW answers to Newcomb’s or Counterfactual Mugging very satisfying, and I’ve proposed my own approaches, which haven’t gained much traction but which I consider to be far more satisfying. Should I take the outside view and assume that I’m way too overconfident about being correct (since I definitely have been in the past, and overconfidence is very common among people who propose theories in general)? Or should I take the inside view and downgrade my assessment of how good LW is as a community for philosophy discussion?
Also note that Eliezer’s “I haven’t written this out yet” was in 2008, and by 2021 I think we have some decent things written on FDT, like Cheating Death in Damascus and Functional Decision Theory: A New Theory of Instrumental Rationality.
You can see some responses here and here. I find them uncompelling.
I think there’s something like: LessWrong sometimes leans too hard towards pragmatism and jumps past things that deserve closer consideration.
To be fair, though, I think LessWrong does a better job than academic philosophy of being pragmatic enough to be useful for having an impact on the world. I just note that, as with anything, the balance sometimes goes too far: a desire to get on with things and say something actionable can crowd out careful consideration of things that deserve it.
I agree with this. I especially agree that LWers (on average) are too prone to do things like:
Hear Eliezer’s anti-zombie argument and conclude “oh good, there’s no longer anything confusing about the Hard Problem of Consciousness!”.
Hear about Tegmark’s Mathematical Universe Hypothesis and conclude “oh good, there’s no longer anything confusing about why there’s something rather than nothing!”.
On average, I think LWers are more likely to make important errors in the direction of ‘prematurely dismissing things that sound un-sciencey’ than to make important errors in the direction of ‘prematurely embracing un-sciencey things’.
But ‘tendency to dismiss things that sound un-sciencey’ isn’t exactly the dimension I want LW to change on, so I’m wary of optimizing LW in that direction; I’d much rather optimize it in more specific directions that are closer to the specific things I think are true and good.
The fact that so many rationalists have made that mistake is evidence against the claim that rationalists are superior philosophers.
Yep!
Then we have evidence against the claim, but none for it.
False!
Where’s the evidence for it?
In short, my position on Newcomb’s is as follows: perfect predictors require determinism, which means that, strictly speaking, there’s only one decision you can make. To talk about choosing between options requires us to construct a counterfactual to compare against. If we construct a counterfactual where you make a different choice, and we want it to be temporally consistent, then given determinism we have to edit the past. Consistency may force us to also edit Omega’s prediction, and hence the money in the box, but all of this is fine since it is a counterfactual. Proponents of CDT may deny the need for consistency, but then they’d have to justify ignoring changes in past brain state *despite* the presence of a perfect predictor which may have a way of reading that state.
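As a toy sketch of what the consistency requirement buys (the $1,000/$1,000,000 amounts are the conventional ones, assumed here): the single line tying the prediction to the choice is the consistency condition, and a CDT-style counterfactual would instead hold the box contents fixed while varying the choice.

```python
# Newcomb's problem with a perfect predictor. The assignment tying the
# prediction to the choice is the consistency requirement: varying the
# decision in the counterfactual forces the box contents to vary too.

BOX_A = 1_000       # transparent box, always contains $1,000
BOX_B = 1_000_000   # opaque box, filled iff Omega predicts one-boxing

def payoff(one_boxes: bool) -> int:
    predicted_one_boxing = one_boxes  # perfect prediction: no independent knob
    box_b = BOX_B if predicted_one_boxing else 0
    return box_b if one_boxes else box_b + BOX_A

print(payoff(one_boxes=True))   # 1000000
print(payoff(one_boxes=False))  # 1000
```

With consistency enforced, one-boxing wins; severing that line is exactly the move CDT would need to justify.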
As far as I’m concerned, the Counterfactual Prisoner’s Dilemma provides the most satisfying argument for taking the Counterfactual Mugging seriously.
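For anyone who hasn’t seen it, a rough sketch of the payoff structure (stakes assumed to match Counterfactual Mugging’s $100/$10,000): Omega flips a fair coin, asks you to pay $100 whichever way it lands, and rewards you with $10,000 if it predicts you would have paid had the coin landed the other way. Paying regardless nets $9,900 on either outcome; never paying nets $0, so here paying wins in the actual world, not just counterfactually.

```python
# Counterfactual Prisoner's Dilemma: a policy maps each coin outcome to
# whether you pay. Your reward on the actual outcome depends on what you
# would have done on the *other* outcome. Stakes assumed: $100 / $10,000.

PAYMENT, REWARD = 100, 10_000

def payoff(policy: dict, outcome: str) -> int:
    other = "tails" if outcome == "heads" else "heads"
    reward = REWARD if policy[other] else 0   # paid for counterfactual cooperation
    cost = PAYMENT if policy[outcome] else 0  # cost of actually paying now
    return reward - cost

always_pay = {"heads": True, "tails": True}
never_pay = {"heads": False, "tails": False}

for outcome in ("heads", "tails"):
    print(outcome, payoff(always_pay, outcome), payoff(never_pay, outcome))
    # heads 9900 0 / tails 9900 0: paying wins on either actual outcome
```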