I haven’t fully put together my thoughts on this, but it seems like a bad test to “break someone’s trust in a sane world” for a number of reasons:
this is a case where all the views are pretty much empirically indistinguishable, so it isn’t an area that physicists care about all that much
since the views are empirically indistinguishable, it is probably a low-stakes question, so the argument doesn’t transfer well to breaking our trust in a sane world in high-stakes cases; it makes sense to assume people would apply more rationality in cases where more rationality pays off
as I said in another comment, MWI seems like a case where physics expertise is not really what matters, so this doesn’t really show that the scientific method as applied by physicists is broken; at most it shows that physicists aren’t good at questions that are essentially philosophical; it would be much more persuasive if you showed that, e.g., quantum gravity was obviously better than string theory and only 18% of physicists working in the relevant area thought so
[Edited to add a missing “not”]
From my perspective, the main point is that if you’d expect AI elites to handle FAI competently, you would expect physics elites to handle MWI competently—the risk factors in the former case are even greater. Requires some philosophical reasoning? Check. Reality does not immediately call you out on being wrong? Check. The AI problem is harder than MWI and it has additional risk factors on top of that, like losing your chance at tenure if you decide that your research actually needs to slow down. Any elite incompetence beyond the demonstrated level in MWI doesn’t really matter much to me, since we’re already way under the ‘pass’ threshold for FAI.
I feel this doesn’t address the “low stakes” issues I brought up, or the point that this may not even be the physicists’ area of competence. Maybe you’d get a different outcome if the fate of the world depended on this issue, as you believe it does with AI.
I also wonder if this analysis leads to wrong historical predictions. E.g., why doesn’t this reasoning suggest that the US founders would totally botch the Constitution? That requires philosophical reasoning, and reality doesn’t immediately call you out on being wrong. And the people setting things up don’t have their incentives properly aligned. Setting up a decent system of government strikes me as more challenging than the MWI problem in many respects.
How much weight do you actually put on this line of argument? Would you change your mind about anything practical if you found out you were wrong about MWI?
What different evidence would you expect to observe in a world where amateur attempts to set up systems of government were usually botched?
(Edit: reworded for (hopefully) clarity.)
I have an overall sense that there are a lot of governments that are pretty good and that people are getting better at setting up governments over time. The question is very vague and hard to answer, so I am not going to attempt a detailed answer. Perhaps you could give it a shot if you’re interested.
You meant “is not really”?
Yes, thank you for catching that.