From my perspective, the main point is that if you’d expect AI elites to handle FAI competently, you would expect physics elites to handle MWI competently—the risk factors in the former case are even greater. Requires some philosophical reasoning? Check. Reality does not immediately call you out on being wrong? Check. The AI problem is harder than MWI and it has additional risk factors on top of that, like losing your chance at tenure if you decide that your research actually needs to slow down. Any elite incompetence beyond the demonstrated level in MWI doesn’t really matter much to me, since we’re already way under the ‘pass’ threshold for FAI.
I feel this doesn’t address the “low stakes” issues I brought up, or that this may not even be within the physicists’ area of competence. Maybe you’d get a different outcome if the fate of the world depended on this issue, as you believe it does with AI.
I also wonder if this analysis leads to wrong historical predictions. E.g., why doesn’t this reasoning suggest that the US government would totally botch the constitution? That requires philosophical reasoning, and reality doesn’t immediately call you out on being wrong. And the people setting things up don’t have properly aligned incentives. Setting up a decent system of government strikes me as more challenging than the MWI problem in many respects.
How much weight do you actually put on this line of argument? Would you change your mind about anything practical if you found out you were wrong about MWI?
What different evidence would you expect to observe in a world where amateur attempts to set up systems of government were usually botched?
(Edit: reworded for (hopefully) clarity.)
I have an overall sense that there are a lot of governments that are pretty good and that people are getting better at setting up governments over time. The question is very vague and hard to answer, so I am not going to attempt a detailed answer. Perhaps you could give it a shot if you’re interested.