The extraordinary intellectual caliber of the best physicists
That is of course exactly why I picked QM and MWI to make my case for nihil supernum. It wouldn’t serve to break a smart person’s trust in a sane world if I demonstrated the insanity of Muslim theologians or politicians; they would just say, “But surely we should still trust in elite physicists.” It is by demonstrating that trust in a sane world fails even at the strongest point which ‘elite common sense’ would expect to find, that I would hope to actually break someone’s emotional trust, and cause them to just give up.
I haven’t fully put together my thoughts on this, but it seems like a bad test to “break someone’s trust in a sane world” for a number of reasons:
this is a case where all the views are pretty much empirically indistinguishable, so it isn’t an area where physicists really care all that much
since the views are empirically indistinguishable, it is probably a low-stakes question, so the argument doesn’t transfer well to breaking our trust in a sane world in high-stakes cases; it makes sense to assume people would apply more rationality in cases where more rationality pays off
as I said in another comment, MWI seems like a case where physics expertise is not really what matters, so this doesn’t really show that the scientific method as applied by physicists is broken; at most it shows that physicists aren’t good at questions that are essentially philosophical; it would be much more persuasive if you showed that, e.g., quantum gravity was obviously better than string theory and only 18% of physicists working in the relevant area thought so
From my perspective, the main point is that if you’d expect AI elites to handle FAI competently, you would expect physics elites to handle MWI competently—the risk factors in the former case are even greater. Requires some philosophical reasoning? Check. Reality does not immediately call you out on being wrong? Check. The AI problem is harder than MWI and it has additional risk factors on top of that, like losing your chance at tenure if you decide that your research actually needs to slow down. Any elite incompetence beyond the demonstrated level in MWI doesn’t really matter much to me, since we’re already way under the ‘pass’ threshold for FAI.
I feel this doesn’t address the “low stakes” issues I brought up, or the possibility that this may not even be the physicists’ area of competence. Maybe you’d get a different outcome if the fate of the world depended on this issue, as you believe it does with AI.
I also wonder if this analysis leads to wrong historical predictions. E.g., why doesn’t this reasoning suggest that the US government would totally botch the constitution? That requires philosophical reasoning and reality doesn’t immediately call you out on being wrong. And the people setting things up don’t have incentives totally properly aligned. Setting up a decent system of government strikes me as more challenging than the MWI problem in many respects.
How much weight do you actually put on this line of argument? Would you change your mind about anything practical if you found out you were wrong about MWI?
What different evidence would you expect to observe in a world where amateur attempts to set up systems of government were usually botched?

I have an overall sense that there are a lot of governments that are pretty good and that people are getting better at setting up governments over time. The question is very vague and hard to answer, so I am not going to attempt a detailed answer. Perhaps you could give it a shot if you’re interested.
I agree that if it were true that the consensus of elite physicists believed that MWI is wrong when there was a decisive case in favor of it, that would be striking. But
There doesn’t seem to be a consensus among elite physicists that MWI is wrong.
Paging through your QM sequence, it doesn’t look as though you’ve systematically addressed all objections that otherwise credible people have raised against MWI. For example, have you been through all of the references that critique MWI cited in this paper? Given that most experts don’t view the matter as decided, and given the intellectual caliber of the experts, in order to have 99+% confidence in this setting, one has to cover all of one’s bases.
One will generally find that correct controversial ideas convince some physicists. There are many physicists who believe MWI (though they perhaps cannot get away with advocating it as rudely as I do), there are physicists signed up for cryonics, there were physicists advocating for Drexler’s molecular nanotechnology before it was cool, and I strongly expect that some physicists who read econblogs have by now started advocating market monetarism (if not I would update against MM). A good new idea should have some physicists in favor of it, and if not it is a warning sign. (Though the endorsement of some physicists is not a proof, obviously many bad ideas can convince a few physicists too.) If I could not convince any physicists of my views on FAI, that would be a grave warning sign indeed. (I’m pretty sure some such already exist.) But that a majority of physicists do not yet believe in MWI does not say very much one way or another.
The cognitive elites do exist and some of them are physicists, therefore you should be able to convince some physicists. But the cognitive elites don’t correspond to a majority of MIT professors or anything like that, so you shouldn’t be able to convince a majority of that particular group. A world which knew what its own elite truthfinders looked like would be a very different world from this one.
Ok, putting aside MWI, maybe our positions are significantly more similar than it initially seemed. I agree with
A world which knew what its own elite truthfinders looked like would be a very different world from this one.
I’ve taken your comments such as
Depends on how crazy the domain experts are being, in this mad world of ours.
to carry connotations of the type “the fraction of people who exhibit high epistemic rationality outside of their immediate areas of expertise is vanishingly small.”
I think that there are thousands of people worldwide who exhibit very high epistemic rationality in most domains that they think about. I think that most of these people are invisible owing to the paucity of elites online. I agree that epistemic standards are generally very poor, and that high status academics generally do poorly outside of their immediate areas of expertise.
I think that there are thousands of people worldwide who exhibit very high epistemic rationality in most domains that they think about. I think that most of these people are invisible owing to the paucity of elites online.
Where does this impression come from? Are they people you’ve encountered personally? If so, what gave you the impression that they exhibited “very high epistemic rationality in most domains that they think about”?
To clarify, when I wrote “very high epistemic rationality” I didn’t mean “very accurate beliefs,” but rather “aware of what they know and what they don’t.” I also see the qualifier “most” as significant — I think that any given person has some marked blind spots. Of course, the boundary that I’m using is fuzzy, but the standard that I have in mind is something like “at the level of the 15 most epistemically rational members of the LW community.”
“Thousands” is at the “1 in a million” level, so in relative terms, my claim is pretty weak. If one disputes the claim, one needs a story explaining how the fraction could be so small. It doesn’t suffice to say “I haven’t personally seen almost any such people,” because there are so many people who one hasn’t seen, and the relevant people may be in unexpected places.
I’ve had the subjective impression that ~ 2% of those who I know outside of the LW/EA spheres fit this description. To be sure, there’s a selection effect, but I’ve had essentially no exposure to physics, business, finance or public policy, nor to people in very populous countries such as India and China. The people who I know who fit this description don’t seem to think that they’re extremely rare, which suggests that their experiences are similar to my own (though I recognize that “it’s a small world,” i.e. these people’s social circles may overlap in nonobvious ways).
Some of the people who GiveWell has had conversations with come across very favorably in the notes. (I recognize that I’m making a jump from “their area of specialty” to “most topics that they think about.” Here I’m extrapolating from the people who I know who are very strong in their area of specialty.) I think that Carl’s comment about the Gates Foundation is relevant.
I updated in the direction of people being more rational than I had thought, for reasons that I gave at the end of my post Many weak arguments and the typical mind.
I don’t have high confidence here: maybe ~ 50% (i.e. I’m giving a median case estimate).
I should also clarify that I don’t think that one needs a silver bullet argument of the type “the people who you would expect to be most trustworthy have the wrong belief on something that they’ve thought about, with very high probability” to conclude with high confidence that epistemic standards are generally very low.
I think that there are many weak arguments that respected authorities are very often wrong.
Vladimir M has made arguments of the type “there’s fierce disagreement among experts at X about matters pertaining to X, so one knows that at least some of them are very wrong.” I think that string theory is a good case study. There are very smart people who strongly advocate for string theory as a promising road for theoretical physics research, and other very smart people who strongly believe that the research program is misguided. If nothing else, one can tell that a lot of the actors involved are very overconfident (even if one doesn’t know who they are).
There are very smart people who strongly advocate for string theory as a promising road for theoretical physics research, and other very smart people who strongly believe that the research program is misguided. If nothing else, one can tell that a lot of the actors involved are very overconfident (even if one doesn’t know who they are).
Or, alternatively, they disagree about who the research program is promising for.