Why does E. Yudkowsky voice such strong priors e.g. wrt. the laws of physics (many worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn’t make him so vulnerable? (With vulnerable I mean that his work often gets ripped apart as cultish pseudoscience.)
You seem to assume that MWI makes the Sequences more vulnerable; i.e. that there are people who feel okay with the rest of the Sequences, but MWI makes them dismiss it as pseudoscience.
I think there are other things that rub people the wrong way (that EY in general talks about some topics more than appropriate for his status, whether it’s about science, philosophy, politics, or religion) and MWI is merely the most convenient point of attack (at least among those people who don’t care about religion). Without MWI, something else would be “the most controversial topic which EY should not have added because it antagonizes people for no good reason”, and people would speculate about the dark reasons that made EY write about that.
For context, I will quote the part that Yvain quoted from the Sequences:
Everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on QM. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probably the majority view among theoretical physicists, if that counts for anything (though I will argue the matter separately from opinion polls). Still, it is not the only view that exists in the modern physics community. I do not feel obliged to present the other views right away, but I feel obliged to warn my readers that there are other views, which I will not be presenting during the initial stages of the introduction.
Everyone please make your own opinion about whether this is how cult leaders usually speak (because that seems to be the undertone of some comments in this thread).
Because he was building a tribe. (He’s done now.)
edit: This should actually worry people a lot more than it seems to.
Why?
Consider that if stuff someone says resonates with you, that someone is optimizing for that.
There are two quite different scenarios here.
In scenario 1, that someone knows me beforehand and optimizes what he says to influence me.
In scenario 2, that someone doesn’t know who will respond, but is optimizing his message to attract specific kinds of people.
The former scenario is a bit worrisome: it’s manipulation. But the latter looks fairly benign to me; how else would you attract people with a particular set of features? Of course the message is, in some sense, bait, but unless it’s poisoned that shouldn’t be a big problem.
I don’t know why scenario 2 should be any less worrisome. The distinction between “optimized for some perception/subset of you” and “optimized for someone like you” is completely meaningless.
Because of the degree of focus. It’s like the distinction between a black-hat scanning the entire ’net for vulnerabilities and a black-hat scanning specifically your system for vulnerabilities. Are the two equally worrisome?
Equally worrisome, conditional on me having the vulnerability the black-hat is trying to use. This is equivalent to the original warning being conditional on something resonating with you.
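(A minimal sketch of that conditional point, with made-up numbers that are not from this thread; it only illustrates why the two scans come out the same once you condition on actually having the vulnerability:)

    # Toy probabilities, invented purely for illustration.
    p_exploit_works = 0.9      # chance the exploit succeeds on a vulnerable host
    p_reached_mass = 1.0       # a whole-internet scan reaches your host anyway
    p_reached_targeted = 1.0   # a targeted attacker reaches you by definition

    def p_compromised(p_reached, vulnerable):
        # Probability of compromise for one host, given whether it is vulnerable.
        return p_reached * p_exploit_works if vulnerable else 0.0

    # Conditional on having the vulnerability, the two attackers look the same:
    print(p_compromised(p_reached_mass, vulnerable=True))      # 0.9
    print(p_compromised(p_reached_targeted, vulnerable=True))  # 0.9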
MIRI survives in part via donations from people who bought the party line on stuff like MWI.
Are you saying that based on having looked at the data? I think we should have a census with numbers on donations to MIRI and belief in MWI.
Really, you would want the MWI belief delta (relative to before they found LW) to measure “bought the party line.”
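(If such a census existed, the proposed delta could be computed along these lines; every field name and number below is hypothetical, included only to make the suggested measurement concrete:)

    # Hypothetical census rows; all fields and values are invented for illustration.
    rows = [
        {"mwi_before_lw": 0.2, "mwi_now": 0.7, "donated_to_miri": True},
        {"mwi_before_lw": 0.5, "mwi_now": 0.5, "donated_to_miri": False},
        {"mwi_before_lw": 0.1, "mwi_now": 0.8, "donated_to_miri": True},
    ]

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    def avg_delta(donated):
        # Average shift in MWI confidence after finding LW, split by donor status.
        return mean([r["mwi_now"] - r["mwi_before_lw"]
                     for r in rows if r["donated_to_miri"] == donated])

    print(avg_delta(True), avg_delta(False))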
I am not trying to emphasize MWI specifically, it’s the whole set of tribal markers together.
If there is a tribal marker, it’s not MWI per se; it’s choosing an interpretation of QM on grounds of explanatory parsimony. Eliezer clearly believed that MWI is the only interpretation of QM that qualifies on such grounds. However, that belief is simply misguided; it ignores several other formulations, e.g. relational quantum mechanics, the ensemble interpretation, and the transactional interpretation, that are also remarkable for their overall parsimony. Someone who advocated for one of these other approaches would be just as recognizable as a member of the rationalist ‘tribe’.
choosing an interpretation of QM on grounds of explanatory parsimony.
The OP contested the strength of the MW claim; explanatory parsimony doesn’t differentiate a strong claim from a weak one.
OP’s original claim:
Why does E. Yudkowsky voice such strong priors e.g. wrt. the laws of physics (many worlds interpretation), when much weaker priors seem sufficient for most of his beliefs (e.g. weak computationalism/computational monism) and wouldn’t make him so vulnerable? (With vulnerable I mean that his work often gets ripped apart as cultish pseudoscience.)
A fair point. Maybe I’m committing the typical mind fallacy and underestimating the general gullibility of people. If someone offers you something, it’s obvious to me that you should look for strings attached, consider the incentives of the giver, and ponder the consequences (including those concerning your mind). If you don’t understand why something is being given to you, it’s probably wise to delay grabbing the cheese (or not to touch it at all) until you understand.
And still, this all looks to me like a plain-vanilla example of bootstrapping an organization and creating a base of support, financial and otherwise, for it. Unless you think there were lies, misdirections, or particularly egregious sins of omission, that’s just how the world operates.
Also, anyone who succeeds in attracting people to an enterprise, be it by the most impeccable of means, will find the people they have assembled creating tribal markers anyway. The leader doesn’t have to give out funny hats. People will invent their own.
People do a lot of things. Have biases, for example.
There is quite a bit of our evolutionary legacy it would be wise to deemphasize. It’s not like there aren’t successful examples of people doing good work in common without becoming a tribe.
edit: I think what’s going on is that a lot of the rationalist tribe folks are on the spectrum and/or “nerdy”, and thus have a more difficult time forming communities, and LW etc. was a great way for them to get something important into their lives. They find it valuable, and rightly so. They don’t want to give it up.
I am sympathetic to this, but I think it would be wise to separate the community aspects from rationality itself as a “serious business.” Like, I am friends with lots of academics, but the academic part of our relationship has to be kept separate (I would rip into their papers in peer review, etc.). The guru/disciple dynamic, I think, is super unhealthy.
Because warning against dark-side rationality by means of dark-side rationality, in order to find light-side rationalists, doesn’t look good against the perennial c-word claims against LW...
I think LW is skewed toward believing in MWI because they’ve all read Yudkowsky. It really doesn’t seem likely that Yudkowsky gleaned that MWI was already popular and wrote about it just to pander to the tribe. In any case, I don’t really see why MWI would be a salient point for group identity.
That’s not what I am saying. People didn’t write the Nicene Creed to pander to Christians. (Sorry about the affective side effects of that comparison; that wasn’t my intention, it was just the first example that came to mind.)
MWI is perfect for group identity: it’s safely beyond falsification, and QM interpretations are an obscure enough topic that folks typically haven’t thought much about them. So you don’t get a lot of noise in the marker.
But I am not trying to make MWI into more than it is. I don’t think MWI is a centrally important idea; it’s mostly an illustration of what I think is going on (along with some other ideas).
Consequentialist ethic
My model of him has him having an attitude of “if I think that there’s a reason to be highly confident of X, then I’m not going to hide what’s true just for the sake of playing social games”.
Given the way the internet works, bloggers who don’t take strong stances don’t get traffic. If Yudkowsky hadn’t taken positions confidently, it’s likely that he wouldn’t have founded LW as we know it.
Shying away from strong positions for the sake of not wanting to be vulnerable is not a good strategy.
I don’t agree with this reasoning. Why not write clickbait then if the goal is to drive traffic?
I don’t think the goal is only to drive traffic; it’s also to have an impact on the person who reads the article. If you want a deeper look at the strategy, Nassim Taleb is quite explicit about the principle in Antifragile.
I don’t think that Eliezer’s public and private beliefs differ on the issues that RaelwayScot mentioned. A counterfactual world where Eliezer was less vocal about his beliefs wouldn’t have ended up with LW as we know it.
It’s a balancing act.
Actually, I can probably answer this without knowing exactly what you mean: the notion of improved Solomonoff Induction that gets him many-worlds seems like an important concept for his work with MIRI.
I don’t know where “his work often gets ripped apart” for that reason, but I suspect they’d object to the idea of improved/naturalized SI as well.
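(Background for readers who haven’t met the term: plain Solomonoff Induction weights hypotheses, i.e. programs p for a universal machine U, by their length. The formula below is roughly the standard textbook prior, given only as context; it is not a statement of the “improved/naturalized” variants referred to above:)

    M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-\lvert p \rvert}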
His work doesn’t get “ripped apart” because he doesn’t write for, or submit to, peer review.
inductive bias
The Hell do you mean by “computational monism” if you think it could be a “weaker prior”?