But fortunately, I have a very high level of epistemic trust for the rationalist community.
No! Not fortunately! Speaking from personal experience, succumbing to the delusion that there is any such thing as “the rationalist community” worthy of anyone’s trust has caused me an enormous amount of psychological damage, such that I’m still (still?!) not done recovering from the betrayal trauma more than three years later—and I’m probably not the only one.
(Uh, can’t say I wasn’t warned.)
(I thought I was done recovering as of (specifically) 13 September, but the fact that I still felt motivated to write a “boo ‘rationalists’” comment on Friday and then went into an anxiety spiral for the next 36 hours—and the fact that I’m drafting this comment in a paper notebook when I should be spending a relaxing network-free Sunday studying math—suggest that I’m still (somehow—still?!) not done grieving. I think I’m really close, though!)
There is no authority regulating who’s allowed to use the “rationalist” brand name. Trusting “the rationalist community” leaves you open to getting fucked over[1] by any bad idea that can successfully market itself to high-Openness compsci nerds living in Berkeley, California in the current year. The craft is not the community. The ideology is not the movement. Don’t revere the bearer of good info. Every cause—every cause—wants to be a cult. At this point, as a guard against my earlier mistakes, I’ve made a habit of using the pejorative “robot cult” to refer to the social cluster, reserving “rationalist” to describe the methodology set forth in the Sequences—and really, I should probably phase out “rationalist”, too. Plain rationality is already a fine word for cognitive algorithms that create and exploit map–territory correspondences—maybe it doesn’t have to be an -ism.
Real trust—trust that won’t predictably blow up in your face and take three and a half years to recover from—needs to be to something finer-grained than some amorphous self-recommending “community.” You need to model the specific competencies of specific people and institutions, and model their incentives to tell you the truth—or to fuck with you.
(Note: people don’t have to be consciously fucking with you in order for modeling them as fucking with you to be useful for compressing the length of the message needed to describe your observations. I can’t speak to what the algorithms of deception feel from the inside—just that the systematic production of maps that don’t reflect the territory for any reason, even mere “bias”, should be enough for you to mark them as hostile.)
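(A minimal sketch of that compression point, in Python with made-up numbers, purely as an assumption for illustration: a model that expects a source’s systematic skew assigns its reports higher probability, and therefore shorter Shannon code lengths, than a model that assumes honest reporting.)

```python
import math

# Toy illustration (made-up numbers, not from the comment): a "reporter"
# systematically overstates good news 80% of the time.
reports = ["good"] * 80 + ["bad"] * 20

honest_model = {"good": 0.5, "bad": 0.5}   # assumes unbiased reporting
skewed_model = {"good": 0.8, "bad": 0.2}   # models the systematic skew

def code_length_bits(seq, model):
    """Total Shannon code length of a sequence under a probabilistic model."""
    return sum(-math.log2(model[x]) for x in seq)

print(f"honest model: {code_length_bits(reports, honest_model):.1f} bits")  # 100.0
print(f"skewed model: {code_length_bits(reports, skewed_model):.1f} bits")  # ~72.2

# The model that treats the source as skewed describes the same observations in
# fewer bits; that is the sense in which modeling a source as biased compresses
# the message needed to describe what you see, regardless of the source's
# conscious intentions.
```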
COVID-19 is an unusually easy case, where people’s interests are, for the most part, aligned. People may use the virus as a prop in their existing political games, but at least no one is actually pro-virus. Under those circumstances, sure, trusting an amorphous cluster of smart people who read each other’s blogs can legitimately be a better bet than alternative aggregators of information. As soon as you step away from the unusually easy cases—watch your step!
If you learned a lot from the Sequences, I think that’s a good reason to trust what Eliezer Yudkowsky in particular says about AI in particular, even if you can’t immediately follow the argument. (There’s a prior that any given nonprofit claiming you should give it money in order to prevent the destruction of all value in the universe is going to just be a scam—but you see, the Sequences are very good.) That trust does not bleed over (except at a very heavy quantitative discount) to an alleged “community” of people who also trust Yudkowsky—and I don’t think it bleeds over to Yudkowsky’s off-the-cuff opinions on (let’s say, picking an arbitrary example) the relative merits of polyamory, unless you have some more specific reason to trust that he actually thought it through and actually made sure to get that specific question right, rather than happening to pick up that answer through random cultural diffusion from his robot cult. (Most people get most of their beliefs from random cultural diffusion; we can’t think fast enough to do otherwise.) Constant vigilance!
I (again) feel bad about cussing in a Less Wrong comment, but I want to be very emphatic here!
Eh, it’s pretty obvious that there is a thing that corresponds to “beliefs of the rationality community” or “broad consensus of the rationality community”, and also pretty obvious that those broadly get a lot of things more right than many other sources of ideas one could listen to. Of course, it might still be fine advice to try really hard to think through things for yourself, but like, calling the existence of such a thing (as something that one could even hypothetically assign trust to) a “delusion” just seems straightforwardly wrong.
While I agree that it’s a part of shared mapmaking that ‘exists’ (i.e. is a common referent people coordinate around), I do think that the process that determines what’s publicly considered “the beliefs of the rationality community” is fairly different from the actual consensus positions of those LessWrongers and MIRI/CFAR/LW staff (and others) who have shown themselves to be the most surprisingly correct thinkers, and it seems accurate for Zack to make the point that you’ll be subject to systematic error if you make the two things identical in your map of the world.
Oh, yeah, totally. I had understood Zack to make an ontological argument in the first paragraph that such an entity cannot coherently exist, or alternatively that “it is not deserving of anyone’s trust”, both of which seem like statements that are too strong to me, and I think neither corresponds to the thing you are saying here. The rest of the comment seems pretty good and I agree with most of it.
Trusting “the rationalist community” leaves you open to getting fucked over[1] by any bad idea that can successfully market itself to high-Openness compsci nerds living in Berkeley, California in the current year.

That seems to be true to the extent that you see “the rationalist community” as being the cluster of people in Berkeley. It’s my impression that it becomes less true when you speak about the more global community on LessWrong. The particular idea that you point to as having caused you huge damage doesn’t seem to have strong expressed support on LessWrong.