I think if you ask people a question like, “Are you planning on going off and doing something / believing in something crazy?”, they will, generally speaking, say “no” to that, and a “no” becomes more likely the closer your actual question is to that one, even if you didn’t word it exactly that way. My guess is that it was at least heavily implied that you meant “crazy” by the way you worded it.
To be clear, they might have said “yes” (that they will go and do the thing you think is crazy), but I doubt they will internally represent that thing, or their wanting to do it, as “crazy.” Thus the answer is probably going to be either “no” (a partial lie, where the “no” points at the “crazy” framing rather than the action), or “yes” (also a partial lie, pointing at taking the action).
In practice, people have a very hard time instantiating the status identifier “crazy” on themselves, and I don’t think that can be easily dismissed.
I think you heavily overestimate the utility of the word “crazy,” given that in many situations the people relevant to the conversation cannot use the word the same way. Words should mean the same thing to everyone in the conversation, and since some people are guaranteed to hear this word as hostile while others are not, its meaning is inherently asymmetrical.
I also think you’ve brought in too much risk of “throwing stones in a glass house” here. The LW memespace is, in my estimation, full of ideas besides Roko’s Basilisk that I would also consider “crazy” in the same sense that I believe you mean it: Wrong ideas which are also harmful and cause a lot of distress.
Pessimism, submitting to failure and defeat, high “p(doom)”, both MIRI and CFAR giving up (by considering the problems they wish to solve too inherently difficult, rather than concluding they must be wrong about something), and people being worried that they are “net negative” despite their best intentions, are all (IMO) pretty much the same type of “crazy” that you’re worried about.
Our major difference, I believe, is in why we think these wrong ideas persist, and what causes them to be generated in the first place. The ones I’ve mentioned don’t seem to be caused by individuals suddenly going nuts against the grain of their egregore.
I know this is a problem you’ve mentioned before and consider both important and unsolved, but I think it would be odd to hold both that it seems to be notably worse in the LW community and that it is only the result of individuals going crazy on their own (and thus to conclude that the community’s overall sanity can be reliably increased by ejecting those people).
By the way, I think “sanity” is the type of feature that is fairly “smooth under expectation,” by which I mean roughly that if p(person = insane) = 25%, that person should appear roughly 25% insane in most interactions. In other words, it’s not the kind of probability where they appear sane most of the time, but you suspect they might have gone nuts in some way that’s hard to see, or might be hiding it.
The flip side is that if they only appear to be, say, 10% crazy in most interactions, then you should lower your assessment of their insanity to roughly that much.
That said, I still don’t find this feature all that useful, but using it this way is still preferable to treating it as binary.
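To make the contrast concrete, here is a minimal toy sketch (my own illustration, with made-up observation rates, not anything from the comment above) of the two readings of “p(insane) = 25%”: a graded trait that shows up at roughly its expected level in every interaction, versus a hidden binary trait under which most interactions look normal either way.

```python
import random

# Toy illustration only: the rates below (0.9, 0.02) are assumed for the sake
# of the example, not claims about any real person or community.
random.seed(0)
N = 1000  # number of observed interactions

def smooth_model(p_insane=0.25, n=N):
    """'Smooth under expectation': a graded trait, so roughly p_insane of
    anyone's interactions look off, regardless of which person you drew."""
    return sum(random.random() < p_insane for _ in range(n)) / n

def binary_model(p_insane=0.25, n=N, rate_if_insane=0.9, rate_if_sane=0.02):
    """Hidden binary trait: with probability p_insane the person is fully
    insane (most interactions look off); otherwise almost none do."""
    rate = rate_if_insane if random.random() < p_insane else rate_if_sane
    return sum(random.random() < rate for _ in range(n)) / n

print(f"smooth reading: ~{smooth_model():.0%} of interactions look crazy")
print(f"binary reading: ~{binary_model():.0%} of interactions look crazy (all or nothing)")
```

Under the smooth reading, the observed fraction of off-seeming interactions tracks the probability directly, which is why observing roughly 10% licenses revising the estimate down to roughly 10%; under the binary reading, the observed fraction tells you much less.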
I also think you’ve brought in too much risk of “throwing stones in a glass house” here. The LW memespace is, in my estimation, full of ideas (...) that I would also consider “crazy”
That seems to me like an extra reason to keep “throwing stones”: to make clear the line between the kind of “crazy” that rationalists enjoy and the kind of “crazy” that is the opposite.
As insurance, just in the (hopefully unlikely) case that tomorrow Unreal goes on a shooting spree, I would like to have it in writing, before the fact, that it happened because of ideas the rationalist community disapproves of.
Otherwise, the first thing everyone will do is say: “see, another rationalist gone crazy”. And whatever objection we make afterwards will sound like “yeah, now that the person is bad PR, everyone says ‘comrades, this is not true rationalism, the true rationalism has never been tried’, but previously no one saw a problem with them”.
(I am exaggerating a lot, of course. Also, this is not a comment on Unreal specifically, just on the value of calling out “crazy” memes, despite being perceived as “crazy” ourselves.)