Interesting. Speaking as someone who finds Less Wrong a highly challenging and entertaining site but who isn’t exactly an ‘insider’, a few points:
First: ‘Let’s hope it’s the money woes, then. Or...hmm...maybe a vacuum to be met by someone who believes in the cause and also possesses mild wordcraft? What fancy!’
This is probably just lighthearted, but it’s worth noting because Less Wrong clearly does have that already. Several people here write well, and Eliezer writes very engagingly indeed, if in a slightly unpolished way. So if there’s a PR problem, it’s not a lack of talent with words.
Second: there is a bit of a (conscious? proud?) nerd bias. However, I think this probably exists for a complicated set of reasons and can’t simply be switched off.
1) Group identity. Most sites like this define themselves by idolising people they like or (more often) constantly mocking those they don’t. This one does it through a bit of self-reference, which is probably healthier.
2) Condition of order: the fact that this blog exists and doesn’t get political (or topical at all in a controversial way) is remarkable. The entropic tendency of the net is towards flame wars, and the active, intellectualised culture here may be needed to prevent that.
3) ‘Feeling like home’: this is a bit like (1) but is particularly interesting on this site. I’ve seen informal polls and anecdotes suggesting a lot of Asperger’s here, and a lot of generalised lack of social confidence. As such, this site might be one of the best ‘these people get me!’ social places for some members, which means they’re likely to emphasise their (perceived) common attributes.
As a ‘being honest even though it makes me sound like a dick’ aside which may be relevant to the PR of this group: I’ve seen various discussions on here of how to think/learn your way past social anxiety or lack of social skills in a systematic, deliberate way. My intellectual mind thinks ‘what a great idea, good for them’. But I am INTENSELY aware of my instinctive reaction of ‘weirdos! you can’t treat your social life like that! I wouldn’t want to be trapped in a lift with one of these people! AWKWARD!’ This is despite the fact that I’ve been to a couple of meetups and found people very interesting and engaging.
Finally, there’s an issue around whether the different parts of LW and SIAI can be peeled apart. As this interesting recent discussion thread notes, there are a lot of claims that newbies are presented with:
http://lesswrong.com/r/discussion/lw/73g/take_heed_for_it_is_a_trap/
Critically, these are not only weird, but some of them have very obvious explanations from an outside view. In particular, the core issues of AI and cryonics immediately suggest a God-replacement/millenarian attitude and a rationalisation to escape the fear of death, respectively.
Perhaps higher-profile refutations of these suspicions are in order.
The problem is that the suspicions don’t necessarily need to be refuted… only explained. A super-intelligent AI is a bit of a god to human eyes, or at least a demi-god. I’ve said before that half the point of SIAI is to make sure that we create a god we like, and I wasn’t really joking (I’m pretty sure I was quoting someone else as well). Likewise, I’m signed up for cryonics specifically because I don’t like death, and would prefer to escape it if possible.
So I couldn’t honestly refute either accusation, only admit to it and then either brush it off with “it’s my crazy thing, we all have our pet crazy thing, right?” if I don’t believe getting into the topic will be fruitful with that particular person, or explain how this is different from superstition and try to reduce the inferential gap.
Only if those high-profile refutations:
a) are quick,
b) don’t rely on specialist knowledge, and
c) seem honest.
I don’t know a huge amount about either issue (which is itself revealing, coming from an interested lurker and occasional participant here), but I think combining all three is tough.
You could try to make the refutations seem honest, but you need certain technical knowledge to really get them, and it’s contentious technical knowledge, in that most relevant scientists don’t buy Less Wrong’s take on either issue. So I might feel an argument is convincing, but then remember that I can find pro- or anti-global-warming arguments convincing whenever the person advancing them is far more informed on the scientific issues than I am. So this would fail totally on (a) and (b): I’d have to feel I could rely on my own knowledge over the experts who disagree with LW and SIAI, and I have other things to do with my time.
You can go for quick and easy, but the argument I’d expect here is ‘there’s so much to lose from evil AI that it counterbalances the low likelihood’ or ‘there’s so much to gain from immortality that it counterbalances the low likelihood’. And both of those simply feel like cheats to most people: it’s too much like Pascal’s Wager, and it feels like a trick you can play just by raising the stakes.
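(As an aside, the ‘raise the stakes’ feeling is easy to make concrete. Below is a toy expected-value sketch, with entirely made-up numbers for both the probability and the payoff, showing that however sceptical the listener is, the speaker can always inflate the stakes until the product comes out in their favour.)

    # Toy expected-value sketch with made-up numbers, only to illustrate the
    # "raise the stakes until it wins" move described above.

    def expected_value(probability, payoff):
        """Expected payoff of an outcome with the given probability."""
        return probability * payoff

    # A listener who thinks the scenario is very unlikely...
    p_scenario = 1e-6

    # ...can still be "out-argued" by making the stakes big enough.
    for payoff in (1e3, 1e6, 1e9, 1e12):
        ev = expected_value(p_scenario, payoff)
        print(f"payoff {payoff:>16,.0f} -> expected value {ev:>12,.2f}")

    # However small p_scenario is, some payoff makes the expected value large,
    # which is why the move reads as Pascal's Wager rather than as evidence
    # about the probability itself.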
Finally, you can address the root of the suspicions by convincing people that you don’t have the relevant tendencies: that you aren’t drawn to the idea of a greater mind, a father-substitute that can solve the world’s problems; that you don’t look ahead to a golden future age; or that you really are intensely relaxed about your own mortality. But I don’t know how you could do that. The last is particularly unbelievable for me.