Comment on markup: I saw the first version of your comment, where you were using “(*)” as a textual marker, and I see you’re now using “#” because the asterisks were messing with the markup. You should be able to get the “(*)” marker to work by putting a backslash before the asterisk (and I preferred the “(*)” indicator because that’s more easily recognized as a footnote-style marker).
Feels weird to post an entire paragraph just to nitpick someone’s markup, so here’s an actual comment!
From what I’ve read (several hundred assorted threads), I feel like an elephant in the room is the question of whether the reason that those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this is because you’re a part of these groups.

Let me try to rephrase this in a way that might be more testable/easier to think about. It sounds like the question here is what is causing the correlation between being a member of LW/SIAI and agreeing with LW/SIAI that future AI is one of the most important things to worry about. There are several possible causes:
1. group membership causes group agreement (agreement with the group)
2. group agreement causes group membership
3. group membership and group agreement have a common cause (or, more generally, there’s a network of causal factors that connect group membership with group agreement)
4. a mix of the above
And we want to know whether #1 is strong enough that we’re drifting towards a cult attractor or some other groupthink attractor.
I’m not instantly sure how to answer this, but I thought it might help to rephrase this more explicitly in terms of causal inference.
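To make this concrete, here’s a minimal simulation sketch in Python (all of the probabilities are made up for illustration; nothing here is an estimate of anything about LW/SIAI) showing why the raw correlation can’t distinguish these structures: each of causes 1–3 produces a positive membership–agreement association in observational data.

```python
import random

def simulate(structure, n=100_000):
    """Simulate (membership, agreement) pairs under one causal structure.

    All probabilities are made-up illustrative numbers.
    """
    pairs = []
    for _ in range(n):
        if structure == "membership_causes_agreement":
            member = random.random() < 0.5
            agree = random.random() < (0.8 if member else 0.2)
        elif structure == "agreement_causes_membership":
            agree = random.random() < 0.5
            member = random.random() < (0.8 if agree else 0.2)
        else:  # common cause, e.g. a prior interest in the topic
            interest = random.random() < 0.5
            member = random.random() < (0.8 if interest else 0.2)
            agree = random.random() < (0.8 if interest else 0.2)
        pairs.append((member, agree))
    # Association: P(agree | member) - P(agree | not member)
    members = [a for m, a in pairs if m]
    others = [a for m, a in pairs if not m]
    return sum(members) / len(members) - sum(others) / len(others)

for s in ("membership_causes_agreement",
          "agreement_causes_membership",
          "common_cause"):
    print(s, round(simulate(s), 3))  # all three come out clearly positive
```

Telling the structures apart would take something beyond the observed correlation, e.g. looking at whether people already held the belief before joining, or at what happens to the belief when people leave.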
I’m not sure that your rephrasing accurately captures what I was trying to get at. In particular, strictly speaking (*) doesn’t require that one be a part of a group, although being part of a group often plays a role in enabling (*).
Also, I’m interested not only in possible irrational causes for LW/SIAI members’ belief that future AI is one of the most important things to worry about, but also in possible irrational causes for each of:
(1) SIAI members’ belief that donating to SIAI in particular is the most leveraged way to reduce existential risks. Note that it’s possible to devote one’s life to a project without believing that it’s the best project for additional funding—see GiveWell’s blog posts on Room For More Funding.
For reference, PeerInfinity says:

A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks.
(2) The belief that refining the art of human rationality is very important.
On (2), I basically agree with Yvain’s post “Extreme Rationality: It’s Not That Great.”
My own take is that the Less Wrong community has been very enriching in some of its members’ lives on account of allowing them the opportunity to connect with people similar to themselves, and that their very positive feelings connected with their Less Wrong experience have led some of them to overrate the overall importance of Less Wrong’s stated mission. I can write more about this if there’s interest.
Thank you for clarifying. I don’t think I really have an opinion on this, but I figure it’s good to have someone bring it up as a potential issue.
I’m interested. I’ve been thinking about this issue myself for a bit, and something like an ‘internal review’ would greatly help in bringing any potential biases the community holds to light.