Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
And I’d hazard a guess that the SIAI representatives here know that. A lot of people benefit from knowing how to think and act more effectively, full stop, but a site about improving reasoning skills that’s also an appendage to the SIAI party line limits its own effectiveness, and therefore its usefulness as a way of sharpening reasoning about AI (and, more cynically, as a source of smart and rational recruits), by being exclusionary. We’re doing a fair-to-middling job in that respect; we could definitely be doing a better one, if the above is a fair description of the intended topic according to the people who actually call the shots around here. That’s fine, and it does deserve further discussion.
But the topic of rationality isn’t at all well served by flogging criticisms of the SIAI viewpoint that have nothing to do with rationality, especially when they’re brought up out of the context of an existing SIAI discussion. Doing so might diminish perceived or actual groupthink re: galactic civilizations and your money, but it still lowers the signal-to-noise ratio, for the simple reason that the appealing qualities of this site are utterly indifferent to the pros and cons of dedicating your money to the Friendly AI cause except insofar as it serves as a case study in rational charity. Granted, there are signaling effects that might counter or overwhelm its usefulness as a case study, but the impression I get from talking to outsiders is that those are far from the most obvious or destructive signaling problems that the community exhibits.
Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.
Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
Disagree on the “fewer” part. I’m not sure about SIAI, but I think at least my personal interests would not be better served by having fewer transhumanist posts. It might be a good idea to move such posts into a subforum, though. (I think supporting such subforums was discussed in the past, but I don’t remember whether it wasn’t done for lack of resources or because there’s some downside to the idea.)
Fair enough. It ultimately comes down to whether tickling transhumanists’ brains wins us more than we’d gain from appearing that much more approachable to non-transhumanist rationalists, and there are enough unquantified values in that equation to leave room for disagreement. In a world where a magazine as poppy and mainstream as TIME likes to publish articles on the Singularity, I could easily be wrong.
I stand by my statements when it comes to SIAI-specific values, though.
I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause
One of these things is not like the others. One of these things is not about the topic which historically could not be named. One of them is just a building block that can sometimes be useful when discussing reasoning that involves decision making.
My objection to that one is slightly different, yes. But I think it does derive from the same considerations of vast utility/disutility that drive the historically forbidden topic, and is subject to some of the same pitfalls (as well as some others less relevant here).
There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.
There are also a few specific torture scenarios which are much more closely linked to the historically forbidden topic, and which come up, however obliquely, with remarkable frequency.
Hmm...
Roko’s Basilisk
Boxed AI trying to extort you
The “People Are Jerks” failure mode of CEV
I can’t think of any other possible examples off the top of my head. Were these the ones you were thinking of?
Also Pascal’s mugging (though I suppose how closely related that is to the historically forbidden topic depends on where you place the emphasis) and a few rarer variations, but you’ve hit the main ones.
Less Wrong ought to be about reasoning, as per Common Interest of Many Causes. Like you (I presume), I would like to see more posts about reasoning and fewer, despite my transhumanist sympathies, about boxed AIs, hypothetical torture scenarios, and the optimality of donating to the Friendly AI cause: focusing our efforts that way is more interesting, more broadly appealing, and ultimately more effective for everyone involved including the SIAI.
(...)
Bottom line, I view the friendly AI topic as something between a historical quirk and a pet example among several of the higher-status people here, and I think you should too.
This should be a top-level post, if only to maximize the proportion of LessWrongers that will read it.
Upvoted for complete agreement, particularly:
Please do not downvote comments like the parent.