(I like the above and agree with most of it and am mulling and hope to be able to reply substantively, but in the meantime I wanted to highlight one little nitpick that might be more than a nitpick.)
I think this leaves out a thing which is an important part of most people’s values (mine included), which is that there’s something bad about people being hurt, and there’s something good about not hurting people, and that’s relevant to a lot of people (me included) separate from questions of how it impacts progress on AI alignment. Like, on the alignment forum, I get subordinating people’s pain/suffering/mistreatment to questions of mission progress (maybe), but I think that’s not true of a more general place like LessWrong.
Put another way, I think there might be a gap between the importance you reflectively assign to the Drama, and the importance many others reflectively assign to it. A genuine values difference.
I do think that on LessWrong, even people’s pain/suffering/mistreatment shouldn’t trump questions of truth and accuracy, though. Shouldn’t encourage us to abandon truth and accuracy.
Addendum to the quoted claim:

Maybe finding and fixing organizational problems will lead to marginally more researcher time/effort on alignment, or maybe the drama itself will lead to a net loss of researcher attention to alignment. But these are both mechanisms of going marginally faster or marginally slower along the direction we’re already pointed. In a high-dimensional world, that’s not the sort of thing which matters much.

… and it’s also not the sort of thing which matters much for reduction of overall pain/suffering/mistreatment, even within the community. (Though it may be the sort of thing which matters a lot for public perceptions of pain/suffering/mistreatment.) This is a basic tenet of EA: the causes which elicit great public drama are not highly correlated with the causes which have lots of low-hanging fruit for improvement. Even within the rationalist community, our hardcoded lizard-brain drama instincts remain basically similar, and so I expect the same heuristic to apply: public drama is not a good predictor of the best ways to reduce pain/suffering/mistreatment within the community.
But that’s a post-hoc explanation. My actual gut-level response to this comment was an aesthetic feeling of danger/mistrust/mild ickiness, like it’s a marker for some kind of outgroup membership. Like, this sounds like what (a somewhat less cartoonish and more intelligent version of) Captain America would say, and my brain automatically tags anything Captain-America-esque as very likely to be mistaken in a way that actively hijacks moral/social intuitions. That’s an aesthetic I actively cultivate, to catch exactly this sort of argument. I recommend it.
FWIW, I agree with this (to the extent that I’ve actually understood you). Like, I think this is compatible with the OP, and do not necessarily disagree with a heuristic of flagging Captain America statements. If 80% of them are bad, then the 20% that are good should indeed have to undergo scrutiny.
What is this “Captain America” business (in this context)? Would you mind explaining, for those of us who aren’t hip with the teen culture or what have you?
My guess is that it’s something like: Captain America makes bold claims with sharp boundaries that contain a lot of applause-light spirit, and tend to implicitly deny nuance. They are usually in the right direction, but “sidesy” and push people more toward being in disjoint armed camps.
Any chance of getting an example of such bold claims? (And, ideally, confirmation from johnswentworth that this is what’s meant?)
(I ask only because I really have no knowledge of the relevant comic books on which to base any kind of interpretation of this part of the discussion…)
I explain a bit more of what I mean here: http://seekingquestions.blogspot.com/2017/06/be-more-evil.html
(Disclaimer: that’s an old essay which isn’t great by my current standards, and certainly doesn’t make much attempt to justify the core model. I think it’s pointing to the right thing, though.)
http://www.ldssmile.com/wp-content/uploads/2014/09/3779149-no+you+move+cap+says.jpg
Hmm, I see.
But I am fairly sure that I endorse this sentiment. Or do you think there is a non-obvious interpretation where he’s wrong?
I endorse this one myself (have used it in an essay before). But it’s definitely … er, well, it emboldens people who are wrong (but unaware of it) just as much as it emboldens people who are right?
I dunno. I can’t pass John’s ITT here; just trying to help. =)
It also encourages nitpicking about details where people disagree, which means that if you have several people like this on the same team, the arguing probably never stops.
John’s linked article goes into it in detail:
http://seekingquestions.blogspot.com/2017/06/be-more-evil.html