But that doesn’t mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn’t actually a very good way to get them to change.
I haven’t said ‘bad person’ unless I’m missing something. I’ve said things like ‘doing net harm in your career’ or ‘making it worse’ or ‘not doing the right thing.’ I’m talking about actions, and when I say ‘right thing’ I mean it as shorthand for ‘that which moves things in the directions you’d like to see’ rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.
It’s a strange but consistent thing that people’s brains flip into assuming that anyone who thinks some actions are better than others is accusing those who don’t take the better actions of being bad people. Or even, as you say, ‘exceptionally bad’ people.
I mean, you haven’t called anyone a bad person, but “It’s Not The Incentives, It’s You” is a pretty damn accusatory thing to say, I’d argue. (Of course, I’m also aware that you weren’t the originator of that phrase—the author of the linked article was—but you at least endorse its use enough to repeat it in your own comments, so I think it’s worth pointing out.)
Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.
On two levels.
Level one is the idea that some level of endorsement of something means that I’m making the accusations in it. At some of the levels at which that happens in the wild, it’s clearly reasonable; at others, it’s clearly unreasonable.
Level two is that the OP doesn’t make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that people are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for ‘science’ or ‘the world’ but for the people taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact their actions will have on others, and sometimes not caring about such impacts.
That is importantly different from claiming that these are bad people.
Is it possible to say ‘your actions are bad and maybe you should stop’ or even ‘your actions are having these results and maybe you should stop’ without saying ‘you are bad and you should feel bad’?
I actually am asking, because I don’t know.
I’ve touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust that people have bought into, or you need to continuously invest effort in it. And yes, that sucks. It’s hugely inefficient. But I don’t actually see alternatives.
It sucks even more because it’s probably anti-inductive: as some phrases become commonly understood, they later become carrier waves for subtle barbs and political manipulations. (I’m not confident how common this is. I think a more prototypical example is “southern politeness,” as in “Oh, bless your heart.”)
So I don’t think there’s a permanent answer for public discourse. There’s just costly signaling via phrasing things carefully in a way that suggests you’re paying attention to your reader’s mental state (including their mental map of the current landscape of social moves people commonly pull) and writing things that expressly work to build trust given that mental state.
(Duncan’s more recent writing often seems to be making an effort at this. It doesn’t work universally, due to the unfortunate fact that not all one’s readers will have the same mental state. A disclaimer that reassures one person may alienate another.)
It seems… hypothetically possible for LessWrong to someday establish this sort of trust, but I think it actually requires hours and hours of doublecrux for each pair of people with different worldviews, and then that trust isn’t necessarily transitive to the next pair of people with different worldviews. (Worldviews which affect what even seem like reasonable meta-level norms within the paradigm of ‘we’re all here to truthseek.’ See tensions in truthseeking for some [possibly out of date] thoughts of mine on that.)
I’ve noted issues with Public Archipelago given current technologies, but it still seems like the best solution to me.
It seems pretty fucked up to take positive proposals at face value given that context.