My interpretation of the message in The Incredibles would be: “No one likes the whiners”. (Note: I haven’t actually seen the movies, so there may be a nuance I am missing here completely.)
Part of the reason is that in many situations whining is unproductive. For example, there may be a situation where no available choice is perfect, and any solution necessarily contains a trade-off, but some people waste everyone’s time by refusing to accept that, claiming the moral high ground without proposing their own solution (which would expose them to criticism). Or people may defend an obviously suboptimal choice by selectively applying the nirvana fallacy to all alternatives.
But another part is that we are instinctively wired to win social conflicts. If someone complains too much, it suggests they are too weak, and therefore a useless ally. You want to join someone who is frustrated today, but has a solid chance to prevail tomorrow; not someone who will predictably remain at the bottom. And there are all kinds of biases that can make your “elephant” perceive something as too much whining.
I occasionally find myself in situations where I feel I’m being asked to take a sort of Straussian stance — if you want to get important things done, you can’t be totally transparent about what you’re doing, because the general public will stop you. I’m not sure these people are wrong. But I really hope they are. I have a bad feeling about maintaining information asymmetries as a general policy.
Among rational and mutually friendly agents, hiding information would be bad. But most people are pretty far from being rational, and some people are pretty far from being friendly. Secrecy is a defense against an unknown but statistically real enemy.
If you say something in front of a sufficiently large audience, inevitably some people will disagree for the wrong reasons. Some of them are crazy, or just completely misinformed about the topic (in a way that cannot be fixed in the short term). Some of them see “disagreeing with you” as a way to get status points at your expense, even if they don’t truly disagree with you on the object level. Yes, it’s true that some people might disagree for the right reasons. But how would you solicit feedback from the latter without exposing yourself to the reaction from the former?
Secrecy is a defense against an unknown but statistically real enemy.
What’s the actual threat here? “People disagreeing with you” isn’t a threat in itself. The possible threats I am thinking of include:
being fired or otherwise embargoed for saying politically controversial things (solution: use basic PR filters to avoid pattern matching as an enemy; this is mostly about connotation rather than denotation)
death threats, such as those faced by Anita Sarkeesian (solution: don’t be that combination of politically controversial and well-known, or just ignore the threats since they’re very rarely carried out)
having criticism posted online that influences your supporters (solution: possibly respond to this criticism; if it’s vacuous and your supporters have decent judgment, this is not hard)
I’m just not seeing risks commensurate with the costs of actively avoiding creating common knowledge about your strategies/beliefs/etc.
Hmm. This just seems really optimistic to me about how reasonable people are, and how easy it is to avoid these classes of issues.
The main threat here is costs to your time, attention and emotional well being, which are some of your most valuable resources.
Use basic PR filters to avoid pattern matching as an enemy; this is mostly about connotation rather than denotation.
Sure. But, this is a skill you now need to learn, that you didn’t have to learn if you were just sidestepping the issue, and which you might not be very talented at.
Ignore death threats
Probably correct, but, again, a major emotional skill people don’t have. Most people find death threats incredibly stressful.
Respond to criticism that influences your supporters
Assumes you have time to do so. Every instance of criticism you need to address is time that you didn’t have to spend before but now you do. This adds up quickly the bigger your stage.
If your supporters have decent judgment
This is sidestepping several issues:
decent judgment is in fact not that common in the first place
critics who are not going out of their way to be truth-aligned can be optimizing for all kinds of things that are harder to defend against than to attack with. Two instances of this are the “if you have to spend time convincing people you’re not a child molester you’ve already lost” phenomenon, and the “if the argument for your position requires more inferential steps than your supporters have the attention to listen to, then you can’t actually address it” phenomenon.
the relevant class isn’t “supporters”, it’s “potential supporters”, and there are many instances where you don’t have the luxury of everyone in the “potential supporters” category being especially reasonable
It’s unfortunate that people don’t talk about the benefits of negative publicity more in this context. Attention can be metabolized into money or other resources. Negative attention too, because it will galvanize at least some positive attention, and because it gives you free publicity and sometimes people will straight-up pay for things they think are terrible. Look at what Jordan Peterson and Donald Trump have made out of being hated by so many.
You can’t do this if you’re depending on the average of public opinion for validation, though.
You can’t do this if you’re depending on the average of public opinion for validation, though.
I’m curious why you write this line after speaking about Donald Trump, who did have to win over something like an average of the population to vote for him.
Nassim Taleb writes about the benefits of negative publicity. His notion of antifragility is useful for thinking about when negative publicity is beneficial. Conceptually, I think that notion is more useful than asking yourself whether you depend on an average of public opinion.
When you want more of a how-to guide, there’s Ryan Holiday’s Trust Me, I’m Lying.
It seems to me like the main interesting thing Trump did was win the primary, and his tactics seemed designed to galvanize strong supporters, not win over the median Republican voter. I think the general election very closely followed party affiliation, which suggests that most voters just aren’t that sensitive to who their party’s candidate is and simply vote the party line.
But even under the median voter theorem you only need slightly more than 50% of voters to like you just a little more than the alternatives, and the intensity of opposition doesn’t matter much.
Even so, I agree Trump was not a straightforward example. Oops!
There were at the time plenty of people who predicted that even if Trump won the primary, he surely wouldn’t win the general election. The fact that he did seems obvious only in hindsight.
The situations I was most imagining (from Sarah’s original post, not necessarily from Jessicata’s comment) were actually more Dunbar-ish-number-sized – a workplace or local community, that is large enough to have multiple interest groups.
In that context… well, there’s still a benefit of negative publicity (I have sometimes written things with intent to be medium-controversial, so as to get more attention to an idea). But it comes embedded with more personal costs than when you’re engaging the wider world and “no such thing as bad press” is a bit more fraught a guideline.
I don’t think we have a major disagreement about how big the threat is. Mostly I get annoyed when people allude to a vague threat from being transparent instead of being specific about what the threat is, how big it is, and how to plan around it; you are being specific here, which is helpful. I think planning around it is usually worth it, because the benefits from sharing information (strategic and otherwise) outside your clique are very large, unless your clique is itself already a functional secret society, which, let’s be real, is generally not what is going on. Advances (scientific, strategic, etc) are generally made by networks of communicating people, not lone individuals, and generally not secret cliques either (see: The Inner Ring).
“People disagreeing with you” isn’t a threat in itself.
Depends on why they disagree. For example, some people just love to argue. If you say “X”, they are going to say “non-X” even if a minute before they had absolutely no opinion about it. It could be their idea of fun; it could be a status move. Some people have to inject themselves into everything, because it makes them feel important. Suddenly you are stuck talking to people who do not provide the truth-seeking value an honest opponent would.
Even if you ignore death threats, a stalker who follows you everywhere and keeps disagreeing with you publicly can be a waste of your energy. Crazy people can write an insane amount of content, because they can type without thinking and they have nothing better to do at the moment. Even if they don’t convince anyone, they can disrupt a meaningful debate, and make you seem bad by association with them.
I feel kind of lost. Your examples imply narrowly focused malign attention (in which case I would add “potential backstabbing” to the list :), Viliam seems to be talking about working with groups of people who are more or less neutral to the actual cause, and Sarah seems to be talking about working with people who have at least ostensibly agreed to move in a common direction. Won’t there be different threats in all these cases?
OTOH, how about “I will get you fired” promises? Less spooky than death threats, but much more manageable.
Yeah, I think what happened is that Sarah talked about people pushing towards secrecy for vague reasons, Viliam tried giving a more specific reason (people disagreeing with you), and I was like “wait wtf how are any of these responding to serious threats, here’s my best attempt at thinking about what could actually go wrong, is this what you’re worried about?”
I think there is a lot of irrational paranoia going on with pushes towards secrecy, perhaps a rationalization of a desire to create/maintain an inner ring. Very likely, if the people Sarah was talking to were more transparent, none of the things I listed would happen; some much-less-serious negative consequences might happen, and they would be relatively easy to deal with.
(note: Sarah did mention a concern about the general public stopping you if you were transparent, which implies neutral/negative attention)
I think there is a lot of irrational paranoia going on with pushes towards secrecy
I think the paranoia is basically entirely rational. Several people have listed a variety of threats ranging from (at one extreme) death threats, and much more commonly, mild social disapproval that just makes it harder to accomplish things.
This doesn’t mean there aren’t benefits to transparency. But I think the threats are generally well understood, and if you want (yourself, or others) to get the benefits of transparency you need to actually do a lot of social infrastructure work to alleviate those costs.
This is an important and worthwhile project, but even within the rationality community, “mild social disapproval that is demoralizing and makes it harder to accomplish things” is still a problem that needs to be actively addressed in order for transparency benefits to scale.