Geoffrey Miller explained in a talk about Virtue signaling and effective altruism (which I saw after writing this post) how things can go wrong when there is too much intelligence signaling:
With the exception of a nootropics arms race, I don’t think runaway IQ signalling looks like anything that was mentioned. Runaway IQ signalling might start off like that, but as time goes on, people’s views on all of the above will start to flip-flop as they try to distinguish themselves from their peers at a similar level of IQ. Leaning too hard on any of the above-mentioned signalling mechanisms exposes you to arguments against them, allowing someone to signal that they’re smarter than you. But if you over-correct, you become exposed to counterarguments, allowing someone else to signal that they’re smarter than you. I think EY captured the basic idea in this post on meta-contrarianism.
I think a better model might be that it looks a lot more like old-school internet arguments: several people trying hard to out-manoeuvre each other in a long series of comments in a thread, mailing list, or debate, each saying some version of “Ah, this is true, but you’ve forgotten to account for...” in order to prove that they are Right and everyone else is Wrong. Or mathematicians trying to prove difficult and well-recognized theorems, since those are solid benchmarks for demonstrating intelligence.
If you want to see what runaway intelligence signaling looks like, go to grad school in analytic philosophy. You will find amazingly creative counterexamples, papers full of symbolic logic, and speakers who get attacked with refutations from the audience mid-talk and then, sometimes, deftly parry the killing blow with a clever metaphor, taking the questioner down a peg...
It’s not too much of a stretch to see philosophers as IQ signaling athletes. Tennis has its ATP ladder, and everybody gets a rank. In philosophy it’s slightly less blatant, partly because even the task of scorekeeping in the IQ signaling game requires you to be very smart. Nonetheless, there is always a broad consensus about who the top players are and which departments employ them.
Unlike tennis players, though, philosophers play their game without a real audience, apart from themselves. The winners get comfortable jobs and some worldly esteem, but their main achievement is just winning. Some have huge impact inside the game, but because nobody else is watching, that impact is almost never transmitted to the world outside the game. They’re not using their intelligence to improve the world. They’re using their intelligence to demonstrate their intelligence.
Could you expand a bit on why you expect a trade-off between intelligence/virtue signalling, as opposed to two independent axes? I can sort of see a case where intelligence is the “cost” part of “costly virtue signalling”, and virtue is the “cost” part of “costly intelligence signalling”, like the examples in The Toxoplasma of Rage. On the other hand, looking at those examples of the dangers of runaway IQ signalling, they generally don’t seem to trade off against virtue.
Could you expand a bit on why you expect a trade-off between intelligence/virtue signalling, as opposed to two independent axes?
They are two independent axes, but when you’re at the Pareto frontier (which I think a lot of people are at), doing more of one requires doing less of the other. For virtue signaling in particular, to signal effectively you often have to parrot a very narrow party line or orthodoxy, which leaves very few degrees of freedom for intelligence signaling. For example, if there are errors in the party line or orthodoxy, you’d ordinarily get “intelligence points” for finding and pointing them out, but in a virtue-signaling environment you’d get shamed/censored/punished.
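To make the frontier picture concrete, here is a minimal toy sketch of that claim (the fixed budget and the one-for-one trade-off rate are my illustrative assumptions, not anything asserted above): below the budget you can increase both kinds of signal at once, but once you are on the frontier, more of one strictly means less of the other.

```python
# Toy sketch of the signaling Pareto frontier described above.
# "budget" is an illustrative assumption standing in for whatever caps
# total signaling effort (time, attention, social capital); the
# one-for-one trade-off rate is likewise made up for illustration.

def max_intelligence_signal(virtue_signal: float, budget: float = 10.0) -> float:
    """Intelligence signaling still available on the frontier,
    given how much of the budget already goes to virtue signaling."""
    if not 0.0 <= virtue_signal <= budget:
        raise ValueError("virtue_signal must lie within the budget")
    return budget - virtue_signal  # on the frontier the trade-off is strict

for v in (0.0, 5.0, 10.0):
    print(f"virtue={v:4.1f} -> max intelligence={max_intelligence_signal(v):4.1f}")
```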
What started this whole line of thought was this statement (linked to in the OP), which I saw someone quote in a completely serious way.
It seems like a lot of examples of virtue signalling require sacrificing intelligence, but sacrificing virtue seems like a less common requirement for signalling intelligence. So one possible model would be that, rather than a Pareto frontier on which the two trade off symmetrically, intelligent decisions are an input that is destructively consumed to produce virtue signals, like trees are consumed to produce paper.
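One way to spell out that asymmetry, as a rough sketch of the production model suggested above (the class, method names, and quantities are hypothetical): virtue signaling burns the stock of intelligent decisions, while intelligence signaling merely displays it.

```python
# Rough sketch of the asymmetric "destructive consumption" model above:
# virtue signals consume intelligent decisions as an input (trees into
# paper), while intelligence signaling displays the stock without
# consuming it. Names and numbers are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Signaler:
    intelligent_decisions: float  # stock available to display or burn
    virtue_signal: float = 0.0
    intelligence_signal: float = 0.0

    def signal_virtue(self, amount: float) -> None:
        """Producing a virtue signal destructively consumes intelligence."""
        spent = min(amount, self.intelligent_decisions)
        self.intelligent_decisions -= spent
        self.virtue_signal += spent

    def signal_intelligence(self, amount: float) -> None:
        """Intelligence signaling shows off the stock but leaves it intact."""
        self.intelligence_signal += min(amount, self.intelligent_decisions)

s = Signaler(intelligent_decisions=10.0)
s.signal_virtue(4.0)        # burns 4 units of the stock
s.signal_intelligence(6.0)  # displays what remains, without consuming it
print(s)
```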
Sometimes you can sacrifice a bit of virtue to signal intelligence. For example, when people talk in real life, interrupting other people may give you an opportunity to say something clever first. Or you can make a funny joke that shows how smart and quick you are, even if you know that this will derail the debate.
Then there is contrarianism for signalling’s sake. You disagree with people not because you truly believe they are wrong, but to show that they are unthinking sheep and you are the brave one who dares to oppose the popular opinion (even if you actually believe the popular opinion to be correct, and the thing you said is just an exercise in finding clever excuses for what is most likely the wrong answer). This can cause actual harm, when people convinced by your speech do the wrong thing instead of the right one.
Ah, I’d been looking for the link to that talk as I reference it often.
lol that trolley problem in the lower right is amazing.