The notion of specificity may be useful, but to me its presentation in terms of tone (beginning with the title “The Power to Demolish Bad Arguments”) and examples seemed rather antithetical to the Less Wrong philosophy of truth-seeking.
For instance, I read the “Uber exploits its drivers” example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart’s arguments apart, all the while insulting this fictitious person with asides like “By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.”
In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit—reversed stupidity is not intelligence, and hence even if we “demolish” our counterpart’s supposedly bad arguments, at best we discover that they could not shift our priors.
And more generally, the essay gave me a yucky sense of “rationalists try to prove their superiority by creating strawmen and then beating them in arguments”, sneer culture, etc. It doesn’t help that some of its central examples involve hot-button issues on which many readers will have strong and yet divergent opinions, which imo makes them rather unsuited as examples for teaching most rationality techniques or concepts.
Meta-level reply

Yeah, I take your point that the post’s tone and political-ish topic choice undermine the ability of readers to absorb its lessons about the power of specificity. This is a clear message I’ve gotten from many commenters, whether explicitly or implicitly. I shall edit the post.
Update: I’ve edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.
Object-level reply
In the meantime, I still think it’s worth pointing out where I think you are, in fact, analyzing the content wrong and not absorbing its lessons :)
For instance, I read the “Uber exploits its drivers” example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart’s arguments apart
My dialogue character has various positive-affect a-priori beliefs about Uber, but having an a-priori belief state isn’t the same thing as having an immutable bottom line. If Steve had put forth a coherent claim, and a shred of support for that claim, then the argument would have left me with a modified a-posteriori belief state.
In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit
My character is making a good-faith attempt at Double Crux. It’s just impossible for me to ascertain Steve’s claim-underlying crux until I first ascertain Steve’s claim.
even if we “demolish” our counterpart’s supposedly bad arguments, at best we discover that they could not shift our priors.
You seem to be objecting that selling “the power to demolish bad arguments” means that I’m selling a Fully General Counterargument, but I’m not. The way this dialogue goes isn’t representative of every possible dialogue where the power of specificity is applied. If Steve’s claim were coherent, then asking him to be specific would end up helping me change my own mind faster and demolish my own a-priori beliefs.
reversed stupidity is not intelligence
It doesn’t seem relevant to mention this. In the dialogue, there’s no instance of me creating or modifying my beliefs about Uber by reversing anything.
all the while insulting this fictitious person with asides like “By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.”
I’m making an example out of Steve because I want to teach the reader about an important and widely-applicable observation about so-called “intellectual discussions”: that participants often win over a crowd by making smart-sounding general assertions whose corresponding set of possible specific interpretations is the empty set.
I think you are on the right track.

The problem is, “specificity” has to be handled in a really specific way, and the intention has to be the desire to get from the realm of unclear arguments to clear insight.
If you see discussions as a chess game, you’re already sending your brain in the wrong direction, to the goal of “winning” the conversation, which is something fundamentally different than the goal of clarity.
Since specificity remains abstract here and is therefore misunderstood, one has to ask: what exactly is specificity supposed to be?
Linguistics would help here. The problem being negotiated grows out of a deficiency of language, namely that language is contaminated with ambiguities. Linguistically, things become specific when numbers and entities (names) come in.
With “Acme” there is already an entity—otherwise everything, even the so-called specific argument, remains highly abstract. Therefore, the specificity trick in the dialogues remains just that—a manipulative trick. And tricks don’t lead to clarity.
Specificity would be possible here only by injecting numbers: “How many dollars does Acme extract in surplus value per hour worked by their workers?”
After that, the exploitation would have been specifically quantified, and one could talk about whether Acme is brutally or only somewhat unjustly exploiting the workers’ bad situation, or whether the wages are fair.
The specific economics of Acme would, of course, be even more complicated, insofar as one would have to ask whether much of the added value is already being absorbed by overpaid senior executives.
At the end of any specific discussion, however, the participants must ask themselves what they want to be: fair or unfair? Those who want clarity about this have to answer it for themselves.
Then, briefly, on Uber: Uber is a bad business idea. It’s bad because Uber can only become profitable if it dominates its markets to the point that it faces no competition anymore. Its costs are too high. A simple service is burdened with huge overhead costs (this would have to be researched specifically, I know), and these overhead costs are then imposed partly on the service users when they are in desperate need, and partly on the service providers.
Even with Uber, you can debate the specific figures back and forth for a long time. In the end, users have to ask themselves: Do I want to use a business model that is so bad that it can only exist as a quasi-monopolist?
I don’t do that because I don’t want to.
If someone like Peter Thiel, for example, is such a bad businessman that he can only survive in non-competitive situations, then he might say: zero competition is my way of succeeding, since I can’t make money as soon as there is any competition. Fairness doesn’t matter to me.
However, specificity is healing. That’s right. When one talks, one can never talk specifically enough. Still, many ideological debates suffer not from too many abstract concepts but often from false specificities. Specificities, after all, are always popular for setting false frames. In the end, clarity is achieved only by those who really want clarity, not simply by those who want to win.