Is there already a concept handle for the notion of a Problem Where The Intuitive Solution Actually Makes It Worse But Makes You Want To Use Even More Dakka On It?
My most salient example is the way that political progressives in the Bay Area tried using restrictive zoning and rent control in order to prevent displacement… but this created a housing shortage and made the existing housing stock skyrocket in value… which led to displacement happening by other (often cruel and/or backhanded) methods… which led to progressives concluding that their rules weren’t restrictive enough.
Another example is that treating a chunk of the population with contempt makes a good number of people in that chunk become even more opposed to you, which makes you want to show even more contempt for them, etc. (Which is not to say their ideas are correct or even worthy of serious consideration—but the people are always worthy of respect.)
That sort of dynamic is how you can get an absolutely fucked-up self-reinforcing situation, an inadequate quasi-equilibrium that’s not even a Nash equilibrium, but exists because at least one party is completely wrong about its incentives.
(And before you get cynical, of course there are disingenuous people whose preferences are perfectly well served in that quasi-equilibrium. But most activists do care about the outcomes, and would change their actions if they were genuinely convinced the outcomes would be different.)
“The Human Condition”? ;-)
More seriously, though, do you have any examples that aren’t based on the instinct-to-punish(reality, facts, people,...) that I ranted about in Curse of the Counterfactual? If they all fall in this category, one could call it an Argument With Reality, which is Byron Katie’s term for it. (You could also call it, “The Principle of the Thing”, an older and more colloquial term for people privileging the idea of a thing over the substance of the thing, usually to an irrational extent.)
When people are having an Argument With Reality, they:
Go for approaches that impose costs on some target(s), in preference to ones that are of benefit to anyone
Refuse to acknowledge other points of view except for how they prove those holding them to be the Bad Wrong Enemies
Double down as long as reality refuses to conform or insufficient Punishment has occurred (defined as the Bad Wrong Enemies surrendering and submitting or at least showing sufficiently-costly signals to that effect)
A lot of public policy is driven this way; Wars on Abstract Nouns are always more popular than rehabilitation, prevention, and other benefit-oriented policies, which will be denigrated as being too Soft On Abstract Nouns. (This also applies, of course, to non-governmental public policies, with much the same incentives for anybody in the public eye to avoid becoming considered one of the Bad Wrong Enemies.)
In terms of naming / identifying this, do you think it would help to distinguish what makes you want to double down on the current solution? I can think of at least 3 reasons:
Not being aware that it’s making things worse
Knowing that it made things worse, but feeling like giving up on that tactic would make things get even worse instead of better
Being committed to the tactic more than to the outcome (what pjeby described as “The Principle of the Thing”) -- which could itself have multiple reasons, including emotionally-driven responses, duty-based reasoning, or explicitly believing that doubling down somehow leads to better outcomes in the long run.
Do these all fall within the phenomenon you’re trying to describe?
Thanks for drawing distinctions—I mean #1 only.