Second-best theories & Nash equilibria

A general frame I often find comes in handy while analysing systems: look for equilibria, figure out the key variables sustaining them (e.g., strategic complements, balancing selection, latency or asymmetric information in commons tragedies), and, well, that’s it. Those are the leverage points of the system. If you understand them, you’re in a much better position to evaluate whether a suggested change might work, is guaranteed to fail, or suffers from a lack of imagination.
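As a minimal sketch of "look for the equilibria first", here's a brute-force enumeration of the pure-strategy Nash equilibria of a two-player game. The payoff numbers are my own illustrative stag-hunt values, not anything from the post; the point is that the game has two equilibria, and a population can be stuck in the worse one (both hunting hare) even though no single player can profitably deviate from it:

```python
from itertools import product

# Stag hunt, with made-up payoffs. Rows/columns: 0 = Stag, 1 = Hare.
A = [[4, 0],   # row player's payoffs
     [3, 3]]
B = [[4, 3],   # column player's payoffs
     [0, 3]]

def pure_nash(A, B):
    """Return all pure-strategy profiles where neither player can gain
    by unilaterally deviating, i.e. each action is a best response."""
    eqs = []
    for i, j in product(range(len(A)), range(len(A[0]))):
        row_best = A[i][j] >= max(A[k][j] for k in range(len(A)))
        col_best = B[i][j] >= max(B[i][k] for k in range(len(A[0])))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

labels = ["Stag", "Hare"]
for i, j in pure_nash(A, B):
    print(f"equilibrium: ({labels[i]}, {labels[j]}) with payoffs {A[i][j]}, {B[i][j]}")
```

Running this finds both (Stag, Stag) and (Hare, Hare): the "key variable" sustaining the bad equilibrium is each player's expectation about the other, which is exactly the kind of leverage point the frame asks you to locate.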
Suggestions that fail to consider the relevant system variables are often what I call “second-best theories”, after the theory of the second best in welfare economics. Though they might be locally correct, they’re also blind to the broader implications or underappreciative of the full space of possibilities. The two canonical statements:
(A) If it is infeasible to remove a particular market distortion, introducing one or more additional market distortions in an interdependent market may partially counteract the first, and lead to a more efficient outcome.
(B) In an economy with some uncorrectable market failure in one sector, actions to correct market failures in another related sector with the intent of increasing economic efficiency may actually decrease overall economic efficiency.
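Statement (A) can be made concrete with a toy model (my own made-up parameter values, not from the post): a quasilinear consumer, two substitute goods with constant marginal cost, and a tax on good 1 that we stipulate cannot be removed. A grid search over the tax on good 2 shows that the welfare-maximising second tax is positive, not zero; the second distortion partially counteracts the first:

```python
import numpy as np

# Hypothetical linear demand system from quasilinear utility:
# U(x1, x2) = a*(x1 + x2) - (x1^2 + x2^2 + 2*g*x1*x2)/2, money as numeraire.
a, c, g = 12.0, 2.0, 0.5   # g > 0 means the goods are substitutes
b = a - c                   # net demand intercept

def quantities(t1, t2):
    """Consumer's choices when the price of good i is c + t_i
    (solving the two first-order conditions)."""
    x1 = ((b - t1) - g * (b - t2)) / (1 - g**2)
    x2 = ((b - t2) - g * (b - t1)) / (1 - g**2)
    return x1, x2

def welfare(t1, t2):
    """Total surplus: gross utility minus production cost.
    Tax revenue is a pure transfer, so it nets out."""
    x1, x2 = quantities(t1, t2)
    return b * (x1 + x2) - (x1**2 + x2**2 + 2 * g * x1 * x2) / 2

t1 = 2.0                                      # the distortion we can't remove
grid = np.linspace(0.0, 3.0, 3001)            # candidate taxes on good 2
best_t2 = grid[np.argmax([welfare(t1, t) for t in grid])]

print(f"welfare with no second tax: {welfare(t1, 0.0):.3f}")
print(f"best second tax t2 = {best_t2:.3f}, welfare: {welfare(t1, best_t2):.3f}")
```

In this model the search lands on a strictly positive second tax that raises welfare above the no-second-tax outcome, though still below the first best of removing both distortions. That's (A) in miniature, and also why such fixes are only "second best".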
Examples
The allele that causes sickle-cell anaemia is good because it confers resistance against malaria. (A)
Just cure malaria, and sickle-cell disease ceases to be a problem as well.
Sexual liberalism is bad because people need predictable rules to avoid getting hurt. (B)
Imo, allow people to figure out how to deal with the complexities of human relationships and you eventually remove the need for excessive rules as well.
We should encourage profit-maximising behaviour because the market efficiently balances prices according to demand. (A/B)
Everyone being motivated by altruism is better because market prices only correlate with actual human need insofar as wealth is equally distributed. The more inequality there is, the less you can rely on willingness-to-pay to signal urgency of need. Modern capitalism is far from the globally optimal equilibrium in market design.
If I have a limp in one leg, I should start limping with my other leg to balance it out. (A)
Maybe the immediate effect is that you’ll walk more efficiently on the margin, but don’t forget to focus on healing whatever’s causing you to limp in the first place.
Effective altruists seem to have a bias in favour of pursuing what’s intellectually interesting & high status over pursuing the boringly effective. Thus, we should apply an equal and opposite skepticism of high-status stuff and pay more attention to what might be boringly effective. (A)
Imo, rather than introducing another distortion in your motivational system, just try to figure out why you have that bias in the first place and solve it at its root. Don’t do the equivalent of limping on both your legs.
I might edit in more examples later if I can think of them, but I hope the above gets the point across.
I think this is closely related to the more colloquial concept of “necessary evils”. I always felt the term was a bit of a misnomer—we feel they are evils, I suspect, because their necessity is questionable. Actually necessary things aren’t assigned moral value, because that would be pointless. You can’t prescribe behaviour that is impossible (ought implies can, to paraphrase Kant).
As a recent example, someone argued that school bullying is a necessary evil because bullying in the adult world is inevitable and the schoolyard version is preparation. In that case it seems there was a sort of “all-or-nothing” fallacy, i.e., if we can’t eliminate it, we might as well not even mitigate it.
Yeah, a lot of “second-best theories” are due either to small-mindedness or to genuinely realistic expectations about what you can and cannot change. And a lot of inadequate equilibria stay stuck because of the repressive effect the Overton window has on people’s ability to imagine alternatives.