Extending the E-F-G thing: perhaps we could say “every cause/movement/organization wants to become a pile of defanged pica and ostentatious normalcy (think: Rowling’s Dursleys) that won’t be disruptive to anyone”, as a complementary/slightly-contrasting description to “every cause wants to be a cult”.
In the extreme, this “removing of all impulses that’ll interfere with predictability-and-control” is clearly not useful for anything. But in medium-sized amounts, I think predictability/controllability-via-compartmentalization can actually help with creating goods in the physical world, as with the surgeon or poker player or tennis player who has an easier time when they are not in touch with an intense desire for a particular outcome. And I think we sometimes see it in amounts large enough to be net-detrimental to the original goal of the person/cause/business/etc.
Maybe it’s something like:
- Being able to predict and control one’s own actions, or one’s organization’s actions, is in fact useful. You can use this to e.g. take several coordinated actions in sequence that will collectively but not individually move you toward a desired outcome, such as putting on your shoes in order to walk to the store in order to be able to buy pasta in order to be able to cook it for dinner. (I do not think one can do this kind of multi-step action nearly as well without prediction-and-control of one’s behavior.)
- Because it is useful, we build apparatuses that support it. (“Egos” within individual humans; structures of management and deferral and conformity within organizations and businesses and social movements.)
- Even though prediction-and-control is genuinely useful, a central planning entity doing prediction-and-control will tend to overestimate the usefulness of its having more prediction-and-control, and to underestimate the usefulness of aspects of behavior that it does not control. This is because it can see what it’s trying to do, and can’t see what other people are trying to do. Also, its actions are specifically those its own map says will help, and others’ actions are those their own maps say will help, which brings in winner’s-curse-type dynamics (see the toy simulation after these bullets). So central planning will tend to over-invest in increasing its own control, and to under-invest in allowing unpredictability/disruption/alternate pulls on behavior.
- … ? [I think the above three bullet points are probably a real thing that happens. But it doesn’t seem to take my imagination all the way to full-on moral mazes (for organizations), or to individuals who are full-on trying to prop up their ego at the expense of everything. Maybe it does and I’m underestimating it? Or maybe there are added steps after my third bullet point of some sort?]
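On the winner’s-curse point in the third bullet: here is a minimal toy simulation (my own sketch with made-up numbers, not anything from the OP or from Moral Mazes) of why an agent that acts on whichever options its own map rates highest will systematically find the territory disappointing relative to the map:

```python
import random

random.seed(0)
TRIALS = 10_000
N_ACTIONS = 20   # candidate actions the planner's map can rate (arbitrary)
NOISE = 1.0      # std-dev of the map's estimation error (arbitrary)

gap = 0.0
for _ in range(TRIALS):
    # Each candidate action has a true value the planner cannot see directly...
    true_values = [random.gauss(0, 1) for _ in range(N_ACTIONS)]
    # ...only a noisy map-estimate of that value.
    estimates = [v + random.gauss(0, NOISE) for v in true_values]
    # The planner acts on whatever its own map rates highest.
    chosen = max(range(N_ACTIONS), key=lambda i: estimates[i])
    gap += estimates[chosen] - true_values[chosen]

print(f"average map-minus-territory gap for chosen actions: {gap / TRIALS:.2f}")
# Clearly positive (roughly 1.3 with these settings): actions selected
# *because* the map favors them look better on the map than they turn
# out to be in the territory.
```

If “more control for me” is just one more noisily-rated option that the planner’s own map tends to favor, the same selection effect predicts systematic over-investment in control.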
[Epistemic status: I’m not confident of any of this; I just want better models and am trying to articulate mine in case that helps. Also, all of my comments on this post are as much a response to the book “Moral Mazes” as to the OP.]
Let’s say that A is good, and that B is also good. (E.g. equality and freedom, or diversity and families, or current lives saved and rationality, or any of a huge number of things.) Let’s consider how the desire-for-A and the desire-for-B might avoid having their plans/goal-achievement disrupted by one another.
In principle, you could build a larger model that explains how to trade off between A and B — a model that subsumes A and B as special cases of a more general good. And then the A-desire and the B-desire could peacefully co-exist and share influence within this larger structure, without disrupting each other’s ability to predict-and-control, or to achieve their goals. (And thereby, they could both stably remain part of your psyche. Or part of your organization. Or part of your subcultural movement. Or part of your overarching civilization’s sense of moral decency. Or whatever. Without one part of your civilization’s sense of moral decency (or etc.) straining to pitch another part of that same sense of moral decency overboard.)
Building a larger model subsuming both the A-is-good and B-is-good models is hard, though. It requires a bunch of knowledge/wisdom/culture to find a workable model of that sort. Especially if you want everybody to coordinate within the same larger model (so that the predict-and-control thing can keep working). A simpler thing you could attempt instead is to just ban desire B. Then desire-for-B won’t get in the way of your attempt to coordinate around achieving desire A. (Or, in more degenerate cases, it won’t get in the way of your attempt to coordinate around you-the-coordinator staying the coordinator, with all specific goals mostly forgotten about.) This “just abolish desire B” thing is much simpler to design. So this simpler strategy (“disown and dissociate from one of the good things”) can be reinvented even in ignorance, and can also be shared/evangelized pretty easily, without needing to share a whole culture. (A toy sketch of the contrast follows below.)
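To make that contrast concrete, here is a minimal sketch under assumptions that are entirely mine (a fixed effort budget, and a hypothetical “true good” with diminishing returns on both A and B), not a model from the OP:

```python
import math

def true_good(effort_a: float, effort_b: float) -> float:
    # Hypothetical combined good: diminishing returns on each of A and B,
    # so both genuinely matter.
    return math.sqrt(effort_a) + math.sqrt(effort_b)

BUDGET = 1.0  # fixed total effort to split between A and B

# The hard "larger model" move: actually model the trade-off and optimize
# over it. (A crude grid search stands in for the knowledge/wisdom/culture
# needed to find a workable joint model.)
best_a = max((i / 100 for i in range(101)),
             key=lambda a: true_good(a * BUDGET, (1 - a) * BUDGET))
joint_score = true_good(best_a * BUDGET, (1 - best_a) * BUDGET)

# The simple "ban desire B" move: spend everything on A. It needs no model
# of the trade-off at all.
ban_score = true_good(BUDGET, 0.0)

print(f"ban-B score:       {ban_score:.3f}")   # 1.000
print(f"joint-model score: {joint_score:.3f}")  # ~1.414, at a 50/50 split
```

The ban is strictly worse here whenever B really is good, but it requires zero shared culture to specify, which is one way of restating why it can be reinvented in ignorance and evangelized cheaply.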
Separately: once upon a time, there was a shared deep culture that gave all humans in a given tribe a whole bunch of shared assumptions about how everything fit together. In that context, it was easier to create/remember/invoke common scaffolds allowing A-desire and B-desire to work together without disrupting each other’s ability to do predictability-and-control. You did not have to build such scaffolds from scratch.
Printing presses and cities and travel/commerce/conversation between many different tribes, and individuals acquiring more tools for creating new thoughts/patterns/associations, and… social media… later made different people assume different things, or fewer things. It became extra-hard to create shared templates in which A-desire and B-desire can coordinate. And so we more often saw social movements / culture wars in which the teams (each of which has some memory of some fragment of what’s good) are bent on destroying one another, lest the other destroy their ability to do prediction-and-control in preservation of their own fragment of what’s good. “Humpty Dumpty sat on a wall…”
(Because the ability to do the simpler “dissociate from desire B, ban desire B” move does not break down as quickly, with increasing cultural diversity/fragmentation, as the ability to do the more difficult “assimilate A and B into a common larger good” move.)