Optimizing for anything is costly if you’re not counting the thing itself as a benefit.
Suppose I do count the thing itself (call it X) as a benefit. Given that I’m also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or “call out” someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line such as just making up data out of thin air). I think this probably underlies some people’s intuitions that calling people out for this is bad.
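To make that concrete, here is a minimal sketch in Python (the strategy names and all payoff numbers are invented for illustration): an agent scoring options on honesty (X) and career payoff (Y) with any weight at all on Y will generally stop short of the X-maximizing option, and only the X = 0 option crosses an obvious bright line.

```python
# Toy illustration (all names and numbers made up): research strategies
# trading off honesty (X) against career payoff (Y).
strategies = {
    "fully transparent": {"X": 1.0, "Y": 0.2},
    "spin the abstract": {"X": 0.8, "Y": 0.6},
    "p-hack a little":   {"X": 0.5, "Y": 0.8},
    "fabricate data":    {"X": 0.0, "Y": 1.0},  # the bright-line case
}

def best(weight_on_x: float) -> str:
    """Pick the strategy maximizing weight_on_x * X + (1 - weight_on_x) * Y."""
    return max(
        strategies,
        key=lambda s: weight_on_x * strategies[s]["X"]
        + (1 - weight_on_x) * strategies[s]["Y"],
    )

for w in (1.0, 0.5, 0.3, 0.1):
    print(f"weight on honesty {w:.1f} -> {best(w)}")
# Output: 1.0 -> fully transparent, 0.5 -> spin the abstract,
# 0.3 -> p-hack a little, 0.1 -> fabricate data.
# Everyone with weight < 1 leaves some X on the table; only the last
# strategy crosses the bright line of making up data out of thin air.
```

The point is only the shape of the trade-off: under any mixed objective, almost everyone is somewhere short of maximal X, which is what makes selective prosecution the live issue.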
Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.
What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the coordination cost that large companies are prepared to pay in order to gain the benefits of economies of scale.) Should we just give up on making use of such economies of scale?
Obviously the ideal outcome would be to invent or spread some better coordination technology that doesn’t produce Moral Mazes, but if it weren’t very hard to invent/spread, someone probably would have done it already.
If academia has become a moral maze, the same applies, except that the money was never good to begin with.
As someone who explicitly opted out of academia and became an independent researcher due to similar concerns (not about faking data per se, but about generally bad coordination in academia), I obviously endorse this for anyone for whom it’s a feasible option. But I’m not sure it’s actually feasible at scale.
I think these are (at least some of) the right questions to be asking.
The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in other places and times, and if so, why, and can we duplicate those causes?
Which I won’t answer here, because it’s a hard question, but my current best guess on question one is: it’s the natural endpoint if you don’t create a culture that explicitly opposes it. Any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better, unless you have a dramatic upheaval, which usually means starting over entirely. Also, the more the other large organizations around you are immoral mazes, the faster and harder those pressures will hit, and the more you need to push back to stave them off.
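A purely illustrative toy model of that guess (the functional form and every constant here are my assumptions, not anything from Moral Mazes): treat maze-ness as a number in [0, 1] that drifts upward by default, climbs faster when neighboring organizations are mazier, and is pulled down only in proportion to explicit cultural opposition.

```python
# Toy dynamics, assumptions mine: maze-ness m in [0, 1] drifts up by
# default, faster when surrounding orgs are mazier, and is pushed down
# only by explicit cultural opposition.
def step(m: float, neighbors: float, opposition: float,
         drift: float = 0.02) -> float:
    """One time step: baseline drift + peer pressure - cultural pushback."""
    dm = drift + 0.05 * neighbors * (1 - m) - opposition * m
    return min(1.0, max(0.0, m + dm))

for label, opposition in [("no pushback", 0.0), ("explicit opposition", 0.08)]:
    m = 0.1
    for _ in range(100):
        m = step(m, neighbors=0.7, opposition=opposition)
    print(f"{label}: maze-ness after 100 steps = {m:.2f}")
# Without pushback m saturates at 1.0; with explicit opposition it
# stabilizes around 0.48 under these made-up parameters.
```

The sketch only captures the qualitative claim: the default trajectory is monotone toward full maze-ness, and only an explicit opposing force produces a stable resting point short of it.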
My best guess on question two is: quite a lot. At least right here, right now, any sufficiently large organization, be it a corporation, a government, a club or party, you name it, is going to end up with these dynamics by default. That means we should do our best to avoid working for or with such organizations, for our own sanity and health, and count these dynamics as a high cost of such organizations existing and of letting them be in charge of things. That doesn’t mean we can give up on major corporations or national governments; we don’t currently have better options. But I do think there are cases where an organization with large economies of scale would be net positive absent these dynamics but is net negative with them, so these dynamics should push us (and do push us!) toward relying less on economies of scale. And that this is worthwhile.
As for whether exit from academia is feasible at scale (in terms of who would do the research without academia), I’m not sure, but it is feasible on the margin for a large percentage of those involved (as opposed to exit from big business, which at least pays its people literal rent in dollars, at the cost of anticipated experiences). It’s also not clear that academia as it currently exists is feasible at its current scale. I’m not close enough to it to be the one making such claims.
“Selling out” has been in the well-known concept space for a long long time—it’s not a particularly recent phenomenon to have to make choices where the moral/prosocial option is not the materially-rewarded one. It probably _IS_ recent that any group or endeavor can be expected to have large impact over much of humanity.
Do we have any examples of groups that both behave well AND get significant things done?
One idea on the subject of government is “eventually it will fail/fall. This has happened a lot throughout history, and it will happen someday to this country. Things may keep getting big/inefficient, but the system keeps chugging along until it dies.”
One alternative would be to start a group/country/etc. with an explicit end date, or with an end date for some aspect of it. (Reviewing all laws on the books to see if they should stick around would be a big deal, as would implementing laws with end dates, or only passing laws with end dates. Some consider this approach to have failed in the past, though, as the history of emergency powers demonstrates.)
This comment feels like it correctly summarizes a lot of my thinking on this topic, and I would feel excited about a top-level post version of it.
Same.