In a world where the norm is explicitly not to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data… is nice, but isn’t actually going to help unless it is part of a broader, more concerted strategy.
I think this claim is a hugely important error.
One scientist unilaterally deciding to stop faking data isn’t going to magically make the whole world come around. But the idea that it doesn’t help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn’t make it worse?
I don’t understand how one can think that.
That’s not unique to the example of faking data. That’s true of anything (at least partially) observable that you’d like to change.
One can argue that coordinated action would be more efficient, and I’d agree. One can argue that in context, it’s not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that’s better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don’t pretend it doesn’t matter.
Similarly, I find it odd that one uses the idea that ‘doing the right thing is not free’ as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you’re not counting the thing itself as a benefit.
But the whole point of some things being right is that you do them even though it’s not free, because It’s Not The Incentives, It’s You. You’re making a choice.
Ideally we’d design a system where one not only cultivated the virtue of doing the right thing, and was rewarded for doing that, one would also be rewarded in expectation for doing the right thing as often as possible. Doing the right thing is, in fact, a prime way of moving towards that.
Again, sometimes the cost of doing the otherwise ‘right thing’ gets too high. Especially if you can’t coordinate on it. There are trade-offs. One can’t do every good thing or never compromise.
But if there is one takeaway from Moral Mazes that everyone should have, it’s a really, really simple one:
Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.
If academia has become a moral maze, the same applies, except that the money was never good to begin with.
One can argue that coordinated action would be more efficient, and I’d agree. One can argue that in context, it’s not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that’s better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don’t pretend it doesn’t matter.
This reads as enormously uncharitable to Raemon, and I don’t actually know where you’re getting it from. As far as I can tell, not a single person in this conversation has made the claim that it “doesn’t matter”—and for good reason: such a claim would be ridiculous. That you seem willing to accuse someone else in the conversation of making such a claim (or “pretending” it, which is just as bad) doesn’t say good things about the level of conversation.
What has been claimed is that “doing the thing that reinforces good norms” is ineffective, i.e. it doesn’t actually reinforce the good norms. The claim is that without a coordinated effort, changes in behavior on an individual level have almost no effect on the behavior of the field as a whole. If this claim is true (and even if it’s false, it’s not obviously false), then there’s no point hoping to see knock-on effects from such a change—and that in turn means all that’s left is the cost-benefit calculation: is the amount of good that I would do by publishing a paper with non-fabricated data (even if I did, how would people know to pay attention to my paper and not all the other papers out there that totally did use fabricated data?), worth the time/effort/willpower it would take me to do so?
As you say: it is indeed a trade-off. Now, you might argue (perhaps rightly so!) that one individual’s personal time/effort/willpower is nowhere near as important as the effects of their decision whether to fabricate data. That they ought to be willing to expend their own blood, sweat, and tears to Do The Right Thing—at least, if they consider themselves a moral person. And in fact, you made just such an argument in your comment:
Similarly, I find it odd that one uses the idea that ‘doing the right thing is not free’ as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you’re not counting the thing itself as a benefit.
But the whole point of some things being right is that you do them even though it’s not free, because It’s Not The Incentives, It’s You. You’re making a choice.
But this ignores the fact that every decision has an opportunity cost: if I spend vast amounts of time and effort designing and conducting a rigorous study, pre-registering my plan, controlling for all possible confounders (and then possibly getting a negative result and needing to go back to the drawing board, all while my colleague Joe Schmoe across the hall fabricates his way into Nature), this will naturally make me more tired than I would be otherwise. Perhaps it will cause me to have less patience than I normally do, become more easily frustrated at events outside of my control, be less willing to tolerate inconveniences in other areas of my life, etc. If, for example, I believed eating meat was morally wrong, I might nonetheless find it more difficult to deliberately deprive myself of meat if I was already spending a great deal of willpower every day on seeing this study through. And if I expect that to be the case, then I have to ask myself which thing I ought to prioritize: not eating meat, or doing the study properly?
This is the (somewhat derisively named) “goodness budget” Benquo mentioned upthread. But another name for it might be Moral Slack. It’s the limited amount of room we have to be less than maximally good in our lives, without being socially punished for it. It’s the privilege we’re granted, to not have to constantly ask ourselves “Should I be doing this? Am I being a bad person for doing this?” It’s—look, you wrote half the posts I just linked to. You know the concept. I don’t know why you’re not applying it here, but it seems pretty obvious to me that it applies just as well here as it does in any other aspect of life.
To be clear: you know that falsifying data is a Very Bad Thing. I know that falsifying data is a Very Bad Thing. Raemon knows that falsifying data is a Very Bad Thing. We all know that falsifying data is bad. But if that’s the way the incentives point (and that’s a very important if!), then it’s also bad to call people out for doing it. If you do that, then you’re using moral indignation as a weapon—a way to not only coerce other people into using up their willpower, but to come out of it looking good yourself.
People who manage to resist the incentives—who ignore the various siren calls they constantly hear—are worthy of extremely high praise. They are exceptionally good people—by definition, in fact, because if they weren’t exceptional, everyone else would be doing it, too. By all means, praise those people as much as you want. But that doesn’t mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn’t actually a very good way to get them to change. “It’s Not The Incentives, It’s You” puts the emphasis in the wrong place, and it degrades communication with people who might have been reachable with a more nuanced take.
We all know that falsifying data is bad. But if that’s the way the incentives point (and that’s a very important if!), then it’s also bad to call people out for doing it.
(We all agree with that first sentence, everyone here knows these things are bad, that’s just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)
I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I’m happy to know that this is not a straw-man, that this is not going to get the Motte and Bailey treatment.
I’m still worried that such treatment will mostly occur...
There is a position, which seems to be increasingly held and openly advocated for, that if someone does something according to their local, personal, short-term amoral incentives, then this is, if not automatically praiseworthy (although I believe I have frequently seen this too, increasingly explicitly, but not here or by anyone in this discussion), at least immune from blame, no matter the magnitude of that incentive. One cannot ‘call them out’ on such action, even if such calling out has no tangible consequences.
I’m too boggled, and too confused about how one gets there in good faith, to figure out how to usefully argue against such positions in a way that might convince people who sincerely disagree. So instead, I’m simply going to ask the question: are there any others here who would endorse the quoted statement as written? Are there people who endorse the position in the above paragraph, as written? With or without an explanation as to why. Either, or both. If so, please confirm this.
When debriefing / investigating a bad outcome, it’s better for participants to expect not to be labeled as “bad people” (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.
More social pressure against publicly admitting that one is contributing to the problem leads to systematic hiding/obfuscation of information about why people are making those choices (e.g. incentives). And we need all that information to be out in the clear (or at least available to investigators who are committed & empowered to solve the systemic issues), if we are going to have any chance of making lasting changes.
In general, I’m curious what Zvi and Ben think about the interaction between “I expect people to yell at me if I say I’m doing this” and promoting/enabling “honest accounting”.
If one were to be above average but imperfect (e.g. not falsifying data or p-hacking, but still publishing in paid-access journals), then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive, but if they don’t consider themselves able to afford the penalty of being perfect, then they leave and the field suffers.
I’m not sure I endorse the specific example there but in a personal example:
My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.
I do spend more time on these targets than would be optimal, but I think I do this considerably less than is typical. I still overfocus on targets, since I’ve been told in appraisals to do so.
If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.
I read your steelman as importantly different from the quoted section.
It uses the weak claim that such action ‘could be bad’ rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.
It changes the standard of behavior from ‘any behavior that responds to local incentives is automatically all right’ to ‘behaviors that are above average and net helpful, but imperfect.’
This is an example of the kind of equivalence/transformation/Motte and Bailey I’ve observed, and am attempting to highlight. Not that you’re doing it; you’re not, because this is explicitly a steelman. But I’ve seen it elsewhere: the claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally personally incentivized to do (which in this case is meeting explicit targets and avoiding things you would be blamed for), no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).
That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.
I might try and write up a reply of my own (to Zvi’s comment), but right now I’m fairly pressed for time and emotional energy, so until/unless that happens, I’m going to go ahead and endorse this response as closest to the one I would have given.
EDIT: I will note that this bit is (on my view) extremely important:
If one were to be above average but imperfect (emphasis mine)
“Above average” is, of course, a comparative term. If e.g. 95% of my colleagues in a particular field regularly submit papers with bad data, then even if I do the same, I am no worse from a moral perspective than the supermajority of the people I work with. (I’m not claiming that this is actually the case in academia, to be clear.) And if it’s true that I’m only doing what everyone else does, then it makes no sense to call me out, especially if your “call-out” is guilt-based; after all, the kinds of people most likely to respond to guilt trips are likely to be exactly the people who are doing better than average, meaning that the primary targets of your moral attack are precisely the ones who deserve it the least.
(An interesting analogy can be made here regarding speeding—most people drive 10-15 miles over the official speed limit on freeways, at least in the US. Every once in a while, somebody gets pulled over for speeding, while all the other drivers—all of whom are driving at similarly high speeds—get by unscathed. I don’t think it’s particularly controversial to claim that (a) the driver who got pulled over is usually more annoyed at being singled out than they are repentant, and (b) this kind of “intervention” has pretty much zero impact on driving behavior as a whole.)
Is your prediction that, if it were common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes, in addition to being over the official speed limit, average driving speeds would remain essentially unchanged?
As it happens, the case of speeding also came up in the comments on the OP. Yarkoni writes:
[...] I think the point I’m making actually works well for speeding too: when you get pulled over by a police officer for going 10 over the limit, nobody is going to take you seriously if your objection to the ticket is “but I’m incentivized to go 10 over, because I can get home a little faster, and hardly anyone ever gets pulled over at that speed!” The way we all think about speeding tickets is that, sure, there may be reasons we choose to break the law, but it’s still our informed decision to do so. We don’t try shirk the responsibility for speeding by pretending that we’re helpless in the face of the huge incentive to get where we’re going just a little bit faster than the law actually allows. I think if we looked at research practice the same way, that would be a considerable improvement.
On reflection I’m not sure “above average” is a helpful frame.
I think it would be more helpful to say someone being “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).
I don’t endorse the quoted statement, I think it’s just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it’s wrong to shame someone for violating a norm they didn’t explicitly agree to follow. If you call me out for falsifying data, you’re not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you’re simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.
(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don’t see it that way.)
It’s an assumption of a pact among fraudsters (a fraud ring). I’ll cover for your lies if you cover for mine. It’s a kind of peace treaty.
In the context of fraud rings being pervasive, it’s valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while having a precommitment to no punishments for people who have committed fraud. Otherwise, the incentive to continue hiding is a very strong obstacle to the exposure of the truth. Additionally, the consequences of all past fraud being punished heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.
are there any others here who would endorse the quoted statement as written?
I don’t endorse it in that context, because data matters. Otherwise, why not? There are plenty of situations where “bad”/“good” seems like a non-issue, or where invoking it is counterproductive.
But that doesn’t mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn’t actually a very good way to get them to change.
I haven’t said ‘bad person’ unless I’m missing something. I’ve said things like ‘doing net harm in your career’ or ‘making it worse’ or ‘not doing the right thing.’ I’m talking about actions, and when I say ‘right thing’ I mean shorthand for ‘that which moves things in the directions you’d like to see’ rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.
It’s a strange but consistent thing that people’s brains flip into assuming that anyone who thinks some actions are better than other actions is accusing those who don’t take the better actions of being bad people. Or even, as you say, ‘exceptionally bad’ people.
I haven’t said ‘bad person’ unless I’m missing something.
I mean, you haven’t called anyone a bad person, but “It’s Not The Incentives, It’s You” is a pretty damn accusatory thing to say, I’d argue. (Of course, I’m also aware that you weren’t the originator of that phrase—the author of the linked article was—but you at least endorse its use enough to repeat it in your own comments, so I think it’s worth pointing out.)
Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.
On two levels.
Level one is the one where some level of endorsement of something means that I’m making the accusations in it. At some levels of endorsement, that inference happens often in the wild and is clearly reasonable; at other levels, it also happens often in the wild, and is clearly unreasonable.
Level two is that the OP doesn’t make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for ‘science’ or ‘the world’ but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.
That is importantly different from claiming that these are bad people.
Is it possible to say ‘your actions are bad and maybe you should stop’ or even ‘your actions are having these results and maybe you should stop’ without saying ‘you are bad and you should feel bad’?
Is it possible to say ‘your actions are bad and maybe you should stop’ or even ‘your actions are having these results and maybe you should stop’ without saying ‘you are bad and you should feel bad’?
I actually am asking, because I don’t know.
I’ve touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust for it, that people have bought into. Or you need to continuously invest effort in it. And yes, that sucks. It’s hugely inefficient. But I don’t actually see alternatives.
It sucks even more because it’s probably anti-inductive: as some phrases become commonly understood, they later become carrier waves for subtle barbs and political manipulations. (I’m not confident how common this is. I think a more prototypical example is “southern politeness” with “Oh bless your heart”.)
So I don’t think there’s a permanent answer for public discourse. There’s just costly signaling via phrasing things carefully in a way that suggests you’re paying attention to your reader’s mental state (including their mental map of the current landscape of social moves people commonly pull) and writing things that expressly work to build trust given that mental state.
(Duncan’s more recent writing often seems to be making an effort at this. It doesn’t work universally, due to the unfortunate fact that not all one’s readers will be having the same mental state. A disclaimer that reassures one person may alienate another)
It seems… hypothetically possible for LessWrong to someday establish this sort of trust, but I think it actually requires hours and hours of doublecrux for each pair of people with different worldviews, and then that trust isn’t necessarily transitive to the next pair of people with different worldviews. (Worldviews which affect what even seem like reasonable meta-level norms within the paradigm of ‘we’re all here to truthseek’. See tensions in truthseeking for some [possibly out of date] thoughts of mine on that)
Optimizing for anything is costly if you’re not counting the thing itself as a benefit.
Suppose I do count the thing itself (call it X) as a benefit. Given that I’m also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or “call out” someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line in the sand, such as just making up data out of thin air). I think this probably underlies some people’s intuitions that calling people out for this is bad.
Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.
What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the coordination cost that large companies are prepared to pay in order to gain the benefits of economies of scale.) Should we just give up on making use of such economies of scale?
Obviously the ideal outcome would be to invent or spread some better coordination technology that doesn’t produce Moral Mazes, but if it wasn’t very hard to invent/spread, someone probably would have done it already.
If academia has become a moral maze, the same applies, except that the money was never good to begin with.
As someone who explicitly opted out of academia and became an independent researcher due to similar concerns (not about faking data per se, but about generally bad coordination in academia), I obviously endorse this for anyone for whom it’s a feasible option. But I’m not sure it’s actually feasible at scale.
I think these are (at least some of) the right questions to be asking.
The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?
Which I won’t answer here, because it’s a hard question, but my current best guess on question one is: it’s the natural endpoint if you don’t create a culture that explicitly opposes it. Any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better, unless you have a dramatic upheaval, which usually means starting over entirely. Also, the more the other large organizations around you are immoral mazes, the faster and harder such pressures will be, and the more you need to push back to stave them off.
My best guess on question two is: quite a lot. At least right here, right now, any sufficiently large organization, be it a corporation, a government, a club or party, you name it, is going to end up with these dynamics by default. That means we should do our best to avoid working for or with such organizations, for our own sanity and health, and consider it a high cost on the existence of such organizations and on letting them be in charge of things. That doesn’t mean we can give up on major corporations or national governments, since we don’t currently have better options. But I do think there are cases where an organization with large economies of scale would be net positive absent these dynamics, but is net negative with these dynamics, and these dynamics should push us (and do push us!) towards relying less on economies of scale. And that this is worthwhile.
As for whether exit of academia is feasible at scale (in terms of who would do the research without academia), I’m not sure, but it is feasible on the margin for a large percentage of those involved (as opposed to exit from big business, which is at least paying those people literal rent in dollars, at the cost of anticipated experiences). It’s also not clear that academia as it currently exists is feasible at that scale. I’m not close enough to it, to be the one who should make such claims.
“Selling out” has been in the well-known concept space for a long long time—it’s not a particularly recent phenomenon to have to make choices where the moral/prosocial option is not the materially-rewarded one. It probably _IS_ recent that any group or endeavor can be expected to have large impact over much of humanity.
Do we have any examples of groups that both behave well AND get significant things done?
One idea on the subject of government is “eventually it will fail/fall. This has happened a lot throughout history, and it will happen someday to this country. Things may keep getting big/inefficient, but the system keeps chugging along until it dies.”
One alternative to this would be to start a group/country/etc. with an explicit end date, or something similar with regard to some aspect of it. (Reviewing all laws on the books to see if they should stick around would be a big deal, as would implementing laws with end dates, or only passing laws with end dates. Some consider this to have failed in the past, though, as emergency powers demonstrate.)
Nod. I don’t know that I disagree with any of this per se. I’ll respond more on Sunday. Any disagreements I have I think are about how to weight things and how to strategize (with slightly different caveats for individuals, for groups with fences, and for amorphous society)
I think this claim is a hugely important error.
One scientist unilaterally deciding to stop faking data isn’t going to magically make the whole world come around. But the idea that it doesn’t help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn’t make it worse?
I don’t understand how one can think that.
That’s not unique to the example of faking data. That’s true of anything (at least partially) observable that you’d like to change.
One can argue that coordinated action would be more efficient, and I’d agree. One can argue that in context, it’s not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that’s better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don’t pretend it doesn’t matter.
Similarly, I find it odd that one uses the idea that ‘doing the right thing is not free’ as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you’re not counting the thing itself as a benefit.
But the whole point of some things being right is that you do them even though it’s not free, because It’s Not The Incentives, It’s You. You’re making a choice.
Ideally we’d design a system where one not only cultivated the virtue of doing the right thing, and was rewarded for doing that, one would also be rewarded in expectation for doing the right thing as often as possible. Doing the right thing is, in fact, a prime way of moving towards that.
Again, sometimes the cost of doing the otherwise ‘right thing’ gets too high. Especially if you can’t coordinate on it. There are trade-offs. One can’t do every good thing or never compromise.
But if there is one takeaway from Moral Mazes that everyone should have, it’s a really, really simple one:
Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.
If academia has become a moral maze, the same applies, except that the money was never good to begin with.
This reads as enormously uncharitable to Raemon, and I don’t actually know where you’re getting it from. As far as I can tell, not a single person in this conversation has made the claim that it “doesn’t matter”—and for good reason: such a claim would be ridiculous. That you seem willing to accuse someone else in the conversation of making such a claim (or “pretending” it, which is just as bad) doesn’t say good things about the level of conversation.
What has been claimed is that “doing the thing that reinforces good norms” is ineffective, i.e. it doesn’t actually reinforce the good norms. The claim is that without a coordinated effort, changes in behavior on an individual level have almost no effect on the behavior of the field as a whole. If this claim is true (and even if it’s false, it’s not obviously false), then there’s no point hoping to see knock-on effects from such a change—and that in turn means all that’s left is the cost-benefit calculation: is the amount of good that I would do by publishing a paper with non-fabricated data (even if I did, how would people know to pay attention to my paper and not all the other papers out there that totally did use fabricated data?), worth the time/effort/willpower it would take me to do so?
As you say: it is indeed a trade-off. Now, you might argue (perhaps rightly so!) that one individual’s personal time/effort/willpower is nowhere near as important as the effects of their decision whether to fabricate data. That they ought to be willing to expend their own blood, sweat, and tears to Do The Right Thing—at least, if they consider themselves a moral person. And in fact, you made just such an argument in your comment:
But this ignores the fact that every decision has an opportunity cost: if I spend vast amounts of time and effort designing and conducting a rigorous study, pre-registering my plan, controlling for all possible confounders (and then possibly getting a negative result and needing to go back to the drawing board, all while my colleague Joe Schmoe across the hall fabricates his way into Nature), this will naturally make me more tired than I would be otherwise. Perhaps it will cause me to have less patience than I normally do, become more easily frustrated at events outside of my control, be less willing to tolerate inconveniences in other areas of my life, etc. If, for example, I believed eating meat was morally wrong, I might nonetheless find it more difficult to deliberately deprive myself of meat if I were already spending a great deal of willpower every day on seeing this study through. And if I expect that to be the case, then I have to ask myself which thing I ought to prioritize: not eating meat, or doing the study properly?
This is the (somewhat derisively named) “goodness budget” Benquo mentioned upthread. But another name for it might be Moral Slack. It’s the limited amount of room we have to be less than maximally good in our lives, without being socially punished for it. It’s the privilege we’re granted, to not have to constantly ask ourselves “Should I be doing this? Am I being a bad person for doing this?” It’s—look, you wrote half the posts I just linked to. You know the concept. I don’t know why you’re not applying it here, but it seems pretty obvious to me that it applies just as well here as it does in any other aspect of life.
To be clear: you know that falsifying data is a Very Bad Thing. I know that falsifying data is a Very Bad Thing. Raemon knows that falsifying data is a Very Bad Thing. We all know that falsifying data is bad. But if that’s the way the incentives point (and that’s a very important if!), then it’s also bad to call people out for doing it. If you do that, then you’re using moral indignation as a weapon—a way to not only coerce other people into using up their willpower, but to come out of it looking good yourself.
People who manage to resist the incentives—who ignore the various siren calls they constantly hear—are worthy of extremely high praise. They are exceptionally good people—by definition, in fact, because if they weren’t exceptional, everyone else would be doing it, too. By all means, praise those people as much as you want. But that doesn’t mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn’t actually a very good way to get them to change. “It’s Not The Incentives, It’s You” puts the emphasis in the wrong place, and it degrades communication with people who might have been reachable with a more nuanced take.
No. No. Big No. A thousand times no.
(We all agree with that first sentence, everyone here knows these things are bad, that’s just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)
I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I’m happy to know that this is not a straw-man, that this is not going to get the Motte and Bailey treatment.
I’m still worried that such treatment will mostly occur...
There is a position, one that seems to be increasingly held and openly advocated for, that if someone does something according to their local, personal, short-term amoral incentives, then this is, if not automatically praiseworthy (and I believe I have frequently seen that claim too, increasingly explicitly, though not here or by anyone in this discussion), at least immune from blame, no matter the magnitude of that incentive. One cannot ‘call them out’ on such action, even if such calling out has no tangible consequences.
I’m too boggled, and too confused about how one gets there in good faith, to figure out how to usefully argue against such positions in a way that might convince people who sincerely disagree. So instead, I’m simply going to ask: are there any others here who would endorse the quoted statement as written? Are there people who endorse the position in the above paragraph, as written? With or without an explanation as to why. Either, or both. If so, please confirm this.
Here’s another further-afield steelman, inspired by blameless postmortem culture.
When debriefing / investigating a bad outcome, it’s better for participants to expect not to be labeled as “bad people” (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.
More social pressure against admitting publicly that one is contributing poorly contributes to systematic hiding/obfuscation of information about why people are making those choices (e.g. incentives). And we need all that information to be out in the clear (or at least available to investigators who are committed & empowered to solve the systemic issues), if we are going to have any chance of making lasting changes.
In general, I’m curious what Zvi and Ben think about the interaction between “I expect people to yell at me if I say I’m doing this” and promoting/enabling “honest accounting”.
Trying to steelman the quoted section:
If one were to be above average but imperfect (e.g. not falsifying data or p-hacking but still publishing in paid access journals) then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive but if they don’t consider themselves able to afford the penalty of being perfect then they leave and the field suffers.
I’m not sure I endorse the specific example there but in a personal example:
My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.
I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.
If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.
Thank you.
I read your steelman as importantly different from the quoted section.
It uses the weak claim that such action ‘could be bad’ rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.
It changes the standard of behavior from ‘any behavior that responds to local incentives is automatically all right’ to ‘behaviors that are above average and net helpful, but imperfect.’
This is an example of the kind of equivalence/transformation/Motte and Bailey I’ve observed, and am attempting to highlight—not that you’re doing it, you’re not because this is explicitly a steelman, but that I’ve seen. The claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally personally incentivized to do, which in this case is meeting explicit targets and things you would be blamed for, no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).
That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.
I might try and write up a reply of my own (to Zvi’s comment), but right now I’m fairly pressed for time and emotional energy, so until/unless that happens, I’m going to go ahead and endorse this response as closest to the one I would have given.
EDIT: I will note that this bit is (on my view) extremely important:
“Above average” is, of course, a comparative term. If e.g. 95% of my colleagues in a particular field regularly submit papers with bad data, then even if I do the same, I am no worse from a moral perspective than the supermajority of the people I work with. (I’m not claiming that this is actually the case in academia, to be clear.) And if it’s true that I’m only doing what everyone else does, then it makes no sense to call me out, especially if your “call-out” is guilt-based; after all, the kinds of people most likely to respond to guilt trips are likely to be exactly the people who are doing better than average, meaning that the primary targets of your moral attack are precisely the ones who deserve it the least.
(An interesting analogy can be made here regarding speeding—most people drive 10-15 miles over the official speed limit on freeways, at least in the US. Every once in a while, somebody gets pulled over for speeding, while all the other drivers—all of whom are driving at similarly high speeds—get by unscathed. I don’t think it’s particularly controversial to claim that (a) the driver who got pulled over is usually more annoyed at being singled out than they are recalcitrant, and (b) this kind of “intervention” has pretty much zero impact on driving behavior as a whole.)
Is your prediction that if it was common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes, in addition to being over the official speed limit, that average driving speeds would remain essentially unchanged?
Take out the “10mph over” and I think this would be both fairer than the existing system and more effective.
(Maybe some modification to the calculation of the average to account for queues etc.)
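The proposed rule can be made concrete with a small sketch. Everything below is my own illustrative assumption (class and parameter names, the sliding-window averaging), not anything specified in the thread: a car is eligible to be pulled over only if it exceeds both the posted limit and the rolling average of recently observed speeds by some margin; setting `margin=0` gives the variant suggested in the reply above.

```python
from collections import deque

class SpeedEnforcer:
    """Hypothetical enforcement rule: pull a car over only if it exceeds
    BOTH the official limit AND the recent average speed plus a margin."""

    def __init__(self, official_limit, margin=10.0, window_seconds=300):
        self.official_limit = official_limit
        self.margin = margin                  # mph over the rolling average
        self.window_seconds = window_seconds  # five-minute window
        self.observations = deque()           # (timestamp, speed) pairs

    def record(self, timestamp, speed):
        """Log an observed speed and drop observations outside the window."""
        self.observations.append((timestamp, speed))
        cutoff = timestamp - self.window_seconds
        while self.observations and self.observations[0][0] < cutoff:
            self.observations.popleft()

    def average_speed(self):
        if not self.observations:
            return self.official_limit  # fall back to the posted limit
        return sum(s for _, s in self.observations) / len(self.observations)

    def should_pull_over(self, speed):
        return (speed > self.official_limit
                and speed > self.average_speed() + self.margin)
```

Under this sketch, a driver doing 75 in a 65 zone where traffic averages 75 would not be eligible, while a driver doing 86 would be; with `margin=0.0`, anyone above both the limit and the flow of traffic becomes eligible.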
As it happens, the case of speeding also came up in the comments on the OP. Yarkoni writes:
On reflection I’m not sure “above average” is a helpful frame.
I think it would be more helpful to say someone being “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).
I don’t endorse the quoted statement, I think it’s just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it’s wrong to shame someone for violating a norm they didn’t explicitly agree to follow. If you call me out for falsifying data, you’re not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you’re simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.
(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don’t see it that way.)
It’s an assumption of a pact among fraudsters (a fraud ring). I’ll cover for your lies if you cover for mine. It’s a kind of peace treaty.
In the context of fraud rings being pervasive, it’s valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while having a precommitment to no punishments for people who have committed fraud. Otherwise, the incentive to continue hiding is a very strong obstacle to the exposition of truth. Additionally, the consequences of all past fraud being punished heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.
Right… but fraud rings need something to initially nucleate around. (As do honesty rings)
I don’t endorse it in that context, because data matters. Otherwise, why not? There are plenty of situations where “bad”/”good” seems like a non-issue*/counterproductive.
*If not outright beneficial.
I haven’t said ‘bad person’ unless I’m missing something. I’ve said things like ‘doing net harm in your career’ or ‘making it worse’ or ‘not doing the right thing.’ I’m talking about actions, and when I say ‘right thing’ I mean it as shorthand for ‘that which moves things in the directions you’d like to see’ rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.
It’s a strange but consistent thing that people’s brains flip into assuming that anyone who thinks some actions are better than others is accusing those who don’t take the better actions of being bad people. Or even, as you say, ‘exceptionally bad’ people.
I mean, you haven’t called anyone a bad person, but “It’s Not The Incentives, It’s You” is a pretty damn accusatory thing to say, I’d argue. (Of course, I’m also aware that you weren’t the originator of that phrase—the author of the linked article was—but you at least endorse its use enough to repeat it in your own comments, so I think it’s worth pointing out.)
Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.
On two levels.
Level one is the one where some level of endorsement of something means that I’m making the accusations in it. At some levels at which this happens in the wild, that inference is clearly reasonable; at other levels at which it happens, it is clearly unreasonable.
Level two is that the OP doesn’t make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for ‘science’ or ‘the world’ but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.
That is importantly different from claiming that these are bad people.
Is it possible to say ‘your actions are bad and maybe you should stop’ or even ‘your actions are having these results and maybe you should stop’ without saying ‘you are bad and you should feel bad’?
I actually am asking, because I don’t know.
I’ve touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust for it, that people have bought into. Or you need to continuously invest effort in it. And yes, that sucks. It’s hugely inefficient. But I don’t actually see alternatives.
It sucks even more because it’s probably anti-inductive: as some phrases become commonly understood, they later become carrier waves for subtle barbs and political manipulations. (I’m not confident how common this is. I think a more prototypical example is “southern politeness” with “Oh, bless your heart.”)
So I don’t think there’s a permanent answer for public discourse. There’s just costly signaling via phrasing things carefully in a way that suggests you’re paying attention to your reader’s mental state (including their mental map of the current landscape of social moves people commonly pull) and writing things that expressly work to build trust given that mental state.
(Duncan’s more recent writing often seems to be making an effort at this. It doesn’t work universally, due to the unfortunate fact that not all one’s readers will be having the same mental state. A disclaimer that reassures one person may alienate another)
It seems… hypothetically possible for LessWrong to someday establish this sort of trust, but I think it actually requires hours and hours of doublecrux for each pair of people with different worldviews, and then that trust isn’t necessarily transitive to the next pair of people with different worldviews. (Worldviews which affect what even seem like reasonable meta-level norms within the paradigm of ‘we’re all here to truthseek’. See tensions in truthseeking for some [possibly out of date] thoughts of mine on that)
I’ve noted issues with Public Archipelago given current technologies, but it still seems like the best solution to me.
It seems pretty fucked up to take positive proposals at face value given that context.
Suppose I do count the thing itself (call it X) as a benefit. Given that I’m also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or “call out” someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line in the sand, such as just making up data out of thin air). I think this probably underlies some people’s intuitions that calling people out for this is bad.
What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the coordination cost that large companies are prepared to pay in order to gain the benefits of economies of scale.) Should we just give up on making use of such economies of scale?
Obviously the ideal outcome would be to invent or spread some better coordination technology that doesn’t produce Moral Mazes, but if it wasn’t very hard to invent/spread, someone probably would have done it already.
As someone who explicitly opted out of academia and became an independent researcher due to similar concerns (not about faking data per se, but about generally bad coordination in academia), I obviously endorse this for anyone for whom it’s a feasible option. But I’m not sure it’s actually feasible at scale.
I think these are (at least some of) the right questions to be asking.
The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?
Which I won’t answer here, because it’s a hard question, but my current best guess on question one is: It’s the natural endpoint if you don’t create a culture that explicitly opposes it (e.g. any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better unless you have a dramatic upheaval which usually means starting over entirely) and also that the more other large organizations around you are immoral mazes, the faster and harder such pressures will be, and the more you need to push back to stave them off.
My best guess on question two is: Quite a lot. At least right here, right now, any sufficiently large organization, be it a corporation, a government, a club or party, you name it, is going to end up with these dynamics by default. That means we should do our best to avoid working for or with such organizations for our own sanity and health, and consider it a high cost on the existence of such organizations and letting them be in charge of things. That doesn’t mean we can give up on major corporations or national governments without better options that we don’t currently have. But I do think there are cases where an organization with large economies of scale would be net positive absent these dynamics, but is net negative with these dynamics, and these dynamics should push us (and do push us!) towards making less use of economies of scale. And that this is worthwhile.
As for whether exit from academia is feasible at scale (in terms of who would do the research without academia), I’m not sure, but it is feasible on the margin for a large percentage of those involved (as opposed to exit from big business, which is at least paying those people literal rent in dollars, at the cost of anticipated experiences). It’s also not clear that academia as it currently exists is feasible at that scale. I’m not close enough to it to be the one who should make such claims.
This comment feels like it correctly summarizes a lot of my thinking on this topic, and I would feel excited about a top-level post version of it.
Same.
“Selling out” has been in the well-known concept space for a long long time—it’s not a particularly recent phenomenon to have to make choices where the moral/prosocial option is not the materially-rewarded one. It probably _IS_ recent that any group or endeavor can be expected to have large impact over much of humanity.
Do we have any examples of groups that both behave well AND get significant things done?
One idea on the subject of government is “eventually it will fail/fall. This has happened a lot throughout history, and it will happen someday to this country. Things may keep getting big/inefficient, but the system keeps chugging along until it dies.”
One alternative to this would be to start a group/country/etc. with an explicit end date, or with something similar applying to some aspect of it. (Reviewing all laws on the books to see if they should stick around would be a big deal, as would implementing laws with end dates, or only laws with end dates. Some consider this to have failed in the past, though, as emergency powers demonstrate.)
Nod. I don’t know that I disagree with any of this per se. I’ll respond more on Sunday. Any disagreements I have I think are about how to weight things and how to strategize (with slightly different caveats for individuals, for groups with fences, and for amorphous society)