We all know that falsifying data is bad. But if that’s the way the incentives point (and that’s a very important if!), then it’s also bad to call people out for doing it.
No. No. Big No. A thousand times no.

(We all agree with that first sentence; everyone here knows these things are bad, and it’s quoted only for context. Note also that everyone agrees those incentives are bad, and that efficient action to change them would be a good idea.)
I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I’m happy to know that this is not a straw man, and that this is not going to get the motte-and-bailey treatment.
I’m still worried that such treatment will mostly occur...
There is a position, seemingly held and openly advocated for by more and more people, that if someone does something according to their local, personal, short-term amoral incentives, then this is, if not automatically praiseworthy (a claim I believe I have also seen with increasing frequency and explicitness, though not here or from anyone in this discussion), at least immune from blame, no matter the magnitude of that incentive. One cannot ‘call them out’ on such action, even if the calling out has no tangible consequences.
I’m too boggled, and too confused about how one gets there in good faith, to figure out how to usefully argue against such positions in a way that might convince people who sincerely disagree. So instead, I’m simply going to ask: are there any others here who would endorse the quoted statement as written? Are there people who endorse the position in the above paragraph, as written? With or without an explanation as to why; either, or both. If so, please confirm this.
Here’s another further-afield steelman, inspired by blameless postmortem culture.

When debriefing / investigating a bad outcome, it’s better for participants to expect not to be labeled as “bad people” (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.
More social pressure against publicly admitting that one is contributing poorly leads to systematic hiding and obfuscation of information about why people are making those choices (e.g. the incentives at play). And we need all of that information out in the clear (or at least available to investigators who are committed and empowered to solve the systemic issues) if we are going to have any chance of making lasting changes.
In general, I’m curious what Zvi and Ben think about the interaction between “I expect people to yell at me if I say I’m doing this” and promoting/enabling “honest accounting”.
Trying to steelman the quoted section:

If one were to be above average but imperfect (e.g. not falsifying data or p-hacking, but still publishing in paid-access journals), then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive, but if they don’t consider themselves able to afford the penalty of being perfect, then they leave and the field suffers.
I’m not sure I endorse the specific example there, but here is a personal one:
My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.
I do spend more time on these targets than would be optimal, but I think I do this considerably less than is typical. I still overfocus on targets, as I’ve been told to do in appraisals.
If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.
Thank you.

I read your steelman as importantly different from the quoted section.
It uses the weak claim that such action ‘could be bad’ rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.
It changes the standard of behavior from ‘any behavior that responds to local incentives is automatically all right’ to ‘behaviors that are above average and net helpful, but imperfect.’
This is an example of the kind of equivalence/transformation/motte-and-bailey I’ve observed and am attempting to highlight. Not that you’re doing it; you’re not, because this is explicitly a steelman. But compare the claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally, personally incentivized to do, which in this case is meeting explicit targets and avoiding things you would be blamed for, no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).
That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.
I might try to write up a reply of my own (to Zvi’s comment), but right now I’m fairly pressed for time and emotional energy, so until/unless that happens, I’m going to go ahead and endorse this response as closest to the one I would have given.
EDIT: I will note that this bit is (in my view) extremely important:
If one were to be above average but imperfect (emphasis mine)
“Above average” is, of course, a comparative term. If e.g. 95% of my colleagues in a particular field regularly submit papers with bad data, then even if I do the same, I am no worse from a moral perspective than the supermajority of the people I work with. (I’m not claiming that this is actually the case in academia, to be clear.) And if it’s true that I’m only doing what everyone else does, then it makes no sense to call me out, especially if your “call-out” is guilt-based; after all, the kinds of people most likely to respond to guilt trips are precisely the people who are already doing better than average, meaning that the primary targets of your moral attack are the ones who deserve it least.
(An interesting analogy can be made here with speeding: most people drive 10-15 mph over the official speed limit on freeways, at least in the US. Every once in a while, somebody gets pulled over for speeding, while all the other drivers, all of whom are going similarly fast, get by unscathed. I don’t think it’s particularly controversial to claim that (a) the driver who got pulled over is usually more annoyed at being singled out than they are repentant, and (b) this kind of “intervention” has pretty much zero impact on driving behavior as a whole.)
Is your prediction that, if it were common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes (in addition to being over the official speed limit), average driving speeds would remain essentially unchanged?
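To make that rule concrete, here is a minimal sketch of the proposed enforcement condition. This is purely illustrative: the 10 mph margin, the five-minute window, and the posted limit come from the question above, while the class name, data structures, and fallback behavior are my own assumptions, not anything specified in the thread.

```python
from collections import deque

class RelativeSpeedRule:
    """Sketch of the proposed rule: a car is eligible to be pulled over
    only if it is BOTH over the posted limit AND at least margin_mph
    faster than the average speed observed on this highway, in this
    direction, over the past window_seconds."""

    def __init__(self, posted_limit_mph, margin_mph=10.0, window_seconds=300.0):
        self.posted_limit = posted_limit_mph
        self.margin = margin_mph
        self.window = window_seconds
        self.observations = deque()  # (timestamp_seconds, speed_mph) pairs

    def observe(self, timestamp, speed_mph):
        """Record one passing car's speed; evict readings older than the window."""
        self.observations.append((timestamp, speed_mph))
        while self.observations and timestamp - self.observations[0][0] > self.window:
            self.observations.popleft()

    def rolling_average(self):
        # Assumption: with no recent data, fall back to the posted limit.
        # A real version might also exclude stopped/queued traffic here,
        # per the "account for queues" caveat in the reply below.
        if not self.observations:
            return self.posted_limit
        return sum(s for _, s in self.observations) / len(self.observations)

    def eligible_for_stop(self, speed_mph):
        return (speed_mph > self.posted_limit
                and speed_mph >= self.rolling_average() + self.margin)
```

For example, with a 65 mph limit and recent observed speeds averaging 74.5 mph, a car doing 78 is over the limit but not eligible (78 < 74.5 + 10), while a car doing 86 is. The enforcement threshold floats with whatever speed prevails, which is what the question is probing.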
Take out the “10 mph over” and I think this would be both fairer than the existing system and more effective. (Maybe with some modification to the calculation of the average, to account for queues etc.)

As it happens, the case of speeding also came up in the comments on the OP. Yarkoni writes:
[...] I think the point I’m making actually works well for speeding too: when you get pulled over by a police officer for going 10 over the limit, nobody is going to take you seriously if your objection to the ticket is “but I’m incentivized to go 10 over, because I can get home a little faster, and hardly anyone ever gets pulled over at that speed!” The way we all think about speeding tickets is that, sure, there may be reasons we choose to break the law, but it’s still our informed decision to do so. We don’t try to shirk the responsibility for speeding by pretending that we’re helpless in the face of the huge incentive to get where we’re going just a little bit faster than the law actually allows. I think if we looked at research practice the same way, that would be a considerable improvement.
On reflection I’m not sure “above average” is a helpful frame.
I think it would be more helpful to say that someone who is “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may still sometimes be a valid target, depending on other considerations (such as moving an equilibrium).
I don’t endorse the quoted statement, I think it’s just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it’s wrong to shame someone for violating a norm they didn’t explicitly agree to follow. If you call me out for falsifying data, you’re not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you’re simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.
(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don’t see it that way.)
It’s an assumption of a pact among fraudsters (a fraud ring). I’ll cover for your lies if you cover for mine. It’s a kind of peace treaty.
In the context of fraud rings being pervasive, it’s valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while precommitting to no punishments for those who committed it. Otherwise, the incentive to keep hiding is a very strong obstacle to the truth coming out. Additionally, the consequences of punishing all past fraud heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.
Right… but fraud rings need something to initially nucleate around. (As do honesty rings.)

are there any others here who would endorse the quoted statement as written?
I don’t endorse it in that context, because data matters. Otherwise, why not? There are plenty of situations where “bad”/“good” seems like a non-issue*, or counterproductive.

*If not outright beneficial.