This is a really good point and a great distinction to make.
As an example, suppose I hear a claim that some terrorist group likes to eat babies. Such a claim may very well be true. On the other hand, it’s the sort of claim which I would expect to hear even in cases where it isn’t true. In general, I expect to hear claims of the form “<enemy> is/wants/does <evil thing>”, regardless of whether those claims have any basis.
Now, clearly looking into the claim is an all-around solid solution, but it’s also an expensive solution—it takes time and effort. So, a reasonable question to ask is: should the burden of proof be on writer or critic? One could imagine a community norm where that sort of statement needs to come with a citation, or a community norm where it’s the commenters’ job to prove it wrong. I don’t think either of those standards are a good idea, because both of them require the expensive work to be done. There’s a correct Bayesian update whether or not the work of finding a citation is done, and community norms should work reasonably well whether or not the work is done.
A norm which makes more sense to me: there’s nothing wrong with writers occasionally dropping conflict-theory-esque claims. But readers should be suspicious of such claims a-priori, and just as it’s reasonable for authors to make the claim without citation, it’s reasonable for readers to question the claim on a-priori grounds. It makes sense to say “I haven’t specifically looked into whether <enemy> wants <evil thing>, but that sounds suspicious a-priori.”
This feels very similar to the debate on the MTG color system a while ago, which went roughly like this (half-remembered from long enough ago that I don’t recall exactly when, and the original has since been deleted):
A: [proposal of personality sorting system.]
B: [statement/argument that personality sorting systems are typically useless-to-harmful]
A: but this doesn’t respond to my particular personality system.
I’m sympathetic to B (equivalent to johnswentworth) here. If members of category X are generally useless-to-harmful, it’s unfair and anti-truth to disallow incorporating that knowledge into your evaluations of an X. On the other hand, A could have provided rich evidence of why their particular system was good, and B could have made the exact same statement, and it would still be true. If there are ever exceptions to the rule of “category X is useless-to-harmful”, you need to have a system for identifying them.
[I’m going to keep talking about this in the MTG case because I think a specific case is easier to read than “category X”, and it’s less loaded for me than talking about my own piece. If the correspondences aren’t obvious, let me know and I can clarify.]
A partial solution would be for B to outline not only that they’re skeptical of personality systems, but why, and what specific things would increase their estimation of a particular one. This is a lot to ask, which imposes a tax on this particular form of criticism. But if the problem is as described, there’s a lot of utility in writing it up once, well, and linking to it as necessary.
@johnswentworth, if you’re up for it I think for this and other reasons there’s a lot of value in doing a full post on your general principle (with a link to this discussion). People clearly want to talk about it, and it seems valuable for it to have its own, easily-discoverable, space instead of being hidden behind my post. I would also like to resolve the general principle before discussing how to apply it to this post, which is one reason I’ve held back on participating in this sub-thread.
I probably won’t get to that soon, but I’ll put it on the list.
I also want to say that I’m sorry for kicking off this giant tangential thread on your post. I know this sort of thing can be a disincentive to write in the future, so I want to explicitly say that you’re a good writer, this was a piece worth reading, and I would like to read more of your posts in the future.
Who, specifically, is the enemy here, and what, specifically, is the evil thing they want?
It seems to me as though you’re describing motives as evil which I’d consider pretty relatable, so as far as I can tell, you’re calling me an enemy with evil motives. Are people like me (and Elizabeth’s cousin, and Elizabeth herself, both of whom are featured in examples) a special exception whom it’s nonsuspect to call evil, or is there some other reason why this is less suspect than the OP?
By “enemy” I meant the hypothetical terrorist in the “some terrorist group likes to eat babies” example.
I’m very confused about what you’re perceiving here, so I think some very severe miscommunication has occurred. Did you accidentally respond to a different comment than you thought?
How is that relevant to the OP?