The problem is that we don’t have the Law for bad arguments written down well enough. You have your ideas of what is bad, I have mine, but I think pointing out irony is still the best, most universally acceptable thing I can do. Especially since we’re in the comments of a post called “Local Validity as a Key to Sanity and Civilization”.
“isolated data points can cause people to over-update when they’re presented in vivid, concrete terms”
This is a valid concern, but it is not a valid law. Imagine someone telling you “your evidence is valid, but you presented it in overly vivid, concrete terms, so I downvoted you”. It would be frustrating. Who decides what is and isn’t too emotionally persuasive? Who even measures or compares persuasiveness? That sort of rule is unenforceable.
I think you’re making overly strong symmetry claims here in ways that make for a cleaner narrative.
Oh absolutely, there are many differences between the two claims, though my comparison is less charitable to EY than yours. Let H be “global warming” and let E be “a random day was hot”. Then P(E|H) > P(E|not H) is a mathematically true fact, and therefore E is valid, even if weak, evidence for H. Now, let H be “memetic collapse” and let E be “modern fiction has fewer Law-abiding characters”. Does P(E|H) > P(E|not H) hold? I don’t know; if I had to guess, I’d say yes, but it’s very dubious. That is, I can’t say for certain that EY’s evidence is even technically valid.
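To spell out the step from the likelihood inequality to “valid evidence”: it’s just the odds form of Bayes’ theorem, with H and E as above.

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
```

Whenever the likelihood ratio on the right exceeds 1, the posterior odds on H are larger than the prior odds, however slightly; that is all “valid but weak” requires.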
a claim where we shouldn’t expect there to be an easy-to-communicate/compress argument if the claim is true
This often happens. However, the correct response is not to take the single data point provided more charitably. The correct response is to accept that this claim will never have high certainty. If a perfect Bayesian knew nothing at all, and you told it that “yesterday was hot” and that “modern fiction has fewer Law-abiding characters”, then this Bayesian would update P(“global warming”) and P(“memetic collapse”) by about the same amount. It’s true that there exist strong arguments for global warming, and that there might not exist strong arguments for memetic collapse; however, these facts are not reflected in the Bayesian mathematics. Intuitively, this suggests to me that this difference you described is not something we want to look at.
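Here is a minimal sketch of that symmetry, with invented numbers (the 0.5 prior and the 1.1 Bayes factor are assumptions of mine, not anything from the post):

```python
# A minimal sketch of the "same likelihood ratio, same update" point.
# The prior and the Bayes factor below are invented purely for illustration.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability via the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.5      # the Bayesian that "knew nothing at all"
weak_lr = 1.1    # assumed Bayes factor of one weak data point

print(update(prior, weak_lr))  # "yesterday was hot"                 -> ~0.524
print(update(prior, weak_lr))  # "fiction has fewer Law-abiders"     -> ~0.524

# Strong evidence for global warming that is never actually presented changes
# neither number; it only matters once the Bayesian conditions on it.
```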
“I think you don’t have lots of unshared evidence for your belief”
This is a simple claim that I make. EY seems to be quite certain of memetic collapse; at least, that’s the impression I get from the text. If EY is more certain than me, then, charitably, that’s because he has more evidence than me. Note that the uncharitable explanation would be that he’s a crackpot. Now, I don’t really know if he has described this evidence somewhere; if he has, I’d love a link.
However, the correct response is not to take the single data point provided more charitably.
You’re conflating two senses of “take a single data point charitably”: (a) “treat the data point as relatively strong evidence for a hypothesis”, and (b) “treat the author as having a relatively benign reason to cite the data point even though it’s weak”. The first is obviously bad (since we’re assuming the data is weak evidence), but you aren’t claiming I did the first thing. The second is more like what I actually said, but it’s not problematic (assuming I have a good estimate of the citer’s epistemics).
“Charity” framings are also confusingly imprecise in their own right, since like “steelmanning,” they naturally encourage people to equivocate between “I’m trying to get a more accurate read on you by adopting a more positive interpretation” and “I’m trying to be nice/polite to you by adopting a more positive interpretation”.
The correct response is to accept that this claim will never have high certainty.
A simple counterexample is “I assign 40:1 odds that my friend Bob has personality trait [blah],” where a lifetime of interactions with Bob can let you accumulate that much confidence without it being easy for you to compress the evidence into an elevator pitch that will push strangers to similar levels of confidence. (Unless the stranger simply defers to your judgment, which is different from them having access to your evidence.)
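As a rough sketch of how that accumulation works (the per-observation Bayes factor and the count are invented, and the calculation assumes the observations are independent):

```python
import math

# Sketch: many individually weak, independent observations about Bob.
# Every number here is invented; only the "log-odds add up" point matters.

prior_odds = 1.0       # 1:1 prior on Bob having the trait
bayes_factor = 1.05    # each interaction is, on its own, nearly worthless evidence
n_observations = 80    # a lifetime of interactions

log_odds = math.log(prior_odds) + n_observations * math.log(bayes_factor)
print(f"posterior odds ~ {math.exp(log_odds):.0f}:1")  # ~50:1

# No single observation survives compression into an elevator pitch, yet jointly
# they justify roughly 40:1-level confidence -- *if* they really are independent.
```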
(a) “treat the data point as relatively strong evidence for a hypothesis”, <...>. The first is obviously bad (since we’re assuming the data is weak evidence), but you aren’t claiming I did the first thing.
Honestly, I’m not sure what you did. You said I should distinguish claims that can have short arguments from claims that can’t. I assumed that by “distinguish”, you meant we should update on the two claims differently, which sounds like (a). What did “distinguish” really mean?
(b) “treat the author as having a relatively benign reason to cite the data point even though it’s weak”
I wasn’t considering malicious/motivated authors at all. In my mind, the climate supporter either doesn’t know about long-term measurements or doesn’t trust them for whatever reason. Sure, a malicious author would prefer using weak evidence when strong evidence exists, but they would also prefer topics where strong evidence doesn’t exist, so ultimately I don’t know in what way I should distinguish the two claims in relation to (b).
A simple counterexample is “I assign 40:1 odds that my friend Bob has personality trait [blah],” where a lifetime of interactions with Bob can let you accumulate that much confidence
The problem with many small pieces of evidence is that they are often correlated, and it’s easy not to account for that. The problem with humans is that they are very complicated, so you really shouldn’t have very high confidence that you know what’s going on in their heads. But I don’t think I would be able to show you that your confidence is too high. Of course, it is technically possible to reach high confidence with a large quantity of weak evidence; I only said it as a rule of thumb. By the way, whether 40:1 represents a lot of evidence or a little depends on the prior probability of the trait.
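To illustrate the correlation worry concretely (again with invented numbers): if those many observations really trace back to only a few independent facts about Bob, multiplying all the Bayes factors together badly overstates the posterior.

```python
# Sketch of the over-counting problem with correlated evidence (numbers invented).
# Suppose 80 observations really reflect only 8 independent facts, each seen ~10 times.

bayes_factor = 1.05

naive = bayes_factor ** 80         # treating all 80 as independent
deduplicated = bayes_factor ** 8   # counting each underlying fact once

print(f"naive posterior odds:        {naive:.0f}:1")          # ~50:1
print(f"deduplicated posterior odds: {deduplicated:.2f}:1")   # ~1.48:1

# The honest answer lies somewhere in between, and locating it requires modelling
# the dependence structure -- which is exactly the hard part.
```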