I think a satisfactory discussion of the memetic collapse claim would probably have to be a lot longer, and a lot of it would just be talking about more data points and considering different interpretations of them.
I think the criticism “isolated data points can cause people to over-update when they’re presented in vivid, concrete terms” makes sense, and this is a big part of why it’s pragmatically valuable to push back against “one hot day ergo climate change”, because even though it’s nonzero Bayesian evidence for climate change, the strength of evidence is way weaker than the emotional persuasiveness. I don’t have a strong view on whether Eliezer should add some more caveats in cases like this to ensure people are aware that he hasn’t demonstrated the memetic collapse thesis here, vs. expecting his readers to appropriately discount vivid anecdotes as a matter of course. I can see the appeal of both options.
I think the particular way you phrased your objection, in terms of “is this a locally valid inference?” rather than “is this likely to be emotionally appealing in a way that causes people to over-update?”, is wrong, though, and I think reflects an insufficiently bright line between personal-epistemics norms like “make good inferences” and social norms like “show your work”. I think you’re making overly strong symmetry claims here in ways that make for a cleaner narrative, and not seriously distinguishing “here’s a data point I’ll treat as strong supporting evidence for a claim where we should expect there to be a much stronger easy-to-communicate/compress argument if the claim is true” and “here’s a data point I’ll use to illustrate a claim where we shouldn’t expect there to be an easy-to-communicate/compress argument if the claim is true”. But it shouldn’t be necessary to push for symmetry here in any case; mistake seriousness is orthogonal to mistake irony.
I remain unconvinced by the arguments I’ve seen for the memetic collapse claim, and I’ve given some counterarguments to collapse claims in the past, but “I think you’re plausibly wrong” and “I haven’t seen enough evidence to find your view convincing” are pretty different from “I think you don’t have lots of unshared evidence for your belief” or “I think you’re making an easily demonstrated inference mistake”. I don’t think the latter two things are true, and I think it would take a lot of time and effort to actually resolve the disagreement.
(Also, I don’t mean to be glib or dismissive here about your ingroup bias worries; this was something I was already thinking about while I was composing my earlier comments, because there are lots of risk factors for motivated reasoning in this kind of discussion. I just want to be clear about what my beliefs and thinking are, factoring in bias risks as a big input.)
The problem is that we don’t have the Law for bad arguments written down well enough. You have your ideas of what is bad, I have mine, but I think pointing out irony is still the best, most universally acceptable thing I can do, especially since we’re in the comments of a post called “Local Validity as a Key to Sanity and Civilization”.
“isolated data points can cause people to over-update when they’re presented in vivid, concrete terms”
This is a valid concern, but it is not a valid law. Imagine someone telling you “your evidence is valid, but you presented it in overly vivid, concrete terms, so I downvoted you”. It would be frustrating. Who decides what is and isn’t too emotionally persuasive? Who even measures or compares persuasiveness? That sort of rule is unenforceable.
I think you’re making overly strong symmetry claims here in ways that make for a cleaner narrative.
Oh absolutely, there are many differences between the two claims, though my comparison is less charitable to EY than yours. Let H be “global warming” and let E be “a random day was hot”. Then P(E|H) > P(E|not H) is a mathematically true fact, and therefore E is valid, even if weak, evidence for H. Now, let H be “memetic collapse” and let E be “modern fiction has fewer Law abiding characters”. Does P(E|H) > P(E|not H) hold? I don’t know; if I had to guess, I’d say yes, but it’s very dubious. I.e. I can’t say for certain that EY’s evidence is even technically valid.
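(To spell out the step from that inequality to “valid evidence”: by the odds form of Bayes’ rule, P(H|E)/P(not H|E) = [P(E|H)/P(E|not H)] · [P(H)/P(not H)], so whenever P(E|H) > P(E|not H), observing E pushes the odds on H up, however slightly. And if the inequality itself is in doubt, so is the evidential status of E.)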
a claim where we shouldn’t expect there to be an easy-to-communicate/compress argument if the claim is true
This often happens. However, the correct response is not to take the single data point provided more charitably. The correct response is to accept that this claim will never have high certainty. If a perfect Bayesian knew nothing at all, and you told it that “yesterday was hot” and that “modern fiction has fewer Law abiding characters”, then this Bayesian would update P(“global warming”) and P(“memetic collapse”) by about the same amount. It’s true that there exist strong arguments for global warming, and that there might not exist strong arguments for memetic collapse, but these facts are not reflected in the Bayesian mathematics. Intuitively this suggests to me that this difference you described is not something we want to look at.
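Here is a toy sketch of what I mean, with made-up likelihood ratios (the numbers are mine and purely illustrative): the update from a single weak observation depends only on that observation’s likelihood ratio, so two hypotheses with the same prior and the same (weak) likelihood ratio move by the same amount, whether or not stronger uncited arguments exist elsewhere.

```python
# Toy sketch with made-up numbers: a Bayesian that knows nothing else
# updates on a single observation via its likelihood ratio alone.
def update(prior, likelihood_ratio):
    """Posterior probability after one observation, via the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# "Yesterday was hot" vs. "modern fiction has fewer Law-abiding characters":
# if both observations carry the same weak likelihood ratio, the updates match.
print(update(prior=0.5, likelihood_ratio=1.1))  # global warming   -> ~0.524
print(update(prior=0.5, likelihood_ratio=1.1))  # memetic collapse -> ~0.524
```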
“I think you don’t have lots of unshared evidence for your belief”
This is a simple claim that I make. EY seems to be quite certain of memetic collapse, or at least that’s the impression I get from the text. If EY is more certain than me, then, charitably, that’s because he has more evidence than me. (The uncharitable explanation would be that he’s a crackpot.) Now, I don’t really know whether he has described this evidence somewhere; if he has, I’d love a link.
However the correct response is not to take the single data point provided more charitably.
You’re conflating two senses of “take a single data point charitably”: (a) “treat the data point as relatively strong evidence for a hypothesis”, and (b) “treat the author as having a relatively benign reason to cite the data point even though it’s weak”. The first is obviously bad (since we’re assuming the data is weak evidence), but you aren’t claiming I did the first thing. The second is more like what I actually said, but it’s not problematic (assuming I have a good estimate of the citer’s epistemics).
“Charity” framings are also confusingly imprecise in their own right, since like “steelmanning,” they naturally encourage people to equivocate between “I’m trying to get a more accurate read on you by adopting a more positive interpretation” and “I’m trying to be nice/polite to you by adopting a more positive interpretation”.
The correct response is to accept that this claim will never have high certainty.
A simple counterexample is “I assign 40:1 odds that my friend Bob has personality trait [blah],” where a lifetime of interactions with Bob can let you accumulate that much confidence without it being easy for you to compress the evidence into an elevator pitch that will push strangers to similar levels of confidence. (Unless the stranger simply defers to your judgment, which is different from them having access to your evidence.)
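To make the arithmetic concrete (a rough sketch with invented numbers): many individually weak, roughly independent observations compound multiplicatively in odds form, so something like 40:1 is reachable even though no single observation makes an elevator pitch.

```python
# Illustrative sketch with made-up numbers: weak, roughly independent
# observations multiply together in odds form (their log-odds add).
prior_odds = 1.0        # 1:1 before knowing Bob at all
likelihood_ratio = 1.5  # each small interaction favors the trait 3:2
observations = 10       # a tiny fraction of a lifetime of interactions

posterior_odds = prior_odds * likelihood_ratio ** observations
print(posterior_odds)   # ~57.7, i.e. past 40:1, with no single compressible argument
```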
(a) “treat the data point as relatively strong evidence for a hypothesis”, <...>. The first is obviously bad (since we’re assuming the data is weak evidence), but you aren’t claiming I did the first thing.
Honestly, I’m not sure what you did. You said I should distinguish claims that can have short arguments and claims that can’t. I assumed that by “distinguish”, you meant we should update on the two claims differently, which sounds like (a). What did “distinguish” really mean?
(b) “treat the author as having a relatively benign reason to cite the data point even though it’s weak”
I wasn’t considering malicious/motivated authors at all. In my mind, the person arguing “one hot day ergo climate change” either doesn’t know about long-term measurements or doesn’t trust them for whatever reason. Sure, a malicious author would prefer using weak evidence when strong evidence exists, but they would also prefer topics where strong evidence doesn’t exist, so ultimately I don’t know in what way I should distinguish the two claims in relation to (b).
A simple counterexample is “I assign 40:1 odds that my friend Bob has personality trait [blah],” where a lifetime of interactions with Bob can let you accumulate that much confidence
The problem with many small pieces of evidence is that they are often correlated, and it’s easy not to account for that. The problem with humans is that they are very complicated, so you really shouldn’t have very high confidence that you know what’s going on in their heads. But I don’t think I would be able to show you that your confidence is too high. Of course, it is technically possible to reach high confidence from a large quantity of weak evidence; I only meant my claim as a rule of thumb. By the way, 40:1 could represent high or low confidence, depending on the prior probability of the trait.
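A rough numerical illustration of both worries (the numbers are invented, just to show the shape of the problem):

```python
# Invented numbers, purely illustrative.

# (1) Correlated evidence: if ten "independent-looking" observations really
#     trace back to roughly three genuinely independent impressions, treating
#     them all as independent overstates the final odds.
naive_odds = 1.5 ** 10        # ~57.7:1 when wrongly treated as independent
deduplicated_odds = 1.5 ** 3  # ~3.4:1 from the genuinely independent part
print(naive_odds, deduplicated_odds)

# (2) Whether 40:1 posterior odds reflect a big update depends on the prior.
posterior_odds = 40
for prior_odds in (1 / 99, 1, 9):       # trait base rates of ~1%, 50%, 90%
    print(posterior_odds / prior_odds)  # combined likelihood ratio implied: 3960, 40, ~4.4
```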