EY read more than ‘a couple of fictional stories’. But I think his pointing toward the general degradation of discourse on the Internet is reasonable. Certainly some segments of Tumblr would seem to be a new low, acting as a harbinger of the end times. :P
The problem with this sort of hypothesis is that it’s very hard to prove rigorously. And the reason that’s a problem is that sometimes hypotheses that are hard to prove rigorously happen to be true anyway. The territory does not relent for a bit because you haven’t figured out how to prove your point. People still get lead poisoning even if the levers of authority insist your argument for toxicity is groundless. That’s a large part of why I think of measurement as the queen of science. If you can observe things but aren’t entirely sure what to make of the observations, that makes it hard to really do rigorous science with them.
The person who says that it was hot yesterday also remembers more than one hot day, but that doesn’t make their argument much stronger. In fact, even if EY had read all the fiction books published in the last 100 years and counted all the Law-abiding characters in them by year, that still wouldn’t be a strong argument.
the general degradation of discourse on the Internet is reasonable.
He didn’t say anything about the internet. I’m pretty sure he’s talking about general public discourse. The internet is very new, and mainstream discourse on it is even newer, so drawing trends from it is a bit fishy. And it’s not clear that those trends would imply anything at all about general public discourse.
The problem with this sort of hypothesis is that it’s very hard to prove rigorously. And the reason that’s a problem is that sometimes hypotheses that are hard to prove rigorously happen to be true anyway.
I feel like you’re doing something that EY’s post is arguing against.

Care to specify how that is the case?
I’m suggesting that he (Hypothesis) is making an argument that’s almost reasonable, but that he probably wouldn’t accept if the same argument was used to defend a statement he didn’t agree with (or if the statement was made by someone of lower status than EY).
It might be true that EY’s claim is very hard to prove with any rigor, but that is not a reason to accept it. The text of EY’s post suggests that he is quite confident in his belief, but if he has no strong arguments (and especially if no strong arguments can exist), then his confidence is itself an error.
Of course, I don’t know what Hypothesis is thinking, but I think we can all agree that “sometimes hypotheses that are hard to prove rigorously happen to be true anyway” is a complete cop-out. Because sometimes hard-to-prove hypotheses also happen to be false.
I’m suggesting that he (Hypothesis) is making an argument that’s almost reasonable, but that he probably wouldn’t accept if the same argument was used to defend a statement he didn’t agree with (or if the statement was made by someone of lower status than EY).
This kind of claim is plausible on priors, but I don’t think you’ve provided Bayesian evidence in this case that actually discriminates pathological ingroup deference from healthy garden-variety deference. “You’re putting more stock in a claim because you agree with other things the claimant has said” isn’t in itself doing epistemics wrong.
In a community where we try to assign status/esteem/respect based on epistemics, there’s always some risk that it will be hard to notice evidence of ingroup bias because we’ll so often be able to say “I’m not biased; I’m just correctly using evidence about track records to determine whose views to put more weight on”. I could see an argument for having more of a presumption of bias in order to correct for the fact that our culture makes it hard to spot particular instances of bias when they do occur. On the other hand, being too trigger-happy to yell “bias!” without concrete evidence can cause a lot of pointless arguments, and it’s easy to end up miscalibrated in the end; the goal is to end up with accurate beliefs about the particular error rate of different epistemic processes, rather than to play Bias Bingo for its own sake.
So on the whole I still think it’s best to focus discussion on evidence that actually helps us discriminate the level of bias, even if it takes some extra work to find that evidence. At least, I endorse that for public conversations targeting specific individuals; making new top-level posts about the problem that speak in generalities doesn’t run into the same issues, and I think private messaging also has less of the pointless-arguments problem.
It might be true that EY’s claim is very hard to prove with any rigor, but that is not a reason to accept it.
Obviously not; but “if someone had a justified true belief in this claim, it would probably be hard to transmit the justification in a blog-post-sized argument” does block the inferences “no one’s written a convincing short argument for this claim, therefore it’s false” and “no one’s written a convincing short argument for this claim, therefore no one has justified belief in it”. That’s what I was saying earlier, not “it must be true because it hasn’t been proven”.
The text of EY’s post suggests that he is quite confident in his belief, but if he has no strong arguments (and especially if no strong arguments can exist), then his confidence is itself an error.
You’re conflating “the evidence is hard to transmit” with “no evidence exists”. The latter justifies the inference to “therefore confidence is unreasonable”, but the former doesn’t, and the former is what we’ve been talking about.
I think we can all agree that “sometimes hypotheses that are hard to prove rigorously happen to be true anyway” is a complete cop-out. Because sometimes hard-to-prove hypotheses also happen to be false.
It’s not a cop-out to say “evidence for this kind of claim can take a while to transmit” in response to “since you haven’t transmitted strong evidence, doesn’t that mean that your confidence is ipso facto unwarranted?”. It would be an error to say “evidence for this kind of claim can take a while to transmit, therefore the claim is true”, but no one’s said that.
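To make that point concrete, here is a minimal numeric sketch (the probabilities are invented for illustration, not anyone’s actual estimates). If a claim’s evidence is hard to transmit, then “no convincing short argument exists” is roughly as likely whether the claim is true or false, so observing it barely moves the posterior:

```python
# Minimal sketch, with made-up numbers, of why "no one has written a
# convincing short argument" is weak evidence when the evidence for a
# claim is hard to transmit either way.

def posterior(prior, p_e_if_true, p_e_if_false):
    """Bayes' rule on odds: P(H | E) from P(H), P(E | H), P(E | not-H)."""
    odds = (prior / (1 - prior)) * (p_e_if_true / p_e_if_false)
    return odds / (1 + odds)

# E = "no convincing short argument exists".
# Hard-to-transmit claim: E is likely either way, so the update is tiny.
print(posterior(0.30, 0.90, 0.98))  # ~0.28, barely below the 0.30 prior

# Easy-to-transmit claim: the absence of a short argument would be telling.
print(posterior(0.30, 0.20, 0.98))  # ~0.08, a real update toward "false"
```

This is the sense in which the inference is “blocked”: the likelihood ratio is close to 1. It does not make the claim true; it just means the silence is weak evidence either way.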
In a community where we try to assign status/esteem/respect based on epistemics, there’s always some risk that it will be hard to notice evidence of ingroup bias because we’ll so often be able to say “I’m not biased; I’m just correctly using evidence about track records to determine whose views to put more weight on”. I could see an argument for having more of a presumption of bias in order to correct for the fact that our culture makes it hard to spot particular instances of bias when they do occur. On the other hand, being too trigger-happy to yell “bias!” without concrete evidence can cause a lot of pointless arguments, and it’s easy to end up miscalibrated in the end.
I’d also want to explicitly warn against confusing epistemic motivations with ‘I want to make this social heuristic cheater-resistant’ motivations, since I think this is a common problem. Highly general arguments against the existence of hard-to-transmit evidence (or conflation of ‘has the claimant transmitted their evidence?’ with ‘is the claimant’s view reasonable?’) raise a lot of alarm bells for me in line with Status Regulation and Anxious Underconfidence and Hero Licensing.
Would it surprise you to know that I have issues with those posts as well?

On one hand, I’d much rather talk about how valid “memetic collapse” is than about how valid someone’s response to “memetic collapse” is. On the other hand, I really do believe that the response to this post is a lot less negative than it should be. Then again, these are largely the same question: why is my reaction to this post seemingly so different from other users’? “Bias” isn’t necessarily my favorite answer. Maybe they’re all just very polite.
“You’re putting more stock in a claim because you agree with other things the claimant has said” isn’t in itself doing epistemics wrong.
It’s not wrong, but it’s not locally valid. Here again, I’m going for that sweet irony.
but “if someone had a justified true belief in this claim, it would probably be hard to transmit the justification in a blog-post-sized argument” does block the inferences “no one’s written a convincing short argument for this claim, therefore it’s false”
Indeed, that inference is blocked. Actually most inferences are “blocked”. I could trust EY to be right, but personally I don’t. Therefore, EY’s post didn’t really force me to update my estimate of P(“memetic collapse”) in either direction. I should point out that my prior for “memetic collapse” is extremely low. I’m not sure if that needs an explanation or if it’s something we all agree on.
So, when I finish reading a post and my probability estimate for one of the central claims of the post does not increase, despite apparent attempts by the author to increase it, I say it’s a “bad post”. Is that not reasonable? What does your P(“memetic collapse”) look like, and how did the post affect it?
“the evidence is hard to transmit”
You have said this a lot, but I don’t really see why it should be true. Did EY even suggest so himself? Sure, it’s probably harder to transmit than evidence for climate change, but I don’t see how citing some fictional characters is the best EY can do. Of course, there is one case where evidence is very hard to transmit—that’s when evidence doesn’t exist.
That’s what I was saying earlier
Oh, hey, we talked a lot in another thread. What happened to that?
It’s not wrong, but it’s not locally valid. Here again, I’m going for that sweet irony.
If local validity meant never sharing your confidence levels without providing all your evidence for your beliefs, local validity would be a bad desideratum.
I could trust EY to be right, but personally I don’t. Therefore, EY’s post didn’t really force me to update my estimate of P(“memetic collapse”) in either direction.
Yes. I think that this is a completely normal state of affairs, and if it doesn’t happen very often then there’s probably something very wrong with the community’s health and epistemic hygiene:
Person A makes a claim they don’t have time to back up.
Person B trusts A’s judgment enough to update nontrivially in the direction of the claim. B says as much, but perhaps expresses an interest in hearing the arguments in more detail (e.g., to see if it makes them update further, or out of intellectual curiosity, or to develop a model with more working parts, or to do a spot check on whether they’re correct to trust A that much).
Person C doesn’t trust A’s (or, implicitly, B’s) judgment enough to make a nontrivial update toward the claim. C says as much, and expresses an interest in hearing the arguments in more detail so they can update on the merits directly (and e.g. learn more about A’s reliability).
This situation is a sign of a healthy community (though not a strong sign). There’s no realistic way for everyone to have the same judgments about everyone else’s epistemic reliability — this is another case where it’s just too time-consuming for everyone to fully share all their evidence, though they can do some information-sharing here and there (and it’s particularly valuable to do so with people like Eliezer who get cited so much) — so this should be the normal way of things.
I’m not just saying that B and C’s conduct in this hypothetical is healthy; I think A’s is healthy too, because I don’t think people should hide their conclusions just because they can’t always concisely communicate their premises.
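For what it’s worth, the A/B/C scenario can be put in the same numeric terms (all reliability numbers below are invented for illustration): the same assertion rationally produces different updates under different models of the asserter.

```python
# Toy model of the A/B/C scenario above; every number is hypothetical.
# Hearing "A asserts X" is evidence about X whose strength depends on
# how reliable you think A is on claims like X.

def update_on_assertion(prior, p_assert_if_true, p_assert_if_false):
    """Posterior P(X) after A asserts X, by Bayes' rule."""
    num = prior * p_assert_if_true
    return num / (num + (1 - prior) * p_assert_if_false)

prior = 0.10  # suppose B and C share this prior on the claim itself

# B models A as rarely asserting such claims when they're false:
print(update_on_assertion(prior, 0.8, 0.1))  # ~0.47, a nontrivial update

# C models A as asserting such claims almost as readily either way:
print(update_on_assertion(prior, 0.8, 0.6))  # ~0.13, barely moves

# Same assertion, different reliability models, different rational updates.
```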
Like I said earlier, I’m sympathetic to the idea that Eliezer should explicitly highlight “this is a point I haven’t defended” in cases like this. I’ve said that I think your criticisms have been inconsistent, unclear, or equivocation-prone on a lot of points, and that I think you’ve been failing a lot on other people’s ITTs here; but I continue to fully endorse your interjection of “I disagree with A on this point” (both as a belief a reasonable person can hold, and as a positive thing for people to express given that they hold it), and I also continue to think that doing more signposting of “I haven’t defended this here” may be a good idea. I’d like to see it discussed more.
You have said this a lot, but I don’t really see why it should be true.
It’s just a really common state of affairs, maybe even the default when you’re talking about most practically important temporal properties of human individuals and groups. Compare claims like “top evopsych journals tend to be more careful and rigorous than top nutrition science journals” or “4th-century AD Roman literature used less complex wordplay and chained literary associations than 1st-century AD Roman literature”.
These are the kinds of claims where it’s certainly possible to reach a confident conclusion if (as it happens) the effect size is large, but where there will be plenty of finicky details and counter-examples and compressing the evidence into an easy-to-communicate form is a pretty large project. A skeptical interlocutor in those cases could reasonably doubt the claim until they see a lot of the same evidence (while acknowledging that other people may indeed have access to sufficient evidence to justify the conclusion).
(Maybe the memetic collapse claim, at the effect size we’re probably talking about, is just a much harder thing to eyeball than those sorts of claims, such that it’s reasonable to demand extraordinary evidence before you think that human brains can reach correct nontrivial conclusions about things like memetic collapse at all. I think that sort of skepticism has some merit to it, and it’s a factor going into my skepticism; I just don’t think the particular arguments you’ve given make sense as factors.)
I’ve said that I think your criticisms have been inconsistent, unclear, or equivocation-prone on a lot of points
Elaborate please. My claims about EY’s “memetic collapse” should be clear and simple: it’s a bad idea supported by bad arguments. My claims about the reasonableness of your response to “memetic collapse” are much weaker and more complicated. This is largely because I can’t read your mind, and you haven’t shared your reasoning much. What was your prior for “memetic collapse” before you read this? What is your probability estimate after reading it? Do you agree that EY does try to make multiple arguments, and that they are all very bad? Maybe you actually agree that it is a very bad post; maybe you even downvoted it; I wouldn’t know.
There’s no realistic way for everyone to have the same judgments about everyone else’s epistemic reliability
Your example with A, B, and C is correct, but it’s irrelevant. Nobody is saying that the statement “I believe X” is bad. The problem is with the statement “I believe X because Y”, where X does not follow from Y. “Memetic collapse” is not some sidenote in this post; EY does repeatedly try to share his intuitions about it. The argument about fictional characters is the one I’ve cited, because it’s the most valid argument he’s made (twice), and I was being charitable. But he also cites, e.g., the Martin Shkreli trial and other current events, without even bothering to compare those situations to events in the past. Surely this is an implicit argument that “it’s bad now, so it was better in the past”. How is that acceptable?
The epistemic reliability of the author is useful when he provides no arguments. But when he does write arguments, you’re supposed to consider them.

You may point out that the claim “the author used a bad argument for X” does not imply “X is false”, and this is correct, but I believe that faulty arguments need to be pointed out and in some way discouraged. Surely this is what comments are for.
The level of charity you are exhibiting is ridiculous. Your arguments are fully general. You could take any post, no matter how stupid, and say “the author didn’t have time to share his hard-to-transmit evidence”, in defense of it. This is not healthy reasoning. I could believe that you’re just that charitable to everyone, but then I’m not feeling quite that much charity directed at myself. Why did you feel a need to reply to my original comment, but not a need to leave a direct comment on EY’s post?
If local validity meant never sharing your confidence levels without providing all your evidence for your beliefs, local validity would be a bad desideratum.
Local validity is a criterion that rejects the argument “climate change is true because it was hot yesterday”. EY does not consider whether the climate change advocate had the time to lay out his evidence, and he is not worried about passing that advocate’s ITT. I think half of your criticisms directed at me would fit EY just fine, so I don’t really understand why you wouldn’t say them to him.
“top evopsych journals tend to be more careful and rigorous than top nutrition science journals” or “4th-century AD Roman literature used less complex wordplay and chained literary associations than 1st-century AD Roman literature”
These aren’t actually much harder to transmit than “climate change” (i.e. “daily temperatures over the recent years tend to be higher than daily temperatures over many years before that”). Your examples are more subjective (and therefore shouldn’t be held with very high confidence), but apart from that, their evidence would look a lot like the evidence for climate change: counts and averages of some simple features, performed by a trusted source. And even if you didn’t have that, citing one example of complex wordplay and one example of the lack of it would be a stronger argument than what EY did.
Regarding “memetic collapse”, you haven’t yet explained to me why the fictional-character argument is the best EY could do. I feel like even I can find better ones myself (although it is hard to find good arguments for false claims). E.g., take some old newspaper and suggest that it is more willing to consider the outgroup’s views than current papers are.
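As a sketch of what “counts and averages of some simple features, performed by a trusted source” could look like in practice, here is a minimal example; the proxy pattern and the toy corpus are stand-ins, not real data or a real methodology:

```python
# Hypothetical sketch of "counts and averages of some simple features":
# given a dated corpus, measure a crude proxy per text and average by era.
import re
from collections import defaultdict

def feature_rate(text, pattern=r"\bwhereas\b|\bnotwithstanding\b"):
    """Occurrences of a chosen proxy feature per 1,000 words."""
    words = max(len(text.split()), 1)
    hits = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return 1000 * hits / words

def era_averages(corpus):
    """corpus: iterable of (year, text) pairs -> mean rate per decade."""
    buckets = defaultdict(list)
    for year, text in corpus:
        buckets[year // 10 * 10].append(feature_rate(text))
    return {decade: sum(rates) / len(rates)
            for decade, rates in sorted(buckets.items())}

corpus = [
    (1923, "Notwithstanding the heat, the editors printed the reply ..."),
    (2018, "It was hot yesterday, so there."),
]
print(era_averages(corpus))  # e.g. {1920: 111.1, 2010: 0.0}
```

A real version would need a large, representative sample and a defensible proxy, which is exactly the “pretty large project” Rob describes; the point is only that the shape of the evidence is ordinary.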
The level of charity you are exhibiting is ridiculous. Your arguments are fully general. You could take any post, no matter how stupid, and say “the author didn’t have time to share his hard-to-transmit evidence”, in defense of it. This is not healthy reasoning.
If Fully General Counterargument A exists, but is invalid, then any defense against Counterargument A will necessarily also be Fully General.
I don’t understand what you’re trying to say. All fully general arguments are invalid, and pointing out that an argument is fully general is a reasonable defence against it. This defence is not fully general, in the sense that it only works when the original argument is, in fact, fully general.
Rob isn’t saying that “complex ideas are hard to quickly explain” supports Yudkowsky’s claim. He’s saying that it weakens your argument against Yudkowsky’s claim. The generality of Rob’s argument should be considered relative to what he’s defending against. You are saying that since the defense can apply to any complex idea, it is fully general. But it’s a defense against the implied claim that only quick-to-explain ideas are valid.
A fully general counter-argument can attack all claims equally. A good defense against FGCAs should be capable of defending all claims just as equally. Pointing out that you can defend any complex idea by saying “complex ideas are hard to quickly explain” does not, in fact, show the defense to be invalid. (Often FGCAs can’t attack all claims equally, but only all claims within a large reference class which is guaranteed to contain some true statements. Mutatis mutandis.)
Here is what our exchange looks like from my point of view.
Me: EY’s arguments are bad.
Rob: But EY didn’t have time to transmit his evidence.
Indeed he is not saying “EY is correct”. But what is he saying? What is the purpose of that reply? In what way is it a reasonable reply to make? I’d love to hear an opinion from you as a third party.
Here is my point of view. I’m trying to evaluate the arguments, and see if I want to update P(“memetic collapse”) as well as P(“EY makes good arguments”) or P(“EY is a crackpot”), and then Rob tells me not to, while providing no substance as to why I shouldn’t. Indeed I should update P(“EY is a crackpot”), and so should you. And if you don’t, I need you to explain to me how exactly that works.
And I’m very much bothered by the literal content of the argument. Not enough time? Quickly? Where are these coming from? Am I the only one seeing the 3000-word post that surely took hours to write? You could use the “too little time” defense for a tweet, or a short comment on LW. But if you have the time to make a dozen bad arguments and emotional appeals, then surely you could also find the time for one decent argument. How long does a post have to be for Rob to actually engage with its arguments?
As I see it, Rob is defending the use of [(possibly shared) intuition?] in an argument, since not everything can be feasibly and quickly proved rigorously to the satisfaction of everyone involved:
These are the kinds of claims where it’s certainly possible to reach a confident conclusion if (as it happens) the effect size is large, but where there will be plenty of finicky details and counter-examples and compressing the evidence into an easy-to-communicate form is a pretty large project. A skeptical interlocutor in those cases could reasonably doubt the claim until they see a lot of the same evidence (while acknowledging that other people may indeed have access to sufficient evidence to justify the conclusion).
(My summary is probably influenced by my memory of Wei Dai’s top-level comment, which has a similar view, so it’s possible that Rob wouldn’t use the word “intuition”, but I think that I have the gist of his argument.)
It appears that Yudkowsky simply wasn’t trying to convince a skeptic of memetic collapse in this post—Little Fuzzy provided more of an example than a proof. This is more about connecting the concepts “memetic collapse” and “local validity” and some other things. Not every post needs to prove the validity of each concept it connects with. And in fact, Yudkowsky supported his idea of memetic collapse in the linked Facebook post. Does he need to go over the same supporting arguments in each related post?
Not every post needs to prove the validity of each concept it connects with.
Nobody ever said that it does. It’s ok not to give any arguments. It’s bad when you do give arguments and those arguments are bad. Can you confirm whether you see any arguments in the OP and whether you find them logically sound? Maybe I am hallucinating.
Yudkowsky simply wasn’t trying to convince a skeptic of memetic collapse in this post
That would be fine; I could almost believe that it’s ok to give bad arguments when the purpose of the post is different. But then, he also linked to another Facebook post which is explicitly about explaining memetic collapse, and the arguments there are no better.
Rob is defending the use of [(possibly shared) intuition?]
What is that intuition exactly? And is it really shared?
I’m a bit late to this but I’m glad to see that you were pointing this stuff out in thread. I see this post as basically containing 2 things:
1. some useful observations about how the law (and The Law) requires even-handed application to serve its purpose, and how thinking about the law at this abstract level has parallels in other sorts of logical thinking such as the sort mathematicians do a lot of. this stuff feels like the heart of the post and i think it’s mostly correct. i’m unsure how convinced i would be if i didn’t already mostly agree with it, though.
2. some stuff about how people used to be better in the past, which strikes me as basically the “le wrong generation” meme applied to Being Smart rather than Having Taste. this stuff i think is all basically false and is certainly unsupported in the text.
i think you’re seeing (2) as more central to the post than I am, so I’m less bothered by its inclusion.
But I think you’re correct to point out that it’s unsupported, and i’m in agreement that it’s probably false, and I’m glad you pointed out the irony of giving locally-invalid evidence in a post about how doing that is bad, and it seems to me that Rob spent quite a lot of words totally failing to engage with your actual criticism.