It’s perhaps unhelpful that you never said which of your previous comments they referred to.
I’m afraid I don’t think this was very ambiguous. Like, the bit where I introduced X and Y read
I read my comment as: I think Eliezer was not saying X. I think Eliezer was saying Y. I think Y is still active here, and if you think otherwise I’m surprised.
At the time of writing, I had exactly one comment in this thread which remotely fit that schema. And, the start of that comment was
I think this is a false dichotomy; and also I do not know how it intends to engage with my comment.
where I think “this” is, in context, fairly obviously referring to the comment it replies to; and so “my comment” is fairly obviously the comment it was replying to? And then when I refer to “my comment” in the next paragraph, the obvious conclusion is that it’s the same comment, which again is the only one which remotely fit the schema outlined in that paragraph.
I predict that at least 90% of LW commenters would have parsed this correctly.
(Is this relevant? I think it must be, because a lot of this discussion is about “what did person mean when they said thing?” And, there’s no polite way to say this, but if you can’t get this right… when you accuse Eliezer of being an unclear writer, I cannot help but consider that maybe instead you’re a bad reader, and I suggest that you consider this possibility too. Of course, if I’m wrong and what I said wasn’t clear, I have to downweight my trust in my own reading comprehension.)
I suggested Bayes should replace science if it is objectively, systematically better. In other words, Bayes replacing science is something EY should have said, because it follows from the other claim.
But do you think he actually said it? I reminded you, earlier, of the Sally-Anne fallacy, the failure to distinguish between “this person said a thing” and “this person said something that implies a thing”, and I feel I must remind you again. Because if the thing you think LW has “quietly forgotten about” is something that Eliezer didn’t say, but that you think follows from something Eliezer said, that is a very different accusation!
It might be that LW and/or Eliezer don’t realize the thing follows from what Eliezer said, and this would reflect badly on LW and/or Eliezer but it wouldn’t say much about how errata work around here.
Or, of course, it might be that you are wrong, and the thing doesn’t follow from what Eliezer said.
But I can’t get “you” to make a clear statement that “individuals should use Bayes” means “Bayes is systematically better”.
I mean, I think individuals should use Bayes. Whether Bayes is “systematically better” than science is, I think, a meaningless question without specifying what it’s supposed to be better at. And even if we specify that, I don’t see that the first thing would mean the second thing. So I’m not sure what clear statement you expect me to make...
...and I don’t know why you’d care whether or not I make it? My own personal opinions here seem basically irrelevant. You accused LW-at-large of quietly forgetting something that Eliezer said. Whether that accusation holds or not has little to do with whether I personally agree with the thing.
If Bayes is better without being systematically better, if it only works for some people, then you shouldn’t replace science with it. But what does that even mean? Why would it only work for some people?
Ugh, fine. I’ve been trying to avoid this but here goes.
So first off I don’t think I know what you mean by “systematically”. Eliezer doesn’t use the word. It seems clear, at least, that he’s dubious that “teach more Bayes to Robert Aumann” would cause Robert Aumann to have more correct beliefs. So, maybe Eliezer doesn’t even think Bayes is systematically better in the sense that you mean? Again, I don’t know what that sense is, so I don’t know. But putting that aside...
One reason it might only work for some people is because some people are less intelligent than others? Like, if I tell you you’re going to need to solve a hedge maze and you’ll be judged on time but you can see the layout before you enter, then “learn the fastest route before you enter” is systematically better than “take the left path at every fork”, in the sense that you’ll get through the maze faster—if you’re capable of memorizing the fastest route, and keeping track of where you are. If you’re not capable of that, I’d advise you to stick to the left path strategy.
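To make that concrete, here’s a toy sketch in Python. It’s my own illustration, not anything from the essays: I model the maze as a binary tree of forks with the exit at one leaf, and every name in it (`Fork`, `memorized_route_steps`, `left_first_steps`) is invented for this example.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fork:
    left: Optional["Fork"] = None   # None = wall, no passage this way
    right: Optional["Fork"] = None
    exit: bool = False              # True = this node is the maze exit

def memorized_route_steps(root: Fork) -> int:
    """Steps taken if you memorized the fastest route (BFS shortest path)."""
    queue = deque([(root, 0)])
    while queue:
        node, dist = queue.popleft()
        if node.exit:
            return dist
        for child in (node.left, node.right):
            if child is not None:
                queue.append((child, dist + 1))
    raise ValueError("maze has no exit")

def left_first_steps(root: Fork) -> int:
    """Steps taken by 'take the left path first, backtrack at dead ends'."""
    steps = 0
    def walk(node: Fork) -> bool:
        nonlocal steps
        if node.exit:
            return True
        for child in (node.left, node.right):  # left first, then right
            if child is not None:
                steps += 1                     # step into the branch
                if walk(child):
                    return True
                steps += 1                     # backtrack out of it
        return False
    if not walk(root):
        raise ValueError("maze has no exit")
    return steps

# Exit hidden down the rightmost branch; Fork() with no children is a dead end.
maze = Fork(
    left=Fork(left=Fork(), right=Fork()),
    right=Fork(left=Fork(), right=Fork(exit=True)),
)
print(memorized_route_steps(maze))  # 2
print(left_first_steps(maze))       # 10
```

Same maze, same goal: memorizing is strictly faster for an agent capable of executing it, and the left-first rule still gets a less capable agent out eventually. That’s the sense in which “better” can depend on who’s executing the strategy.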
I’m not saying this is what’s going on, just… it seems like an obvious sort of thing to consider, and I find it bizarre that you haven’t considered it.
Another thing here is: what works/is optimal for a person might not work/be optimal for a group? One person making paperclips will do a bunch of different things, two people making paperclips together might only do half of those things each, but also some extra things because of coordination overhead and perhaps differing incentives.
And then it might not even be meaningful to talk about a group in the same way as an individual. An agent might assign probability 0.6 to a hypothesis; another agent might assign probability 0.2; what probability does “the group consisting of these two agents” assign? If each agent does a Bayesian update upon observing evidence, does the group also do a Bayesian update?
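Here’s a minimal numeric sketch of that question (mine, not Eliezer’s; averaging the members’ credences is just one common pooling rule, picked for illustration):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' theorem."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

p_a, p_b = 0.6, 0.2        # the two agents' credences in hypothesis H
l_h, l_not_h = 0.8, 0.3    # evidence E with P(E|H) = 0.8, P(E|~H) = 0.3

post_a = bayes_update(p_a, l_h, l_not_h)  # 0.8
post_b = bayes_update(p_b, l_h, l_not_h)  # 0.4

# Suppose "the group's probability" is the average of its members' credences.
update_then_pool = (post_a + post_b) / 2                        # 0.60
pool_then_update = bayes_update((p_a + p_b) / 2, l_h, l_not_h)  # 0.64

print(update_then_pool, pool_then_update)  # the two orders disagree
```

If every member updates by Bayes, the averaged “group credence” does not move the way a Bayesian update of the averaged prior would. So under that pooling rule the group is not itself a Bayesian agent, which is one concrete way the individual/group distinction bites.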
All of which is to say, I am baffled by your insistence that if Bayes is better than science for individuals, we should replace science-the-institution with Bayes. This seems unjustified on many levels.
And where the hell did Yudkowsky say anything of the kind?
It sounds to me like:
You said “X iff Y”.
I said, “I don’t think so, here’s one reason you might have one but not the other.” (I’ve now given a much more detailed explanation of why I think you’re mistaken.)
You’re asking where EY said anything like what I said.
This seems confused, because I never said that EY said anything like what I said. I don’t think there’s any particular reason to expect him to have done. He cannot explicitly reject every possible mistake someone might make while reading his essays.
I’m not the only person who ever thought EY meant to replace science with Bayes [...] For instance see this...please
Okay, so another person misinterpreted him in a similar way. I’m not sure what I’m supposed to make of this. Even if EY was unclear, that’s also a different criticism than the idea that LW has quietly forgotten things.
you can’t be completely sure of your interpretation either, for the same reason.
Maybe not, but, like… you brought it up? If you think you know what he meant, stand by it and defend your interpretation. If you don’t think you know what he meant, admit that outright. If you don’t think you know what he meant, but you think I don’t know either… so what? Does me also being wrong vindicate you somehow? Feels like a prosecutor being asked “do you have any reason to think the defendant was near the scene of the crime that night” and replying “okay, maybe not, but you don’t know where he was either”.
I note that you have once again declined to answer my direct questions, so I’ll try to fill in what I think you think.
Do you think Eliezer was saying “Bayes should replace science as an institution”?
You apparently don’t think he said this? (At least I don’t think you’ve justified the idea that he has. Nor have you addressed the bit I quoted above about Robert Aumann, where I think he suggests the opposite of this.) You just think it follows from something he did say. I’ve now explained in some detail both why I think you’re wrong about that, and why even if you were right, it would be important to distinguish that from him actually saying it.
Do you think Eliezer was saying “individuals should be willing to trust in Bayes over science”?
Dunno if you think this, probably not super relevant.
Do you think LessWrong-at-large currently thinks “Bayes should replace science as an institution”?
I guess you think this is not the case, and this is what you think has been “quietly forgotten about”. I agree LW-at-large does not currently think this, I just think EY never proposed it either.
Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?
Dunno if you think this either, also probably not super relevant.
I don’t think he said it clearly, and I don’t think he said anything else clearly. Believe it or not, what I am doing is charitable interpretation...I am trying to make sense of what he said. If he thinks Bayes is systematically better than science, that would imply “Bayes is better than science, so replace science with Bayes”, because that makes more sense than “Bayes is better than science, so don’t replace science with Bayes”. So I think that is what he is probably saying.
the failure to distinguish between “this person said a thing” and “this person said something that implies a thing”,
Maybe it’s the Sally-Anne fallacy, maybe it’s charitable interpretation. One should only use charitable interpretation where the meaning is unclear. Sally-Anne is only a fallacy where the meaning is clear.
If you think you know what he meant, stand by it and defend your interpretation. If you don’t think you know what he meant, admit that outright.
I am engaging in probabilistic reasoning.
Okay, so another person misinterpreted him in a similar way.
Why should I make any attempt to provide evidence, when you are going to reject it out of hand?
He cannot explicitly reject every possible mistake someone might make while reading his essays.
No, but he could do a lot better. (An elephant-in-the-room issue here is that even though he is still alive, no-one expects him to pop up and say something that actually clarifies the issue).
So first off I don’t think I know what you mean by “systematically”.
It’s about the most basic principle of epistemology, and one which the rationalsphere accepts: lucky guesses and stopped clocks are not knowledge, even when they are right, because they are not reliable and systematic.
I think, a meaningless question without specifying what it’s supposed to be better at.
Obviously, that would be the stuff that science is already doing, since EY has argued, at immense length, that it gets quantum mechanics right.
Eliezer doesn’t use the word. It seems clear, at least, that he’s dubious “teach more Bayes to Robert Aumann” would cause Robert Aumann to have more correct beliefs. So, maybe Eliezer doesn’t even think Bayes is systematically better in the sense that you mean?
If there is some objective factor about a person that makes them incapable of understanding Bayes, then a Bayesian should surely identify it. But where else has EY ever so much as hinted that some people are un-Bayesian?
Dunno if you think this either, also probably not super relevant.
Why do I have to tell you what I think in order for you to tell me what you think?
Here’s the exchange:
Me: Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?
You: Dunno if you think this either, also probably not super relevant.
Believe it or not, what I am doing is charitable interpretation...I am trying to make sense of what he said.
You may be trying to be charitable. You are not succeeding, partly because what you consider to be “making sense” does not make sense.
But also partly because you’re routinely failing to acknowledge that you’re putting your own spin on things. There is a big difference between “Eliezer said X” and “I don’t know what Eliezer was trying to say, but my best guess is that he meant X”.
After you say “Eliezer said X” and I say “I don’t think Eliezer was trying to say X, I think he was trying to say Y”, there’s a big difference between “Y implies X” and “okay, I guess I don’t really know what he was trying to say, but it seems to me that X follows from Y so my best guess is he meant X”.
If this is your idea of charitable interpretation, I wish you would be less charitable.
If he thinks Bayes is systematically better than science, that would imply “Bayes is better than science, so replace science with Bayes”, because that makes more sense than “Bayes is better than science, so don’t replace science with Bayes”.
I have explained why this is wrong.
Sally-Anne is only a fallacy where the meaning is clear.
This seems exactly wrong. Deciding “someone believes P, and P implies Q, so they must believe Q” is a fallacy because it is possible for someone to believe P, and for P to imply Q, and yet for the person not to believe Q. It’s possible even if they additionally believe that P implies Q; people have been known to be inconsistent.
This inference may be correct, mind you, and certainly someone believing P (which implies Q) is reason to suspect that they believe Q. “Fallacies as weak Bayesian evidence” and so forth. But it’s still a fallacy in the same way as “P implies Q; Q; therefore P”. It is not a valid inference in general.
That’s where the meaning is clear. Where the meaning is unclear… what you’re doing instead is “someone kind of seems to believe P? I dunno though. And P implies Q. So they definitely believe Q”. Which is quite clearly worse.
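(For the pure-logic analogue, the invalidity is easy to check by brute force. This snippet is my own, added just to make “not a valid inference in general” concrete:)

```python
from itertools import product

# Search for a truth assignment where the premises of
# "P implies Q; Q; therefore P" hold but the conclusion fails.
for p, q in product([False, True], repeat=2):
    p_implies_q = (not p) or q
    if p_implies_q and q and not p:
        print(f"counterexample: P={p}, Q={q}")
# prints: counterexample: P=False, Q=True
```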
I am engaging in probabilistic reasoning.
Are you, really? Can you point to anything you’ve said which supports this?
Like, skimming the conversation, as far as I can tell I have not once seen you express uncertainty in your conclusions. You have not once said anything along the lines of “I think Eliezer might have meant this, and if so then… but on the other hand he might have meant this other thing, in which case...”
You have, after much goading, admitted that you can’t be sure you know what Eliezer meant. But I haven’t seen you carry that uncertainty through to anything else.
I don’t know what’s going on in your head, but I would be surprised if “probabilistic reasoning” was a good description of the thing you’re doing. From the outside, it looks like the thing you’re doing might be better termed “making a guess, and then forgetting it was a guess”.
Why should I make any attempt to provide evidence, when you are going to reject it out of hand?
I didn’t reject the evidence? I agree that it is evidence someone else interpreted Eliezer in the same way you did, which as far as I can tell is what you were trying to show when you presented the evidence?
I still think it’s a misinterpretation. This should not be a surprise. It’s not like the other person gave me any more reason than you have to think that Eliezer meant the thing you think he meant. Neither you nor he appears to have actually quoted Eliezer, for example, beyond his titles. (Whereas I have provided a quote which suggests your interpretation is wrong, and which you have all-but ignored.)
And I still don’t know what I’m to make of it. I still don’t know why you think “someone else also interpreted EY in this way” is particularly relevant.
No, but he could do a lot better. (An elephant-in-the-room issue here is that even though he is still alive, no-one expects him to pop up and say something that actually clarifies the issue).
Perhaps he could, but like… from my perspective you’ve made an insane leap of logic and you’re expecting him to clear up that it’s not what he meant. But there are an awful lot of possible insane leaps of logic people can make, and surely have made when reading this essay and others. Why would he spend his time clearing up yours specifically?
It’s about the most basic principle of epistemology, and one which the rationalsphere accepts: lucky guesses and stopped clocks are not knowledge, even when they are right, because they are not reliable and systematic.
I still don’t know what you mean by “systematically”. If you expected this example to help, I don’t know why you expected that.
Obviously, that would be the stuff that science is already doing, since EY has argued, at immense length, that it gets quantum mechanics right.
What stuff specifically? Science is doing a lot of things. Is Bayes supposed to be better than science at producing jobs for grant writers?
And, I point out that this is you replying to what was mostly an aside, while ignoring the bits that seemed to me more important. You’ve ignored where I said “even if we specify that, I don’t see...”. You’ve ignored where I said “I’m not sure what clear statement you expect me to make”. You’ve ignored where I said “I don’t know why you’d care whether or not I make it”.
If there is some objective factor about a person that makes them incapable of understanding Bayes, then a Bayesian should surely identify it. But where else has EY ever so much as hinted that some people are un-Bayesian?
I don’t know why you’re asking the question, but I’m pretty sure it rests on a confused premise in ways that I’ve explained, so I’m not going to try to figure this out.
Why do I have to tell you what I think in order for you to tell me what you think?
I don’t think I’ve been coy about what I think? I don’t know what you’re getting at here. My best guess is that you wanted me to explain why I thought “individuals should be willing to trust in Bayes over science” does not imply “we should replace science-the-institution with Bayes”, and you were unwilling to answer my questions until I answered this?
If that’s what you wanted, then I’d point out firstly that I did give a brief explanation, “Bayes might give some people better answers than science, and not give other people better answers than science”. Apparently my brief answer didn’t satisfy you, but, well… why should I give a more in-depth answer when you’ve been refusing to answer my questions? I’m not convinced there’s any relationship between these things (whether or not I should answer your questions and whether or not you’ve answered mine; and whether or not you should answer my questions and whether or not I’ve answered yours). But if there is, I guess I feel like the ball’s in your court and has been for a while.
But also, I repeatedly said that I thought it was beside the point. “I do not know how it intends to engage with my comment.” “I think you’re wrong, but even if you’re right… okay, so what?” “I’m not inclined to engage on that further, because I don’t think it’s particularly relevant”. “I think this is wrong. But even if it’s right, I do not think your reply engaged with the comment it was replying to.”
If you thought it was not beside the point, you had ample opportunity to try to convince me? As far as I can tell you did not try.
And also, recall that you started this thread by leveling an accusation. I was asking my questions to try to get you to elaborate on what you had said, because I did not know what accusation you were making. If you refuse to explain the meaning of your terms, until the person asking for clarification answers other questions you ask of them...
...and if you do this while complaining about someone else not writing as clearly as you might hope, and not popping in to clarify his meanings...
then, honestly, I cannot help but wonder what you think is happening, here. It does not feel, for example, like your accusation was coming from a place of “here is a mistake LW is making, I will try to help LW see that they are making this mistake, and that will make LW a better place”.
Here’s the exchange:
Me: Do you think LessWrong-at-large currently thinks “individuals should be willing to trust in Bayes over science”?
You: Dunno if you think this either, also probably not super relevant.
What? This exchange has not happened. I asked that question, and when you declined to answer, I wrote the second line too. I hope you already know this? (Honestly though, I’m genuinely not sure that you do.) But I have no idea what you’re trying to say.
(Are you asking me whether I think LW-at-large currently thinks that? If so the answer is “yes, I already said that I think that, that’s why I mentioned Covid content.”)
I am now tapping out. I guess I might make low-effort comments going forwards, if I think that will serve my purposes. But I won’t expend significant energy talking to you on this subject unless I somehow become convinced it will be worth my time. (I hope not to expend significant energy talking to you on any subject, in fact.)
I am strong-downvoting your original accusation. I was reluctant to do so at first, because I don’t want LW to be an echo chamber; I wanted to check whether the accusation had merit. I am satisfied it has little-to-none.