I feel like a lot of Bob’s responses are natural consequences of the position of Eliezer’s that you describe as “strong bayesianism”, except where Bob talks about what he actually recommends, and as such this post feels very uncompelling to me. Where his responses aren’t natural consequences of that position, “strong bayesianism” is correct: it seems useful for someone to actually think about what the likelihood ratio of “a random thought popped into my head” is, and similarly about how likely skeptical hypotheses are.
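(For concreteness: in the odds form of Bayes’ theorem, the likelihood ratio is the factor by which an observation shifts your prior odds. A minimal sketch, reading $E$ as “a random thought popped into my head” and $H$ as whatever hypothesis the thought supports:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}$$

The question is just how far the rightmost factor departs from 1 for observations like these.)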
Similarly,
In other words, an ideal bayesian is not thinking in any reasonable sense of the word—instead, it’s simulating every logically possible universe. By default, we should not expect to learn much about thinking based on analysing a different type of operation that just happens to look the same in the infinite limit.
seems like it just isn’t an argument against
Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation—and fails to the extent that it departs.
(and also I dispute the gatekeeping around the term ‘thinking’: when I simulate future worlds, that sure feels like thinking to me! but this is less important)
In general, I feel like I must be missing some aspect of your worldview that underlies this, because I’m seeing almost no connection between your arguments and the thesis you’re putting forward.
Just wanted to note that your point that I didn’t properly rebut “Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation—and fails to the extent that it departs.” was a good one, and it has nagged at me for a while. In general, I think that in this post he’s implying that Bayesianism is not only correct in the limit, but also relevant to the way we actually do thinking. But I agree that interpreting this particular quote in that way is a bit of a stretch, so I’ve replaced it with “You may not be able to compute the optimal [Bayesian] answer. But whatever approximation you use, both its failures and successes will be explainable in terms of Bayesian probability theory.” which more directly draws the link between methods we might actually use and the ideal Bayesian case.
I think the use of dialogues to illustrate a point of view is overdone on LessWrong. Almost always, the ‘Simplicio’ character fails to accurately represent the smart version of the viewpoint he stands in for, because the author doesn’t try sufficiently hard to pass the ITT (Ideological Turing Test) of the view they’re arguing against. As a result, not only is the dialogue unconvincing, it also risks misleading readers about the actual content of the worldview. I think this risk is greater than for posts that simply state a point of view and argue against it, because the dialogue format naively appears to give a named representative of the viewpoint their own voice, and structurally discourages disclaimers of the type “as I understand it, defenders of proposition P might state X, but of course I could be wrong”.
I’m a little confused by this one, because in your previous response you say that you think Bob accurately represents Eliezer’s position, and now you seem to be complaining about the opposite?
Actually, I think the synthesis is that many of the things Bob says are implications of Eliezer’s description, or ways of getting close to Bayesian reasoning, but they’re almost presented as concessions. I could try to go through some specific responses of your choosing, if that would be helpful.