I’d be more inclined to agree with him if he were God, too, but I wouldn’t say “my rejection of his ideas has something to do with the fact that he is not God”. For that matter, I would be more inclined to agree with him if he used mind control rays on me, but “I reject Eliezer’s ideas partly because he isn’t using a mind control ray on me” would be a ludicrous thing to say.
“My rejection has to do with X” connotes more than just a Bayesian probability estimate.
Both of your hypothetical statements are correct, and both of them would be bad reasons to believe something (well, I’m a bit fuzzy on the God hypothetical—is God defined as always correct?), just as the presence or absence of a PhD would be a bad reason to believe something. (This is not to say that PhDs offer zero evidence one way or the other, but rather that the evidence they offer is often overwhelmed by other, more immediate factors.) The phrasing of your comment gives me the impression that you’re trying to express disagreement with me about something, but I can’t detect any actual disagreement. Could you clarify your intent for me here? Thanks in advance.
I disagree that “you would be less likely to reject his ideas if X were true” can always be usefully described as “my rejection has something to do with X”. The statement “my rejection has something to do with X” literally means “I would be less likely to reject it if X were true”, but it connotes more than that.
All right, I can accept that. So what does it connote, by your reckoning?
Generally, that you would be less likely to reject it if X were true, and a couple of other ideas (sketched more formally below):
X is specific to your rejection—that is, the truthfulness of X affects the probability of your rejection to a much larger degree than it affects the probability of other propositions that are conceptually distant.
The line of reasoning from X to reducing the probability of your rejection proceeds through certain types of connections, such as conceptually close ones.
The effect of X is not too small, where “not too small” depends on how strongly the other factors apply.
(Human beings, of course, have lots of fuzzy concepts.)
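For concreteness, here is one rough way to pin down the literal reading and the specificity idea, writing R for “I reject the idea”, X for the proposition in question, and Q for some conceptually distant proposition (these symbols are introduced here purely for illustration):

\[
P(R \mid X) \;<\; P(R \mid \lnot X) \quad \text{(literal reading: X makes the rejection less likely)}
\]
\[
\bigl|\,P(R \mid X) - P(R \mid \lnot X)\,\bigr| \;\gg\; \bigl|\,P(Q \mid X) - P(Q \mid \lnot X)\,\bigr| \quad \text{(specificity: X moves R far more than Q)}
\]

On this reading, the God and mind-control-ray hypotheticals satisfy the first inequality but plausibly fail the specificity condition, since being God or wielding mind control rays would shift the probability of nearly every proposition, not just the rejection—which is why the phrase sounds ludicrous in those cases.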