Similarly, you can eliminate the sentence ‘rational’ from almost any sentence in which it appears. “It’s rational to believe the sky is blue”, “It’s true that the sky is blue”, and “The sky is blue”, all convey exactly the same information about what color you think the sky is—no more, no less.
I might be missing the point of this paragraph, but it seems to me that “it’s rational to believe the sky is blue” and “the sky is blue” do not convey the same information. I can conceive of situations in which it is rational to believe the sky is blue, and yet the sky is not blue. For example, the sky is green, but superintelligent alien pranksters install undetected nanotech devices into my optic and auditory nerves/brain, altering my perceptions and memories so that I see the green sky as blue, and hear (read) the word “blue” where other people have actually said (written) the word “green” when describing the sky.
Under these circumstances, all my evidence would indicate the sky is blue—and so it would be rational to believe that the sky is blue. And yet the sky is not blue. But the first statement doesn’t feel like I am generalising over cognitive algorithms in the sense I took from the big paragraph.
Am I missing or misinterpreting something?
When discussing these claims in the third person, as you are now, cognitive algorithms are being invoked as algorithms. But we all know that “p” and “Alice thinks that p” are hardly reducible to each other; it’s first-person items like “I believe that p” that are deflationary. So while it is clearly the case that you can imagine situations where the sky is not blue but it would be epistemically rational to believe that it is, that does not demonstrate situations where one could justifiably claim only one of “the sky is blue” and “it is rational to believe that the sky is blue” (indeed the justifiability of the former just is the content of the latter.)
“I believe that ‘P’.” is only deflationary because it treats belief as if it were binary, but it isn’t. “I have 0.8 belief in ‘P’.” is certainly not the same as “It is true that ‘P’.” Yes? One is a claim about the world, and one is a claim about my model of the world.
I am pretty sure that p and “it is rational to believe that p” can come apart even from a first-person perspective. At least, they can come apart if belief is cashed out in terms of inclination to action in a single case.
Let me illustrate. Suppose there are five live hypotheses to account for some evidence, and suppose that I assign credences as follows:
C(h1) = 0.1; C(h2) = 0.35; C(h3) = 0.25; C(h4) = 0.15; C(h5) = 0.1; and C(other) = 0.05.
Further suppose that I am in a situation where I need to take some action, and each of the five hypotheses recommends a different action in the circumstances.
Assuming that by “belief” one means something like “what one proposes to act on in forced situations,” then it is rational to believe h2. It is rational to act as if h2 were true. But one need not think that h2 is true. It is more likely to be true than any of the other options, but given the credences above, one ought to think that h2 is false. That is, it is much more likely on the evidence that h2 is false than that it is true.
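To make the arithmetic behind this explicit, here is a minimal sketch in Python using only the credences quoted above (the hypothesis names are placeholders from the comment): picking the single most probable hypothesis to act on is not the same as judging that hypothesis more likely true than false.

```python
# Credences from the comment above: five live hypotheses plus a catch-all.
credences = {
    "h1": 0.10, "h2": 0.35, "h3": 0.25,
    "h4": 0.15, "h5": 0.10, "other": 0.05,
}

# If forced to act on exactly one hypothesis, act on the most probable one.
act_on = max(credences, key=credences.get)   # -> "h2"

# But the credence in that hypothesis is still below 1/2, so on the same
# evidence its negation is more probable than the hypothesis itself.
p_true = credences[act_on]                   # 0.35
p_false = 1.0 - p_true                       # 0.65

print(act_on, p_true, p_false)               # h2 0.35 0.65
```

On these numbers, “act as though h2” and “h2 is more likely true than false” clearly come apart.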
“It’s rational to believe that #32 will win” and “It’s rational to bet on #32” are not the same thing. In fact, they’re using different senses of “rational”, as we usually carve things up.
Thus in your example, “it’s rational to believe h2” and “h2” are still equivalent, but “act as though h2” is not.
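A worked version of the betting case, with made-up numbers (the 1-in-38 win probability and 40-to-1 net payout below are illustrative assumptions, not anything stated in the thread): a bet can be instrumentally rational even when it is epistemically rational to believe that “#32 will win” is false.

```python
# Hypothetical roulette-style bet on #32; the odds and payout are assumptions
# chosen only to illustrate the two senses of "rational".
p_win = 1 / 38        # credence that #32 wins (the epistemic question)
net_payout = 40       # profit per unit staked if #32 wins
stake = 1             # amount lost if #32 does not win

# Instrumental question: is placing the bet a good action?
expected_value = p_win * net_payout - (1 - p_win) * stake   # ~ +0.08 per unit

# Epistemic question: should one believe "#32 will win"?
believe_it_will_win = p_win > 0.5                           # False

print(round(expected_value, 3), believe_it_will_win)        # 0.079 False
```

On these (assumed) numbers it is rational to bet on #32 while it is rational to believe that #32 will lose, which is the sense in which the two sentences use different “rational”s.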
Could you elaborate on the mistake you think I’m making? I’m not seeing it, yet.
I think the intended meaning is as follows:
Similarly, you can eliminate the [word] ‘rational’ from almost any sentence [you utter]. [Saying] “It’s rational to believe the sky is blue”, [saying] “It’s true that the sky is blue”, and [saying] “The sky is blue”, all convey exactly the same information about what color you think the sky is—no more, no less.
As you pointed out, the first sentence is not logically equivalent to the second and third (the second and third are logically equivalent according to Tarski’s semantic theory of truth).
Alternately, if the sky IS blue, and someone objects to jumping to that conclusion, you can point out that the obvious conclusion is in fact rational in addition to claiming that it’s correct.
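For reference, the equivalence invoked above between “It’s true that the sky is blue” and “The sky is blue” is an instance of Tarski’s T-schema (for any sentence S, “S” is true if and only if S); a minimal LaTeX rendering of that instance:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Tarski's T-schema (disquotational truth), instantiated for the sentence
% under discussion: "S" is true if and only if S.
\[
  \text{``The sky is blue'' is true} \iff \text{the sky is blue}
\]
\end{document}
```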
“the sky is blue” and “it is rational to believe that the sky is blue” (indeed the justifiability of the former just is the content of the latter.)
This. You have created an example that shows that it is utterly impossible for a creature with our limited primate capabilities to actually know The Truth. What we can do is pay close attention to what we think and why we think that to be so.
In your case, anyone who doesn’t know about these Loki-like aliens (I really wanted to call them ‘Lokiens’) has an overwhelming amount of good reason to believe the sky is blue. You might be wrong, and it could possibly lead to bad consequences in the future. But the alternative, believing something to be true without good reason, is crazy.