So, which advice would you give?
In the real world, it depends. With most people in practice, assuming they understand me well enough to know I am a skeptic on these things and are implicitly asking for one kind of answer or the other, I give that answer. Therefore I normally give advice based on faith.
I guess it’s hard for me to understand what’s irrational about advising them to eat the rice (as you indicated you would do). It seems like the only sane choice. I’m not sure exactly what you mean by “faith”, but if advising people to eat the rice is based on it, then it must be compatible with rationality, right?
Right—choose the rice, assuming you (or they) want to live. That seems like the only sane choice, doesn’t it?
Maybe this is a problem of terminology. You seem to be using the labels “faith” and “reason” in certain ways. In particular, you seem to be using the label “reason” to refer to following certain rules, rules which you can’t see how to justify.
Maybe instead of focusing on those rules (whatever they may happen to be), you should focus on why the rules are valuable in the first place (if they are). Presumably, it’s because they reliably lead to success in achieving one’s goals. The worth of the rules is contingent on their usefulness; it’s not rational to believe only things you can prove with absolute certainty, because that would mean believing nothing, doing nothing, dying early and having no fun, and nobody wants that!
(In case you haven’t read it, you might want to check out Newcomb’s Problem and Regret of Rationality, from 2008.)
My conception of reason is based on determining what is true, completely and entirely irrespective of pragmatism. To call skeptical arguments irrational and call an anti-skeptical case rational would mean losing sight of the important fact that ONLY pragmatic considerations lead to the rejection of skepticism.
Rationality, to me, is defined as the hypothetical set of rules which reliably determine truth, not by coincidence, but because they must determine truth by their nature. Anything which does not follow said rules is irrational. Even if skepticism is false, believing in the world is irrational for me (and for you, based on what I’ve heard from you and my definition), because nothing necessarily leads to a correlation between the senses and reality.
One of the rules of my rationality is that pragmatic considerations are not to be taken into account, as what is useful to believe and what is true have no necessary correlation. The same applies to anything which has no necessary correlation with what is true.
What you’re talking about is pragmatic, not rational. It is important to be aware of the distinction between what one may ‘believe’ for some reason and what is likely to be actually true, completely independent of such beliefs.
You seem to be referring to the distinction between instrumental and epistemic rationality. Yes, they are different things. The case I am trying to make does not depend on a conflation of the two, and works just fine if we confine ourselves to epistemic rationality, as I will attempt to show below.
OK, so I think your labeling system, which is clearly different from the one to which I am accustomed, looks like this:
rationality = a set of rules which reliably and necessarily determine truth
and
X is irrational = X does not follow rationality
If that’s how you want to use the labels in this thread, fine. But it seems that an agent that believed only things that were known with infinite certainty would suffer from a severe truth deficiency. Even if such an agent managed to avoid directly accepting any falsehoods, she would fail to accept a vast number of correct beliefs. This is because much of the world is knowable—just not with absolute certainty. She would not have a very accurate picture of the world.
And this is not just because of “pragmatics”; even if the only goal is to maximize true beliefs, it makes no sense to filter out every non-provable proposition, because doing so would block too many true beliefs.
Perhaps an analogy with nutrition would be helpful. Imagine a person who refused to ingest anything that wasn’t first totally proven to be nutritious. Whenever she was served anything (even if she had eaten the same thing hundreds of times before!), she had to subject it to a series of time-consuming, expensive, and painstaking tests.
Would this be a good idea, from a nutritional point of view? No. For one thing, it would take way too long—possibly forever. And secondly (and this is the aspect I’m trying to focus on), lots of nutritious things cannot be proven so. Is this bite of pasta going to be nutritious? What about the next one? And the one after that? A person who insisted on such a diet would not take in many nutrients at all, because so many things would not pass the test (and because she would spend so much time testing and so little time eating).
Now, how about a person’s epistemic diet—does it make sense, from a purely epistemic perspective, for an agent to believe only what she can prove with absolute certainty? No. For one thing, it would take way too long—possibly forever. And secondly, lots of true things cannot be proven so, at least not with the kind of transcendent certainty you seem to be talking about. So an agent who insisted on such a filter would end up blocking much truth, thus “learning” a highly distorted map.
If the agent is interested in truth, she should ditch that filter and find a standard that lets her accept more correct claims about the world, even if they aren’t totally proven.
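To put the same point in toy-model terms: the numbers, thresholds, and evidence model below are made up purely to illustrate the trade-off. An agent who accepts nothing short of certainty acquires no true beliefs at all, while an agent who uses a high but finite evidence threshold acquires many true beliefs and very few false ones.

```python
import random

random.seed(0)

# Toy model: each claim is either true or false, and the agent's evidence
# yields a confidence in (0, 1) that tends to be high for true claims and
# low for false ones, but never reaches certainty.
def sample_claim():
    truth = random.random() < 0.5
    confidence = random.betavariate(8, 2) if truth else random.betavariate(2, 8)
    return truth, confidence

claims = [sample_claim() for _ in range(10_000)]

def beliefs_acquired(threshold):
    accepted = [(truth, conf) for truth, conf in claims if conf >= threshold]
    true_beliefs = sum(1 for truth, _ in accepted if truth)
    false_beliefs = len(accepted) - true_beliefs
    return true_beliefs, false_beliefs

print(beliefs_acquired(1.0))   # certainty-only agent: (0, 0) -- no beliefs at all
print(beliefs_acquired(0.95))  # threshold agent: a few hundred true beliefs, ~0 false
```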
By the way, have you read many of the Sequences? They are quite helpful and much better written than my comments. I’d say to start here. This one and this one also heavily impinge on our topic.
This assumes the very thing the entire thread is about: that probability is a legitimate means for discussing reality. That presumes a lot of axioms of probability, such as that if you see X it is more likely real than an illusion, and that induction is valid.
The appeal to the absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.
I do not think anything I wrote above depends on using probability to discuss reality.
Please elaborate. I believe it is not only relevant, but decisive.
You believe that the world exists, that your memories are reliable, etc. You argue that a system that does not produce those conclusions is not good enough, because they are true and a system must show they are true. But how on earth do you know that? Assuming induction, the reliability of your memories, etc., in order to judge epistemic rules is circular.
You must admit it is absurd to claim you know with certainty that the world exists; therefore you must admit you believe it exists on the basis of probability. Therefore your entire case depends on the legitimacy of probability.
Before accusing me of contradiction, remember that my position all along has drawn a distinction between faith and rational belief.
OK, but you are not using the term “rational” in what I thought was the standard way. So the only reason what you’re saying seems contentious is your terminology.
You have not yet addressed much of what I’ve written. Automatically rejecting everything that isn’t 100% proven is a poor strategy if the agent’s goal is to be right as much as possible, yet it seems to be the only one you insist is rational. Is this merely because of how you’re using the word “rational,” or do you actually recommend “Reject everything that isn’t known 100%” as a strategy to such a person? (From the rice-and-gasoline example I think I know your answer already—that you would not recommend the skeptical strategy.)
How should an agent proceed, if she wants to have as accurate a picture of reality as possible?
You are the only one who is making assumptions without evidence and ignoring what I’m saying: that, contrary to what you think, you do not in fact know that the Earth exists, that your memories are reliable, etc., and that therefore your argument, which assumes as much, falls apart.
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability. There is induction (e.g., the sun has risen X times already, so it will probably rise again tomorrow), the Memory assumption (if my memories say I have done X, then that is probabilistic evidence that I have done X), the Reality assumption (seeing something is probabilistic evidence of its existence), etc. None of these can be demonstrated; they are starting assumptions taken on faith.
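The inductive step mentioned here (“the sun has risen X times already, so it will probably rise again tomorrow”) does have a standard probabilistic formalization, Laplace’s rule of succession, but the formalization only goes through once a prior is assumed, which is one way of putting the same point: the inference rests on a starting assumption. A minimal sketch, where the uniform prior and the helper function are illustrative choices rather than anything from this exchange:

```python
# Laplace's rule of succession: the posterior predictive probability of one
# more success after `successes` successes in `trials` trials, assuming a
# uniform Beta(1, 1) prior over the unknown chance of success. That prior is
# the assumption the whole inference rests on.
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# "The sun has risen on every one of the last 10,000 mornings":
print(rule_of_succession(10_000, 10_000))  # ~0.9999 -- high, but only given the prior
```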
In the real world, as I said, it depends on what the person asked for. If I believe they were implicitly asking for a faith-based answer, I would give that; if I believe they were asking for an answer based on pure reason, I would say neither.
The truth is that an agent has no way of justifying anything they believe to be true, as any justification ultimately appeals to assumptions that cannot themselves be justified.
As for the claim that I fail to comprehend that probabilities have implicit axioms: I do not thus fail, and am aware of the specific assumptions you have in mind. I just deny that their existence implies what you say it implies.
OK. Let me try to restate your argument in terms I can better understand. Tell me if I’m getting this right.
(1) Let A = any agent and P = any proposition
(2) Define “justified belief” such that A justifiably believes P iff the following conditions hold:
a. P is provable from assumptions a, b, c, … and z.
b. A justifiably believes every a, b, c, … and z.
c. A believes P because of its proof from a, b, c, … and z.
(3) The claim “The sun will rise tomorrow” (or insert any other claim you want to talk about instead) is not provable from assumptions that any agent could be justified in believing.
(4) Therefore, for every agent, belief in the claim “The sun will rise tomorrow” is not justified.
Is this a fair characterization of your argument? If so, I’ll work from this. If not, please improve it.
Mostly right. I accept the theoretical possibility of a self-evident belief; before learning of the Evil Demon argument, for example, I considered 1+1=2 to be such a belief.
However, a circular argument is never allowable, no matter how wide the circle. Without ultimately being traceable back to self-evident beliefs (though these can be self-evident axioms of probability, at least in theory), the system doesn’t have any justification.
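Putting this reply together with the restatement above, the justification condition amounts to a recursive requirement, roughly (a sketch only; the notation J and SE is introduced here just to summarize, and was not agreed in the exchange):

\[
J(P) \;\iff\; SE(P) \;\lor\; \exists\, a_1,\dots,a_n \,\bigl[\, \{a_1,\dots,a_n\} \vdash P \;\land\; J(a_1) \land \dots \land J(a_n) \,\bigr]
\]

where J(P) means “the agent justifiably believes P” and SE(P) means “P is self-evident to the agent”, with the further requirement, from the reply above, that the resulting tree of justifications contains no circle, however wide, and so must bottom out in self-evident beliefs.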