You believe that the world exists, your memories are reliable, etc. You argue that a system that does not produce those conclusions is not good enough, because they are true and a system must show they are true. But how on earth do you know that? Assuming induction, the reliability of your memories, etc., in order to judge epistemic rules is circular.
You must admit it is absurd to claim you know the world exists with certainty; therefore you must admit you believe it exists on probability. Therefore your entire case depends on the legitimacy of probability.
Before accusing me of contradiction, remember that my position all along has drawn a distinction between faith and rational belief.
my position all along has drawn a distinction between faith and rational belief
OK, but you are not using the term “rational” in what I thought was the standard way. So the only reason what you’re saying seems contentious is your terminology.
You have not yet addressed much of what I’ve written. Automatically rejecting everything that isn’t 100% proven is a poor strategy if the agent’s goal is to be right as much as possible (a sketch below makes this concrete), yet it seems to be the only one you insist is rational. Is this merely because of how you’re using the word “rational,” or do you actually recommend “Reject everything that isn’t known 100%” as a strategy to such an agent? (From the rice-and-gasoline example I think I know your answer already—that you would not recommend the skeptical strategy.)
How should an agent proceed, if she wants to have as accurate a picture of reality as possible?
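To make the contrast between the two strategies concrete, here is a minimal sketch. It assumes only that the agent’s credences are well calibrated; the propositions, numbers, and function names are illustrative, not part of either party’s argument:

```python
import random

def simulate(n_props=100_000, seed=0):
    """Toy comparison of two belief strategies, assuming each proposition
    comes with a well-calibrated credence drawn uniformly from [0.5, 1.0]."""
    rng = random.Random(seed)
    threshold_right = 0  # strategy 1: believe whatever has credence > 0.5
    skeptic_right = 0    # strategy 2: believe nothing short of credence 1.0
    for _ in range(n_props):
        credence = rng.uniform(0.5, 1.0)   # the agent's calibrated credence
        is_true = rng.random() < credence  # whether the proposition holds
        threshold_right += is_true         # the threshold agent believed it
        # The skeptic never reaches credence 1.0, so never forms a belief
        # and is therefore never right about any proposition.
    print(f"threshold strategy: right on {threshold_right / n_props:.1%}")
    print(f"skeptical strategy: right on {skeptic_right / n_props:.1%}")

simulate()
# threshold strategy: right on ~75.0%
# skeptical strategy: right on 0.0%
```

Under the calibration assumption, the threshold believer is right about most propositions while the skeptic, by never forming a belief, is right about none. That is the sense in which blanket rejection is a poor strategy for an accuracy-seeking agent.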
You are the only one who is making assumptions without evidence and ignoring what I’m saying: that, contrary to what you think, you do not in fact know the Earth exists, that your memories are reliable, etc., and therefore that your argument, which assumes as much, falls apart.
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability. There is induction (e.g., the sun has risen X times already, so it will probably rise again tomorrow), the memory assumption (if my memories say I have done X, then that is probabilistic evidence that I have done X), the reality assumption (seeing something is probabilistic evidence for its existence), etc. None of these can be demonstrated; they are starting assumptions taken on faith.
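For the inductive example there is a standard formalization worth noting: Laplace’s rule of succession, which, under a uniform prior over the unknown chance and assuming independent trials, gives the probability of one more success after X successes in X trials. A small sketch, with a made-up sunrise count:

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: with a uniform prior on the unknown
    success rate and independent trials, the probability of success on
    the next trial after s successes in n trials is (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# "The sun has risen X times already, so it will probably rise again":
x = 10_000  # hypothetical count of remembered sunrises
print(rule_of_succession(x, x))  # 10001/10002, i.e. just shy of certainty
```

Note that this quantifies the inference without grounding it: the uniform prior, the independence of trials, and the remembered count are exactly the kind of starting assumptions at issue here.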
In the real world, as I said, it depends on what the person asked for. If I believed they were implicitly asking for a faith-based answer, I would give that; if I believed they wanted an answer based on pure reason, I would say neither.
The truth is that an agent has no way of justifying anything they believe to be true, as any justification ultimately appeals to assumptions that cannot themselves be justified.
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability.
I do not thus fail, and am aware of the specific assumptions you have in mind. I just deny that their existence implies what you say it implies.
OK. Let me try to restate your argument in terms I can better understand. Tell me if I’m getting this right.
(1) Let A = any agent and P = any proposition
(2) Define “justified belief” such that A justifiably believes P iff the following conditions hold:
a. P is provable from assumptions a, b, c, … and z.
b. A justifiably believes every a, b, c, … and z.
c. A believes P because of its proof from a, b, c, … and z.
(3) The claim “The sun will rise tomorrow” (or insert any other claim you want to talk about instead) is not provable from assumptions that any agent could be justified in believing.
(4) Therefore, for every agent, belief in the claim “The sun will rise tomorrow” is not justified.
Is this a fair characterization of your argument? If so, I’ll work from this. If not, please improve it.
Mostly right. I accept the theoretical possibility of a self-evident belief: before learning of the Evil Demon argument, for example, I considered 1+1=2 to be such a belief.
However, a circular argument is never allowable, no matter how wide the circle. Without ultimately being traceable back to self-evident beliefs (though these could be self-evident axioms of probability, at least in theory), the system has no justification.
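The recursive structure of this position can be made concrete. Under the definition in (2), checking whether a belief is justified means checking its supporting assumptions, and theirs in turn; without self-evident beliefs to bottom out in, the check either regresses forever or loops back on itself. A minimal sketch, with a hypothetical belief graph (the names and structure are illustrative only):

```python
def justified(p, supports, self_evident, seen=None):
    """A belief p is justified iff it is self-evident, or it is proved from
    assumptions that are all themselves justified -- with no circle.
    `supports` maps each proposition to the assumptions it is proved from."""
    seen = set() if seen is None else seen
    if p in self_evident:
        return True
    if p in seen:                      # circular argument: not allowed,
        return False                   # no matter how wide the circle
    if p not in supports:              # no proof at all
        return False
    return all(justified(a, supports, self_evident, seen | {p})
               for a in supports[p])

# Hypothetical belief graph: "sunrise" rests on induction, which rests on
# memory, which (circularly) rests on induction again.
supports = {
    "sunrise": ["induction"],
    "induction": ["memory"],
    "memory": ["induction"],
}
print(justified("sunrise", supports, self_evident=set()))       # False
print(justified("sunrise", {"sunrise": ["axiom"]}, {"axiom"}))  # True
```

On this picture, the skeptic’s conclusion amounts to the observation that, for claims like tomorrow’s sunrise, every candidate graph is either circular or rests on assumptions no one has shown to be self-evident.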