If we can have no a priori knowledge, it means skepticism wins, because everything is based on faith. Given this, I try to find a means to make a priori knowledge work despite objections, both of this sort and the skeptical sort.
If this is right, then radical skepticism wins entirely. The question is whether it can be shown false on probabilities.
Yes and no. I do believe it hopeless, but I search because I’m looking anyway.
What does “skepticism wins” mean?
If what is right—that you can’t be sure you’re not dreaming? Of course that’s right; how would you ever tell? Any method of distinguishing you came up with can’t possibly be relied upon, because if you are dreaming, then that method only works in your dreamworld. In other words, it can distinguish between meta-dreams and dreams, but not between dreams and reality. (And there’s no real reason to think it can even do the former, because hey, it’s a dreamworld after all, and no rules apply.)
You search because you’re looking? What does that mean?
Here’s a question. I assume you are familiar with the probability-theoretic notion of maximum entropy. By “radical skepticism” do you mean the thesis that the only possible rational belief-state is maximum entropy?
It means we cannot be justified in knowing anything, and are isolated from any objective reality. The basic rules of probability, from which we assume the reliability of memory, the senses, etc., are taken on religious-style faith.
I’ve been trying to find a way around this, but you are probably right.
I mean I am checking again and again, just in case, because I don’t like the idea that skepticism is right.
I’m not familiar with that notion.
It means we cannot be justified in knowing anything
Justification depends on a function that tells you whether something is justified.
I can easily justify a belief with the fact that a teacher taught it to me.
In what sense do you think it cannot be justified, and why do you think that framework of justification has some sort of reality to it?
Something is epistemically justified if, as you said, it has some sort of reality to it, not by coincidence but because the rule reliably shows what is real. I am trying to find a framework with some sort of reality to it, and that requires dealing with skepticism.
If you don’t believe in reality in the first place, how could you check whether something has reality?
You need to look at reality to check whether something is real. There is no way around it. Your idea of justification has no solid basis in reality if you don’t believe in it in the first place.
You don’t get to be certain about justification and be a skeptic about reality.
There are certain types of Buddhism that you could call skeptical about reality, but they would also not accept the concept of justification in which you happen to believe.
I don’t believe in the reality around us, not on a rational level; that does not mean I don’t believe there are things which are real (there may be, anyway). I just have no idea what they are.
Justification is DEFINED in a certain manner, and I think the best one to use is the definition I have given. That is how I can be certain about justification (or at least what I am calling justification) and a skeptic about reality.
OK, let’s skip to (4), as that might help you formulate your skepticism more precisely. “Maximum entropy” has more than one meaning, but here it basically means a belief-state that assigns an equal probability to all possibilities. In other words, it’s the probability distribution you would use if you had zero information. For example, if I ask you whether glappzug is thuxreq or not thuxreq, you can’t do better than to just pick an answer randomly. You have no clue to go on, so just get the choice over with and move on.
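To make the notion concrete, here is a minimal sketch in Python (the numbers are invented for illustration): Shannon entropy is maximized by the uniform distribution, which is why the zero-information belief-state assigns equal probability to every alternative.

```python
import math

def shannon_entropy(dist):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Zero information about "is glappzug thuxreq?" means equal probability
# on both alternatives -- the maximum-entropy belief-state.
print(shannon_entropy([0.5, 0.5]))  # 1.0 bit, the maximum for two outcomes
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: any clue lowers entropy
```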
A thorough-going skeptic, it seems to me, would have to think that all choices are just like that one. Even when we think we have information, we don’t really (because we could be dreaming!). Therefore there’s no reason to discriminate between any pair of alternatives, or among any set of them.
When you say “skepticism wins,” do you mean that for any set of alternative claims, there is never any reason to discriminate among them?
Probability itself being somehow valid is something I do not think rationally legitimate. Therefore, in a sense yes but in a sense no.
In that case, I don’t know how to proceed until you formulate your skepticism more precisely. What exactly is it that is not justified, if “skepticism wins”?
Nothing is justified if skepticism wins. Unless we have irrational faith in at least one starting assumption (and it is irrational since we have no basis for making the assumption), it is impossible to determine anything except our lack of knowledge.
So, on reflection, yes. There is never any valid rational reason to discriminate between possibilities, because nothing can demonstrate the Evil Demon Argument false.
OK. I am still not exactly sure what you mean by “justification.” Let’s put this in more concrete terms. Imagine the following:
Sitting down to dinner, you see three items on the table before you: a bowl of rice, a bowl of gasoline, and a coin. Suppose further that you prefer rice over gasoline. You have three choices: eat the rice, drink the gasoline, or flip the coin and let the result determine which bowl’s contents to consume.
What does the Evil Demon Argument (and all in its family) say about the rationality of each choice, compared to the others (assuming it says anything at all)?
What advice would you personally give someone sitting at such a dinner table, and why?
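(To put numbers on the table, here is a toy expected-utility sketch; the utilities are invented and nothing in the thought experiment hangs on them. Its point: the three choices tie only in the pure maximum-entropy belief-state, where the agent puts zero credence on appearances tracking reality.)

```python
# Invented utilities for the sketch: rice is good, gasoline is very bad.
U = {"eat_rice": 10, "drink_gasoline": -100}

def expected_utility(action, p):
    """p = credence that things are as they appear. Under the skeptical
    alternative, assume actions have no predictable effect (utility 0)."""
    return p * U[action]

def coin_flip(p):
    return 0.5 * expected_utility("eat_rice", p) + \
           0.5 * expected_utility("drink_gasoline", p)

for p in (0.9, 0.1, 0.001, 0.0):
    print(p, expected_utility("eat_rice", p),
          expected_utility("drink_gasoline", p), coin_flip(p))
# Given ANY nonzero credence that appearances track reality, rice
# strictly wins; only at p = 0 do all three choices tie at zero.
```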
The Evil Demon Argument says that you don’t know that it’s actually those three things before you. Further, it says that you don’t know that eating the rice will actually have the effects you’re used to, or that your memories can be used to remember your preferences. Etc etc...
On reason, I would give no advice. On faith, I would say to have the rice.
So, which advice would you give?
In the real world, it depends. With most people in practice, assuming they have enough of an understanding of me to know I am a skeptic on these things and are implicitly asking for one kind of answer or the other, I give whichever they are asking for. Therefore I normally give advice on faith.
I guess it’s hard for me to understand what’s irrational about advising them to eat the rice (as you indicated you would do). It seems like the only sane choice. I’m not sure exactly what you mean by “faith”, but if advising people to eat the rice is based on it, then it must be compatible with rationality, right?
Right—choose the rice, assuming you (or they) want to live. That seems like the only sane choice, doesn’t it?
Maybe this is a problem of terminology. You seem to be using the labels “faith” and “reason” in certain ways. In particular, you seem to be using the label “reason” to refer to the following of certain rules, rules which you can’t see how to justify.
Maybe instead of focusing on those rules (whatever they may happen to be), you should focus on why the rules are valuable in the first place (if they are). Presumably, it’s because they reliably lead to success in achieving one’s goals. The worth of the rules is contingent on their usefulness; it’s not rational to believe only things you can prove with absolute certainty, because that would mean believing nothing, doing nothing, dying early and having no fun, and nobody wants that!
(In case you haven’t read it, you might want to check out Newcomb’s Problem and Regret of Rationality, from 2008.)
My conception of reason is based on determining what is true, completely and entirely irrespective of pragmatism. To call skeptical arguments irrational and call an anti-skeptical case rational would mean losing sight of the important fact that ONLY pragmatic considerations lead to the rejection of skepticism.
Rationality, to me, is defined as the hypothetical set of rules which reliably determine truth, not by coincidence, but because they must determine truth by their nature. Anything which does not follow said rules is irrational. Even if skepticism is false, believing in the world is irrational for me (and for you, based on what I’ve heard from you and my definition), because nothing necessarily leads to a correlation between the senses and reality.
One of the rules of my rationality is that pragmatic considerations are not to be taken into account, as what is useful to believe and what is true have no necessary correlation. This applies to anything which has no necessary correlation with what is true.
What you’re talking about is pragmatic, not rational. It is important to be aware of the distinction between what one may ‘believe’ for some reason and what is likely to be actually true, completely independent of such beliefs.
what is useful to believe and what is true have no necessary correlation
You seem to be referring to the distinction between instrumental and epistemic rationality. Yes, they are different things. The case I am trying to make does not depend on a conflation of the two, and works just fine if we confine ourselves to epistemic rationality, as I will attempt to show below.
OK, so I think your labeling system, which is clearly different from the one to which I am accustomed, looks like this:
rationality = a set of rules which reliably and necessarily determine truth
and
X is irrational = X does not follow rationality
If that’s how you want to use the labels in this thread, fine. But it seems that an agent that believed only things that were known with infinite certainty would suffer from a severe truth deficiency. Even if such an agent managed to avoid directly accepting any falsehoods, she would fail to accept a vast number of correct beliefs. This is because much of the world is knowable—just not with absolute certainty. She would not have a very accurate picture of the world.
And this is not just because of “pragmatics”; even if the only goal is to maximize true beliefs, it makes no sense to filter out every non-provable proposition, because doing so would block too many true beliefs.
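A toy simulation of that claim (all numbers are invented, and it idealizes the agent as perfectly calibrated, i.e., her credence in each claim equals the probability that the claim is true):

```python
import random

random.seed(0)

# A toy world of 1000 claims; the agent's credence in each is calibrated.
credences = [random.random() for _ in range(1000)]

def accuracy(threshold):
    """Expected (true beliefs - false beliefs) among accepted claims."""
    accepted = [c for c in credences if c >= threshold]
    return len(accepted), sum(c for c in accepted) - sum(1 - c for c in accepted)

print(accuracy(1.0))  # (0, 0): the certainty-only filter accepts nothing
print(accuracy(0.9))  # roughly (100, ~90): many truths gained, few errors risked
```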
Perhaps an analogy with nutrition would be helpful. Imagine a person who refused to ingest anything that wasn’t first totally proven to be nutritious. Whenever she was served anything (even if she had eaten the same thing hundreds of times before!), she had to subject it to a series of time-consuming, expensive, and painstaking tests.
Would this be a good idea, from a nutritional point of view? No. For one thing, it would take way too long—possibly forever. And secondly (and this is the aspect I’m trying to focus on), lots of nutritious things cannot be proven so. Is this bite of pasta going to be nutritious? What about the next one? And the one after that? A person who insisted on such a diet would not get many nutrients at all, because so many things would not pass the test (and because the person would spend so much time testing and so little time eating).
Now, how about a person’s epistemic diet—does it make sense, from a purely epistemic perspective, for an agent to believe only what she can prove with absolute certainty? No. For one thing, it would take way too long—possibly forever. And secondly, lots of true things cannot be proven so, at least not with the kind of transcendent certainty you seem to be talking about. So an agent who insisted on such a filter would end up blocking much truth, thus “learning” a highly distorted map.
If the agent is interested in truth, she should ditch that filter and find a standard that lets her accept more correct claims about the world, even if they aren’t totally proven.
By the way, have you read many of the Sequences? They are quite helpful and much better written than my comments. I’d say to start here. This one and this one also heavily impinge on our topic.
This assumes what the entire thread is about: that probability is a legitimate means for discussing reality. This presumes a lot of axioms of probability, such as that if you see X it is more likely real than an illusion, and that induction is valid.
The appeal to the absence of many true beliefs is irrelevant, as you have no means to determine truth beyond skepticism.
I do not think anything I wrote above depends on using probability to discuss reality.
Please elaborate. I believe it is not only relevant, but decisive.
You believe that the world exists, that your memories are reliable, etc. You argue that a system that does not produce those conclusions is not good enough, because they are true and a system must show they are true. But how on earth do you know that? Assuming induction, the reliability of your memories, etc. in order to judge epistemic rules is circular.
You must admit it is absurd to claim you know the world exists with certainty; therefore you must admit you believe it exists on probability. Therefore your entire case depends on the legitimacy of probability.
Before accusing me of contradiction, remember my position all along has a distinction between faith and rational belief.
my position all along has a distinction between faith and rational belief
OK, but you are not using the term “rational” in (what I thought was) the standard way. So the only reason what you’re saying seems contentious is your terminology.
You have not yet addressed much of what I’ve written. Automatically rejecting everything that isn’t 100% proven is a poor strategy if the agent’s goal is to be right as much as possible, yet it seems to be the only one you insist is rational. Is this merely because of how you’re using the word “rational,” or do you actually recommend “Reject everything that isn’t known 100%” as a strategy to such a person? (From the rice-and-gasoline example I think I know your answer already—that you would not recommend the skeptical strategy.)
How should an agent proceed, if she wants to have as accurate a picture of reality as possible?
You are the only one who is making assumptions without evidence and ignoring what I’m saying: that contrary to what you think, you do not in fact know that the Earth exists, that your memories are reliable, etc., and therefore that your argument, which assumes such, falls apart.
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability. There is induction (e.g., the Sun has risen X times already, so it will probably rise again tomorrow), the Memory assumption (if my memories say I have done X, then that is probabilistic evidence that I have done X), the Reality assumption (seeing something is probabilistic evidence for its existence), etc. None of these can be demonstrated; they are starting assumptions taken on faith.
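(For what it’s worth, the induction example here has a standard probabilistic form, Laplace’s rule of succession; a minimal sketch, which of course presupposes exactly the axioms under dispute:)

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: with a uniform prior over the unknown success rate,
    after s successes in n trials, the probability of success on the next
    trial is (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# "Sun risen X times already, so it will probably rise again tomorrow":
print(rule_of_succession(10000, 10000))  # 10001/10002, near certainty
```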
In the real world, as I said, it depends on what the person asked for. If I believe they were implicitly asking for a faith-based answer, I would give that; if I believe they were asking for an answer based on pure reason, I would say neither.
The truth is that anything an agent believes to be true they have no way of justifying, as any justification ultimately appeals to assumptions that cannot themselves be justified.
You also fail to comprehend that probabilities have implicit axioms which must be accepted in order to accept probability.
I do not thus fail, and am aware of the specific assumptions you have in mind. I just deny that their existence implies what you say it implies.
OK. Let me try to restate your argument in terms I can better understand. Tell me if I’m getting this right.
(1) Let A = any agent and P = any proposition
(2) Define “justified belief” such that A justifiably believes P iff the following conditions hold:
a. P is provable from assumptions a, b, c, … and z.
b. A justifiably believes every a, b, c, … and z.
c. A believes P because of its proof from a, b, c, … and z.
(3) The claim “The sun will rise tomorrow” (or insert any other claim you want to talk about instead) is not provable from assumptions in which any agent could be justified in believing.
(4) Therefore, for every agent, belief in the claim “The sun will rise tomorrow” is not justified.
Is this a fair characterization of your argument? If so, I’ll work from this. If not, please improve it.
Mostly right. I accept the theoretical possibility of a self-evident belief; before learning of the Evil Demon argument, for example, I considered 1+1=2 to be such a belief.
However, a circular argument is never allowable, no matter how wide the circle. Without ultimately being traceable back to self-evident beliefs (though these can be self-evident axioms of probability, at least in theory), the system doesn’t have any justification.
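(The schema in (2), amended this way, can be sketched as a recursive check; `assumptions_of` and `is_self_evident` below are hypothetical placeholders, not a worked-out theory of justification.)

```python
def justified(belief, assumptions_of, is_self_evident, path=frozenset()):
    """Sketch of the amended schema: a belief is justified iff it is
    self-evident, or provable from assumptions that are all themselves
    justified. Revisiting a belief on its own chain is a circle, which
    is never allowable; and if a chain neither circles nor bottoms out
    in self-evident beliefs, the recursion regresses without end."""
    if belief in path:
        return False  # circular argument, no matter how wide the circle
    if is_self_evident(belief):
        return True
    premises = assumptions_of(belief)
    if not premises:
        return False  # not self-evident and resting on nothing
    return all(
        justified(a, assumptions_of, is_self_evident, path | {belief})
        for a in premises
    )

# Example: a chain that bottoms out in a self-evident axiom is justified.
axioms = {"A"}
chain = {"P": ["Q"], "Q": ["A"], "A": []}
print(justified("P", lambda b: chain[b], lambda b: b in axioms))  # True
```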