Yo, deductive logic is a special case of probabilistic logic in the limit that your probabilities for things go to 0 and 1, i.e. you’re really sure of things. If I’m really sure that Socrates is a man, and I’m really sure that all men are mortal, then I’m really sure that Socrates is mortal. However, if I am 20% sure that Socrates is a space alien, my information is no longer well-modeled by deductive logic, and I have to use probabilistic logic.
The point is that the conditions for deductive logic have already broken down if you can deduce both T and ~T. This breakdown doesn’t (always) mean you can no longer reason. It does mean you should stop trying to use deductive logic, and use probabilistic logic instead. Probabilistic logic is, for various reasons, the right way to reason from incomplete information—deductive logic is just an approximation for when you’re really sure of things. Try phrasing your problems with degrees of belief expressed as probabilities, follow the rules, and you will find that the apparent problem has vanished into thin air.

Welcome to LessWrong!
Thank you!
Well, you didn’t answer the puzzles. The puzzles are not meant to show that my reasoning is broken because I have evidence to believe both T and ~T. The puzzles ask what the rational thing to do is in such a case—what is the right choice from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2.

I don’t think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not doesn’t seem to depend on the degree of my belief. There are cases in which the degree of my belief that P is very low and yet I am rational in believing that P. There are cases where I infer a proposition from a long argument, have no counter-evidence against any premise or against the support relation between premises and conclusion, and yet have a low degree of confidence in the conclusion. Degrees of belief are a psychological matter, or at least so it appears to me.

Nevertheless, even accepting the degree-of-belief model of rational doxastic change, I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc. - have degree 1. Can you explain what is the rational thing to do in each case, and why?
So, in order to answer the puzzles, you have to start with probabilistic beliefs, rather than with binary true-false beliefs. The problem is currently somewhat like the question “is it true or false that the sun will rise tomorrow?” To a very good approximation, the sun will rise tomorrow. But the earth’s rotation could stop, or the sun could get eaten by a black hole, or several other possibilities that mean that it is not absolutely known that the sun will rise tomorrow. So how can we express our confidence that the sun will rise tomorrow? As a probability—a big one, like 0.999999999999.

Why not just round up to one? Because although the gap between 0.999999999999 and 1 may seem small, it actually takes an infinite amount of evidence to bridge that gap. You may know this as the problem of induction.
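One way to make the “infinite amount of evidence” remark concrete: in odds form, Bayes’ rule multiplies your odds by a likelihood ratio, and probability 1 corresponds to infinite odds, so no finite run of finite-strength evidence ever closes the gap. A minimal Python sketch (the likelihood ratio of a million is an illustrative number, not something from the thread; exact fractions are used so floating-point rounding doesn’t hide the gap):

```python
from fractions import Fraction

def update(p, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
    odds = p / (1 - p)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# The 0.999999999999 from above, held exactly so rounding doesn't quietly turn it into 1.
p = Fraction(999999999999, 10**12)

# Apply an extremely strong piece of confirming evidence
# (likelihood ratio of a million -- an illustrative number).
p = update(p, Fraction(10**6))

print(p < 1)          # True: still strictly below 1
print(float(1 - p))   # ~1e-18: the gap shrank a lot, but it did not close
# Probability 1 corresponds to infinite odds, so no finite product of
# finite likelihood ratios ever reaches it.
```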
So anyhow, let’s take problem 1. How confident are you in P1, P2, and P3? Let’s say about 0.99 each—you could make a hundred such statements and only get one wrong, or so you think. So how about T? Well, if it follows from P1, P2 and P3, then you believe it with degree about 0.97.
Now Ms. Math comes and tells you you’re wrong. What happens? You apply Bayes’ theorem. When something is wrong, Ms. Math can spot it 90% of the time, and when it’s right, she only thinks it’s wrong 0.01% of the time. So Bayes’ rule says to multiply your probability of ~T by 0.9/(0.03×0.9 + 0.97×0.0001), giving an end result of T being true with probability only about 0.004.
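For concreteness, here is that calculation written out as a short Python sketch, using the figures assumed above (0.99 for each premise, 90% and 0.01% for Ms. Math’s error rates):

```python
# Prior: T was derived from P1, P2, P3, each believed to degree 0.99.
p_T = 0.99 ** 3                  # about 0.97
p_notT = 1 - p_T                 # about 0.03

# Ms. Math says "wrong" (call that event W).
p_W_given_notT = 0.9             # she catches 90% of wrong results
p_W_given_T = 0.0001             # she miscalls 0.01% of right results

# Bayes' theorem.
p_W = p_W_given_notT * p_notT + p_W_given_T * p_T
p_T_given_W = p_W_given_T * p_T / p_W

print(round(p_T_given_W, 4))     # about 0.0036: T is now very probably false
```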
Note that at no point did any beliefs “defeat” other ones. You just multiplied them together. If Ms. Math had talked to you first, and then you had gotten your answer after, the end result would be the same. The second problem is slightly trickier because not only do you have to apply probability theory correctly, you have to avoid applying it incorrectly. Basically, you have to be good at remembering to use conditional probabilities when applying (AME).
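The “order doesn’t matter” point can be checked numerically. Treating your derivation and Ms. Math’s verdict as two pieces of evidence that are conditionally independent given T (an assumption made for this sketch), conditionalizing on them in either order multiplies the same likelihood ratios into the same prior. The base prior of 0.5 and the likelihood ratio for the derivation are made-up illustrative numbers:

```python
def update(p, likelihood_ratio):
    # Multiply the odds for T by a likelihood ratio, return the new probability.
    odds = (p / (1 - p)) * likelihood_ratio
    return odds / (1 + odds)

base_prior = 0.5            # made-up starting credence in T
lr_derivation = 32.34       # made-up: chosen so that 0.5 updates to roughly 0.97
lr_ms_math = 0.0001 / 0.9   # Ms. Math says "wrong": P(W|T) / P(W|~T), figures from above

a = update(update(base_prior, lr_derivation), lr_ms_math)  # derivation first, verdict second
b = update(update(base_prior, lr_ms_math), lr_derivation)  # verdict first, derivation second
print(a, b)  # equal up to floating-point noise: the order of updating doesn't matter
```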
I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc. - have degree 1.
I suspect that you only conceive that you can conceive of that. In addition to the post linked above, I would suggest reading this, and this, and perhaps a textbook on probability. It’s not enough for something to be a belief for it to be a probability—it has to behave according to certain rules.
I can’t believe people apply Bayes’ theorem when confronted with counter-evidence. What evidence do we have to believe that Bayesian probability theories describe the way we reason inductively?

Oh, if you want to model what people actually do, I agree it’s much more complicated. Merely doing things correctly is quite simple by comparison.
It doesn’t necessarily describe the way we actually reason (because of cognitive biases that affect our ability to make inferences), but it does describe the way we should reason.
I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc. - have degree 1.
Well, in that case, learning RM & TM leaves these degrees of belief unchanged, as an agent who updates via conditionalization cannot change a degree of belief that is 0 or 1. That’s just an agent with an unfortunate prior that doesn’t allow him to learn.
More generally, I think you might be missing the point of the replies you’re getting. Most of them are not-very-detailed hints that you get no such puzzles once you discard traditional epistemological notions such as knowledge, belief, justification, defeaters, etc. (or change the subject from them) and adopt Bayesianism (here, probabilism & conditionalization & algorithmic priors). I am confident this is largely true, at least for your sorts of puzzles. If you want to stick to traditional epistemology, a reasonable-seeming reply to puzzle 2 (more within the traditional epistemology framework) is here: http://www.philosophyetc.net/2011/10/kripke-harman-dogmatism-paradox.html
OK, got it, thank you.
I have two questions.
(i) Why is a belief with degree 1 not affected by new information that is counter-evidence to it? Does that mean that every belief I now hold with degree 1 will never be lost/defeated/changed?
(ii) The difference between what you call traditional epistemology and Bayesianism involves lots of things. I think one of them is their objectives—the traditional epistemologist and the Bayesian in general have different goals. The first one is interested in stating the correct norms for reasoning and for other sources of belief (perception, memory, etc.). The second one is perhaps more interested in modelling rational structures for a variety of purposes.
That being the case, the puzzles I brought are perhaps not of interest to Bayesians—but that does not mean Bayesianism solves the question of what is the correct thing to do in such cases. Thanks for the link (I already know Harman’s approach, which is heavily criticized by Conee and others).
Why is a belief with degree 1 not affected by new information that is counter-evidence to it?
That’s how degree 1 is defined: a belief so strong that no evidence can persuade one to abandon it. (You shouldn’t have such beliefs, needless to say.)
The difference between what you call traditional epistemology and Bayesianism involves lots of things. I think one of them is their objectives—the traditional epistemologist and the Bayesian in general have different goals. The first one is interested in stating the correct norms for reasoning and for other sources of belief (perception, memory, etc.). The second one is perhaps more interested in modelling rational structures for a variety of purposes.
I don’t see the difference. Bayesian epistemology is a set of prescriptive norms of reasoning.
That being the case, the puzzles I brought are perhaps not of interest to Bayesians—but that does not mean Bayesianism solves the question of what is the correct thing to do in such cases.
Bayesianism explains the problem away—the problem is there only if you use notions like defeat or knowledge and insist on building your epistemology on them. Your puzzle shows that doing so is impossible. The fact that Bayesianism is free of Gettier problems is an argument for Bayesianism and against “traditional epistemology”.
To make an imprecise analogy, mathematicians long wondered what the infinite sum 1-1+1-1+1-1… is equal to. Once convergence was given a precise definition, people saw that this was just a confused question. Some puzzles are best answered by rejecting the puzzle altogether.
(i) That remark concerns a Bayesian agent, or more specifically an agent who updates by conditionalization. It’s a property of conditionalization that no amount of evidence that an agent updates upon can change a degree of belief of 0 or 1. Intuitively, the closer a probability gets to 1, the less it will decrease in absolute value in response to a given strength of counterevidence; 1 corresponds to the limit at which it won’t decrease at all from any counterevidence (see the sketch below).
(ii) I’m well aware that the aims of most epistemologists and most Bayesian philosophers diverge somewhat, but there is substantial overlap even within philosophy; Bayesianism is very much applicable (and in fact applied) to norms of belief change, and your puzzles are examples of questions that wouldn’t even occur to a Bayesian.
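Here is a small numerical sketch of point (i), with conditionalization written out directly; the 0.1 and 0.9 likelihoods stand in for “a given strength of counterevidence” and are illustrative numbers, not anything from the thread:

```python
def conditionalize(prior, p_E_given_H, p_E_given_notH):
    # P(H | E) by Bayes' theorem.
    p_E = p_E_given_H * prior + p_E_given_notH * (1 - prior)
    return p_E_given_H * prior / p_E

# One fixed piece of counterevidence E: nine times likelier if H is false than if H is true.
for prior in [0.9, 0.99, 0.999, 0.9999, 1.0]:
    posterior = conditionalize(prior, p_E_given_H=0.1, p_E_given_notH=0.9)
    print(prior, "->", round(posterior, 4), " drop:", round(prior - posterior, 4))

# Approximate output:
# 0.9    -> 0.5     drop: 0.4
# 0.99   -> 0.9167  drop: 0.0733
# 0.999  -> 0.9911  drop: 0.0079
# 0.9999 -> 0.9991  drop: 0.0008
# 1.0    -> 1.0     drop: 0.0   (a degree of belief of 1 never moves; the update is
#                                only defined here because P(E|H) > 0)
```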