I can conceive of the puzzle as one in which all the relevant beliefs - (R1), (T), (AME), etc. - have degree 1.
Well, in that case, learning RM & TM leaves these degrees of belief unchanged, as an agent who updates via conditionalization cannot change a degree of belief that is 0 or 1. That’s just an agent with an unfortunate prior that doesn’t allow him to learn.
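To spell the point out (a generic derivation, not specific to the puzzle: let H be any proposition believed to degree 1 and E any evidence with P(E) > 0):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H)\cdot 1}{P(E \mid H)\cdot 1 + P(E \mid \neg H)\cdot 0} = 1.$$

So conditionalization maps a prior of 1 to a posterior of 1 (and, symmetrically, a prior of 0 to a posterior of 0), whatever E is.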
More generally, I think you might be missing the point of the replies you’re getting. Most of them are not-very-detailed hints that you get no such puzzles once you discard traditional epistemological notions such as knowledge, belief, justification, defeaters, etc. (or change the subject from them) and adopt Bayesianism (here, probabilism & conditionalization & algorithmic priors). I am confident this is largely true, at least for your sorts of puzzles. If you want to stick to traditional epistemology, a reasonable-seeming reply to puzzle 2 (more within the traditional epistemology framework) is here: http://www.philosophyetc.net/2011/10/kripke-harman-dogmatism-paradox.html
OK, got it, thank you.
I have two questions.
(i) Why is a belief with degree 1 not affected by new information that is counter-evidence to that belief? Does that mean every belief with degree 1 that I have now will never be lost/defeated/changed?
(ii) The difference between what you call traditional epistemology and Bayesianism involves many things. I think one of them is their objectives: the traditional epistemologist and the Bayesian generally have different goals. The first is interested in laying down the correct norms for reasoning and for other sources of belief (perception, memory, etc.). The second is perhaps more interested in modelling rational structures for a variety of purposes.
That being the case, the puzzles I brought may not be of interest to Bayesians, but that does not mean Bayesianism solves the question of what the correct thing to do is in such cases. Thanks for the link (I already know Harman’s approach, which is heavily criticized by Conee and others).
Why is a belief with degree 1 not affected by new information that is counter-evidence to that belief?
That’s how degree 1 is defined: a belief so strong that no evidence can persuade one to abandon it. (You shouldn’t have such beliefs, needless to say.)
The difference between what you call traditional epistemology and Bayesianism involves many things. I think one of them is their objectives: the traditional epistemologist and the Bayesian generally have different goals. The first is interested in laying down the correct norms for reasoning and for other sources of belief (perception, memory, etc.). The second is perhaps more interested in modelling rational structures for a variety of purposes.
I don’t see the difference. Bayesian epistemology is a set of prescriptive norms of reasoning.
That being the case, the puzzles I brought may not be of interest to Bayesians, but that does not mean Bayesianism solves the question of what the correct thing to do is in such cases.
Bayesianism explains the problem away: the problem is there only if you use notions like defeat or knowledge and insist on building your epistemology on them. Your puzzle shows that this is impossible. The fact that Bayesianism is free of Gettier problems is an argument for Bayesianism and against “traditional epistemology”.
To make an imprecise analogy: mathematicians long wondered what the infinite sum 1-1+1-1+1-1… is equal to. When calculus was invented, people saw that this was just a confused question. Some puzzles are best answered by rejecting the puzzle altogether.
(i) That remark concerns a Bayesian agent, or more specifically an agent who updates by conditionalization. It’s a property of conditionalization that no amount of evidence the agent updates upon can change a degree of belief of 0 or 1. Intuitively, the closer a probability gets to 1, the less it will decrease in absolute terms in response to counterevidence of a given strength; 1 corresponds to the limit at which it won’t decrease at all, no matter the counterevidence.
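A hypothetical numerical illustration of that limit (the numbers are mine, not from the thread): suppose the counterevidence E has a fixed likelihood ratio of 1:10 in favour of H, i.e. E is ten times likelier if H is false. In odds form, conditionalization gives

$$O(H \mid E) = O(H) \times \frac{P(E \mid H)}{P(E \mid \neg H)} = O(H) \times \frac{1}{10}.$$

A prior of 0.9 (odds 9:1) then falls to about 0.47; a prior of 0.99 falls only to about 0.91; a prior of 0.999 only to about 0.99; and a prior of exactly 1 (infinite odds) stays at 1, however strong the likelihood ratio against H is made.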
(ii) I’m well aware that the aims of most epistemologists and most Bayesian philosophers diverge somewhat, but there is substantial overlap even within philosophy. Bayesianism is very much applicable (and in fact applied) to norms of belief change, and your puzzles are examples of questions that wouldn’t even occur to a Bayesian.