I didn’t downvote! And I am not shooting the messenger, as I am also sure it is not an argument about Gettier problems. I am sorry if the post offended you; maybe it is better not to mix different views of the same thing.
Well, puzzle 2 is a puzzle involving a case of knowledge: I know (T). Changing to probabilities does not solve the problem; it only changes it!
Thank you, Zed. You are right: I didn’t specify the meaning of ‘misleading evidence’. It means evidence to believe something that is false (whether or not the cognitive agent receiving such evidence knows it is misleading). Now, maybe I’m missing something, but I don’t see any silliness in thinking in terms of “belief A defeats belief B”. On the basis of experiential evidence, I believe there is a tree in front of me. But then I discover I have been drugged with LSD (a friend of mine put it in my coffee earlier, unbeknownst to me). This new piece of information defeats the justification I had for believing there is a tree in front of me: my evidence no longer supports this belief. There is good material on defeasible reasoning and justification on John Pollock’s website: http://oscarhome.soc-sci.arizona.edu/ftp/publications.html#reasoning
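To make the defeat structure concrete, here is a minimal toy sketch in Python (my own illustrative model, not Pollock’s OSCAR; all names are hypothetical). It treats the LSD discovery as an undercutting defeater: it attacks the support the experience gives the belief, without being evidence that there is no tree.

```python
# Toy model of undercutting defeat (illustrative only; not Pollock's OSCAR).
from dataclasses import dataclass, field

@dataclass
class Belief:
    content: str
    support: list[str] = field(default_factory=list)    # evidence for the belief
    defeaters: list[str] = field(default_factory=list)  # info undercutting that support

    def justified(self) -> bool:
        # Justification requires undefeated support: any undercutting
        # defeater severs the evidence-belief link.
        return bool(self.support) and not self.defeaters

tree = Belief("there is a tree in front of me",
              support=["visual experience as of a tree"])
print(tree.justified())  # True: the experience supports the belief

# Learning about the LSD undercuts the support relation itself;
# it is not evidence that there is no tree.
tree.defeaters.append("my coffee was spiked with LSD")
print(tree.justified())  # False: the justification is defeated
```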
Thank you! Well, you didn’t answer the puzzle. The puzzles are not showing that my reasoning is broken because I have evidence to believe T and ~T. The puzzles are asking what is the rational thing to do in such a case: what is the right choice from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2. I don’t think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not doesn’t seem to depend on the degree of my belief. There are cases in which the degree of my belief that P is very low and yet I am rational in believing that P: for instance, I infer a proposition from a long argument, have no counter-evidence against any premise or against the support relation between premises and conclusion, and yet have a low degree of confidence in the conclusion. Degrees of belief are a psychological matter, or at least so it appears to me. Nevertheless, even accepting the degree-of-belief model of rational doxastic change, I can conceive the puzzle as one where all the relevant beliefs ((R1), (T), (AME), etc.) have degree 1. Can you explain what is the rational thing to do in each case, and why?
two puzzles on rationality of defeat
Good afternoon, morning or night! I’m a graduate student in Epistemology. My research is about epistemic rationality, logic and AI. I’m currently investigating the general pattern of epistemic norms and their nature: whether these norms must actually be accessed by the cognitive agent to do their job or not; whether these norms in fact optimize the epistemic goal of having true beliefs and avoiding false ones, or rather just appear to do so; and still other questions. I was navigating the web looking for web-based software to calculate probabilities, and that is how I found LW, and guess what! I started to read it and couldn’t stop; each link sounds exciting and interesting (bias, probability, belief, Bayesianism...). So, I happily made an account, and I’m eager to discuss with you guys! Hope I can contribute to LW in some way. We (my research partners and I) have a blog (https://fsopho.wordpress.com) on epistemology and reasoning. We’re all together in the search for knowledge, fighting bias and requiring evidence! see ya =]
OK, got it, thank you. I have two doubts. (i) Why is a belief with degree 1 not affected by new information which is counter-evidence to that belief? Does it mean that every belief with degree 1 I have now will never be lost/defeated/changed? (I sketch the formal point I have in mind below.) (ii) The difference between what you call traditional epistemology and Bayesianism involves lots of things. I think one of them is their objectives: the traditional epistemologist and the Bayesian in general have different goals. The first is interested in stating the correct norms of reasoning and of other sources of belief (perception, memory, etc.). The second is perhaps more interested in modelling rational structures for a variety of purposes. That being the case, maybe the puzzles I brought are not of interest to Bayesians, but that does not mean Bayesianism solves the question of what is the correct thing to do in such cases. Thanks for the link (I already know Harman’s approach, which is heavily criticized by Conee and others).
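If I have the formalism right, the point behind doubt (i) is just strict conditionalization; a quick derivation (standard Bayes, nothing exotic):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

If $P(H) = 1$, then $P(\neg H) = 0$, so for any evidence $E$ with $P(E) > 0$:

$$P(H \mid E) = \frac{P(E \mid H) \cdot 1}{P(E \mid H) \cdot 1 + P(E \mid \neg H) \cdot 0} = 1.$$

So under conditionalization a credence of 1 survives every possible piece of counter-evidence, which is exactly what makes me uneasy about modelling the degree-1 version of the puzzles this way.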