Puzzle 1

RM is irrelevant.

The concept of “defeat”, in any case, is not necessarily silly or inapplicable to a particular (game-based) understanding of reasoning, which has always been understood to be discursive; so I do not think it is inadequate as an autobiographical account. But it is not how one characterizes what is ultimately a false conclusion that was previously held true. One need not commit oneself to a particular choice in the case of either “victory” or “defeat”, for these are not themselves choices to be made.
Puzzle 2
Statements ME and AME are both false generalizations. One cannot know evidence for (or against) a given theorem (or apodosis from known protases) in advance based on the supposition that the apodosis is true, for that would constitute a circular argument. I.e.:
T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false. It is also false to suppose that a human being is always capable of reasoning correctly under all states of knowledge, or even that they possess such perfect knowledge of a particular body of information as to reason validly.
MF is also false as a generalization.
In general, one should not be concerned with how “misleading” a given amount of evidence is. To reason on those grounds, one could suppose a given bit of evidence would always be “misleading” because one “knows” that the contrary of what that bit of evidence suggests is always true. (The fact that there are people who do in fact “reason” this way, as the superabundant store of historical examples shows, continuing to believe in a false conclusion because they “know” that the evidence against it is false or “misleading”, does not at all validate this mode of reasoning; rather, it points up certain psychological proclivities that suggest how fallacious their reasoning may be. This would not itself show that the course of necessary reasoning is incorrect, only that those who attempt to exercise it do so very poorly.) In the case that one is dealing with a theorem, it must be true, provided that the reasoning is in fact valid, for theorematic reasoning is based on any axioms of one’s choice (even though it is not corollarial).!! However, if the apodosis concerns a statement of evidence, there is room for falsehood, even if the reasoning is valid, because the premisses themselves are not guaranteed to be always true.
The proper attitude is to understand that the reasoning prior to exposure to evidence/reasoning from another subject (or one’s own inquiry) may in fact be wrong, however necessary the reasoning itself may appear. No amount of evidence suffices for its absolute truth, no matter how valid the reasoning is. Note that evidence here is indeed characteristic of observational criteria, but the reasoning based thereon is not properly deductive, even if the reasoning is essentially necessary in character. Note that deductive logic is concerned with reasoning to true conclusions under the assumption that the relevant premisses are true; if one is taking into account the possibility of premisses which may not always be true, then such reasoning is probabilistic (and necessary) reasoning.
!! This, in effect, resolves puzzle 1. Namely, if the theorem is derived based on valid necessary reasoning, then it is true. If it isn’t valid reasoning, then it is false. If “defeat” consists in being shown that one’s initial stance was incorrect, then yes, it is essential that one takes the stance of having been defeated. Note that puzzle 2 is solved in fundamentally the same manner, despite the distracting statements ME, AME, and MF, on account of the nature of theorems. Probabilities nowhere come into account, and the employment of Bayesian reasoning is an unnecessary complication. If one does not take the stance of having been defeated, then there is no hope for that person to be convinced of anything of a logical (necessary) character.
“T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false.”
Actually, I think if “I know T is true” means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 1 to the evidence being misleading and not even worth taking into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief. So I’d say the problem is a wrong question.
Edited: I meant probability 1 of misleading evidence, not 0.
The presumption of the claim “I know T is true” (and that evidence that it is false is false) is false precisely in the case that the reasoning used to show that T (in this case a theorem) is true is invalid. Were T not a theorem, then probabilistic reasoning would in fact apply, but it does not. (And since it doesn’t, it is irrelevant to pursue that path. But, in short, the fact that it is a theorem should lead us to understand that the premisses’ truth is not the issue at hand here, thus probabilistic reasoning need not apply, and so there is no issue of T’s being probably true or false.) Furthermore, it is completely wide of the mark to suggest that one should apply this or that probability to the claims in question, precisely because the problem concerns deductive reasoning. All the non-deductive aspects of the puzzles are puzzling distractions at best. In essence, if a counterargument comes along demonstrating that T is false, then it necessarily would involve demonstrating that invalid reasoning was somewhere committed in someone’s having arrived at the (fallacious) truth of T. (It is necessary that one be led to a true conclusion given true premisses.) Hence, one need not be concerned with the epistemic standing of the truth of T, since it would have clearly been demonstrated to be false. And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that commitment was fundamentally invalid. Valid reasoning is always valid, no matter what one may think of the reasoning; and one may invalidly believe in the validity of an invalid conclusion. Such is human fallibility.
So I’d say the problem is a wrong question.
No, I think it is a good question, and it is easy to be led astray by not recognizing where precisely the problem fits in logical space, if one isn’t being careful. Amusingly (if not disturbingly), some of the most up-voted posts are precisely those that get this wrong and thus fail to see the nature of the problem correctly. However, the way the problem is framed does lend itself to misinterpretation, because a demonstration of the falsity of T (namely, that it is invalid that T is true) should not be treated as a premiss in another apodosis; a valid demonstration of the falsity of T is itself a deductive conclusion, not a protasis proper. (In fact, the way it is framed, the claim ~T is equivalent to F, such that the claim that [F, P1, P2, and P3] implies ~T is really a circular argument, but I was being charitable in my approach to the puzzles.) But oh well.
In essence, if a counterargument comes along demonstrating that T is false, then it necessarily would involve demonstrating that invalid reasoning was somewhere committed in someone’s having arrived at the (fallacious) truth of T.
I think I see your point, but if you allow for the possibility that the original deductive reasoning is wrong, i.e. deny logical omniscience, don’t you need some way to quantify that possibility, and in the end wouldn’t that mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?
Unless you assume that you can’t make a mistake in the deductive reasoning, T being a theorem of the premisses is a theory to be proven within the Bayesian framework, with Bayesian evidence, not anything special.
And if you do assume that you can’t make a mistake in the deductive reasoning, I think there’s no sense in paying attention to any contrary evidence.
...if you allow for the possibility that the original deductive reasoning is wrong...
I want to be very clear here: a valid deductive reasoning can never be wrong (i.e., invalid); only those who exercise such reasoning are liable to error. This does not pertain to logical omniscience per se, because we are not here concerned with the logical coherence of the total collection of beliefs a given person (like the one in the example) might possess; we are only concerned with T. And humans, in any case, do not always engage in deduction properly, owing to many psychological, physical, and other limitations.
don’t you need some way to quantify that possibility, and in the end wouldn’t that mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?
No, the possibility that someone will commit an error in deductive reasoning is in no need of quantification. That is only to increase the complexity of the puzzle. And by the razor, what is done with less is in vain done with more.
Unless you assume that you can’t make a mistake in the deductive reasoning, T being a theorem of the premisses is a theory to be proven within the Bayesian framework, with Bayesian evidence, not anything special.
To reiterate, an invalid deductive reasoning is not a deduction with which we should concern ourselves. The prior claim of T, having been shown false (by F), should no longer be elevated to the status of a logical deduction. By the measure of its invalidity, we know full well the valid deduction ~T. In other words, to make a mistake in deductive reasoning is not to reason deductively!
And if you do assume that you can’t make a mistake in the deductive reasoning, I think there’s no sense in paying attention to any contrary evidence.
This is where the puzzle introduced needless confusion. There was no real evidence. There was only the brute fact of the validity of ~T as introduced by a person who showed the falsity/invalidity of T. That is how the puzzles’ solution comes to a head: through a clear understanding of the nature of deductive reasoning.
Sorry, I think I still don’t understand your reasoning.
First, I have the beliefs P1, P2 and P3, then I (in an apparently deductively valid way) reason that [C1] “T is a theorem of P1, P2, and P3”, therefore I believe T.
Either my reasoning that finds out [C1] is valid or invalid. I do think it’s valid, but I am fallible.
Then the Authority asserts F, I add F to the belief pool, and we (in an apparently deductively valid way) reason [C2] “~T is a theorem of F, P1, P2, and P3”, therefore we believe ~T.
Either our reasoning that finds out [C2] is valid or invalid. We do think it’s valid, but we are fallible.
Is it possible to conclude C2 without accepting I made a mistake when reasoning C1 (therefore we were wrong to think that line of reasoning was valid)? Otherwise we would have both T and ~T as theorems of F, P1, P2, and P3, and we should conclude that the premisses lead to contradiction and should be revised; we wouldn’t jump from believing T to believing ~T.
But the story doesn’t say the Authority showed a mistake in C1. It says only that she made an (apparently valid) reasoning using F in addition to P1, P2, and P3.
If the argument of the Authority doesn’t show the mistake in C1, how should I decide whether to believe C1 has a mistake, C2 has a mistake, or the premisses F, P1, P2, and P3 actually lead to contradiction, with both C1 and C2 being valid?
I think Bayesian reasoning would inevitably enter the game in that last step.
C1 is a presumption, namely, a belief in the truth of T, which is apparently a theorem of P1, P2, and P3. As a belief, its validity is not what is at issue here, because we are concerned with the truth of T.
F comes in, but is improperly treated as a premiss to conclude ~T, when it is equivalent to ~T. Again, we should not be concerned with belief, because we are dealing with statements that are either true or false. Either T or ~T, but not both, can be true (by the laws of excluded middle and non-contradiction).
Hence C2 is another presumption with which we should not concern ourselves. Belief has no influence on the outcome of T or ~T.
For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).
Second bullet: in the case of a theorem, to show the falsity of a conclusion (of a theorem) is to show that it is invalid. To say there is a mistake is a straightforward corollary, given the nature of deductive inference, of the fact that an invalid step was committed.
Third bullet: I assume that the problem is stated in general terms, for had Ms. Math shown that T is false in explicit terms (contained in F), then the proper form of ~T would be: F → ~T. Note that it is wrong to frame it the following way: F, P1, P2, and P3 → ~T. It is wrong because F states ~T. There is no “decision” to be made here! Bayesian reasoning in this instance (if not many others) is a misapplication and obfuscation of the original problem from a poor grasp of the nature of deduction.
(N.B.: However, if the nature of the problem were to consist merely in being told by some authority a contradiction of what one supposes to be true, then there is no logical necessity for us suddenly to switch camps and begin to believe the contradiction over one’s prior conviction. Appeal to authority is a logical fallacy, and if one supposes Bayesian reasoning is a help there, then there is much for that person to learn of the nature of deduction proper.)
Let me give you an example of what I really mean:
Note statements P, Q, and Z:
(P) Something equals something and something else equals that same something such that both equal each other.
(Q) This something equals that. This other something also equals that.
(Z) The aforementioned somethings equal each other.
It is clear that Z follows from P and Q, no? In effect, you’re forced to accept it, correct? Is there any “belief” involved in this setting? Decidedly not. However, let’s suppose we meet up with someone who disagrees and states: “I accept the truths of P and Q but not Z.”
Then we’ll add the following to help this poor fellow:
(R) If P and Q are true, then Z must be true.
They may respond: “I accept P, Q, and R as true, but not Z.”
And so on ad infinitum. What went wrong here? They failed to reason deductively. We might very well be in the same situation with T, where
(P and Q) are equivalent to (P1, P2, and P3) (namely, all of these premisses are true), such that whatever Z is, it must be equivalent to the theorem (which would in this case be ~T, if Ms. Math is doing her job and not merely deigning to inform the peons at the foot of her ivory tower).
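The regress just sketched is, in essence, Lewis Carroll’s tortoise argument; schematically (the labels R_1, R_2, … are mine):

```latex
\begin{aligned}
&P,\; Q \;\therefore\; Z\\
&R_1:\; (P \land Q) \to Z\\
&R_2:\; (P \land Q \land R_1) \to Z\\
&R_3:\; (P \land Q \land R_1 \land R_2) \to Z \qquad \cdots
\end{aligned}
```

No added conditional ever closes the gap; at some point the inference must simply be performed, which is precisely the deductive act our recalcitrant fellow refuses.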
P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion.
If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn’t defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he’ll have departed from the sphere of logic completely.
Note that if Ms. Math merely says “T is false” (F) such that F is really ~T, then the form [F, P1, P2, and P3] implies ~T is really a circular argument, for the conclusion is already assumed within the premisses. But, as I said, I was being charitable with the puzzles and not assuming that that was being communicated.
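The circularity can even be checked mechanically. Below is a toy truth-table sketch (the propositional encoding is mine, purely illustrative): if F just is ~T, then [F, P1, P2, P3] entails ~T, but so does F alone; the premisses P1, P2, P3 do no work, because the conclusion is already among the premisses.

```python
from itertools import product

def entails(premisses, conclusion, atoms):
    """Brute-force semantic entailment: premisses |= conclusion
    iff every valuation satisfying all premisses satisfies the conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premisses) and not conclusion(v):
            return False
    return True

atoms = ["T", "P1", "P2", "P3"]
F     = lambda v: not v["T"]   # F taken to be equivalent to ~T
P1    = lambda v: v["P1"]
P2    = lambda v: v["P2"]
P3    = lambda v: v["P3"]
not_T = lambda v: not v["T"]

print(entails([F, P1, P2, P3], not_T, atoms))  # True
print(entails([F], not_T, atoms))              # True: F alone suffices
print(entails([P1, P2, P3], not_T, atoms))     # False: P1..P3 did no work
```

That the conclusion is already one of the premisses is exactly what makes the argument circular rather than informative.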
I guess it wasn’t clear: C1 and C2 referred to the reasonings as well as the conclusions they reached. You say belief is of no importance here, but I don’t see how you can talk about “defeat” if you’re not talking about justified believing.
For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).
I’m not sure if I understood what you said here. You agree with what I said in the first bullet or not?
Second bullet: in the case of a theorem, to show the falsity of a conclusion (of a theorem) is to show that it is invalid. To say there is a mistake is a straightforward corollary, given the nature of deductive inference, of the fact that an invalid step was committed.
Are you sure that’s correct? If there’s a contradiction within the set of axioms, you could find T and ~T following valid deductions, couldn’t you? Proving ~T and proving that the reasoning leading to T was invalid are only equivalent if you assume the axioms are not contradictory. Am I wrong?
P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn’t defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he’ll have departed from the sphere of logic completely.
The problem I see here is: it seems like you are assuming that the proof of ~T shows clearly the problem (i.e. the invalid reasoning step) with the proof of T I previously reasoned. If it doesn’t, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3. I don’t see why logic would force me to accept ~T instead of believing there’s a mistake I can’t see in the proof Ms. Math showed me, or, more plausibly, to conclude that the axioms are contradictory.
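That last possibility is worth taking seriously: under ordinary truth-table semantics an inconsistent premiss set entails every formula, so both derivations could be valid at once. A toy brute-force check (the encoding is mine, purely illustrative):

```python
from itertools import product

def entails(premisses, conclusion, atoms):
    """Brute-force semantic entailment over all valuations of the atoms."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premisses) and not conclusion(v):
            return False
    return True

atoms = ["T", "A"]
consistent    = [lambda v: v["A"]]                         # satisfiable
contradictory = [lambda v: v["A"], lambda v: not v["A"]]   # A and ~A

# No valuation satisfies the contradictory set, so it entails
# everything vacuously: both T and ~T come out as "theorems".
print(entails(contradictory, lambda v: v["T"], atoms))      # True
print(entails(contradictory, lambda v: not v["T"], atoms))  # True
# A consistent set does not behave this way:
print(entails(consistent, lambda v: v["T"], atoms))         # False
```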
...I don’t see how you can talk about “defeat” if you’re not talking about justified believing
“Defeat” would consist solely in admitting to ~T instead of T. It is not a matter of belief per se.
You agree with what I said in the first bullet or not?
No, I don’t.
The problem I see here is: it seems like you are assuming that the proof of ~T shows clearly the problem (i.e. the invalid reasoning step) with the proof of T I previously reasoned. If it doesn’t, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3.
T cannot be derived from [P1, P2, and P3], but ~T can on account of F serving as a corrective that invalidates T. The only assumptions I’ve made are 1) Ms. Math is not an ivory tower authoritarian and 2) that she wouldn’t be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.
Anyway, I suppose there’s no more to be said about this, but you can ask for further clarification if you want.
2) that she wouldn’t be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.
Oh, now I see what you mean. I interpreted F as a new premiss, a new axiom, not a whole argument about the (mistaken) reasoning that proved T. For example, (Wikipedia tells me that) the axiom of determinacy is inconsistent with the axiom of choice. If I had proved T in ZFC, and Ms. Math asserted the Axiom of Determinacy and proved ~T in ZFC+AD, and I didn’t know beforehand that AD is inconsistent with AC, I would still need to find out what the problem was.
I still think this is more consistent with the text of the original post, but now I understand what you meant by “I was being charitable with the puzzles”.
I’m interested in what you have to say, and I’m sympathetic (I think), but I was hoping you could restate this in somewhat clearer terms. Several of your sentences are rather difficult to parse, like “And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that commitment was fundamentally invalid.”
Read my latest comments. If you need further clarity, ask me specific questions and I will attempt to accommodate them.
But to give some additional note on the quote you provide: look to reductio ad absurdum as a case where it would be incorrect to aver the truth of what is really contradictory in nature.
If it still isn’t clear, ask yourself this: “does it make sense to say something is true when it is actually false?” Anyone who answers this in the affirmative is either being silly or needs to have their head checked (for some fascinating stuff, indeed).
Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. To deny the assumptions and conventions one has set for oneself is to work no longer within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond that they may not.
Short answer: Because if you assign probability 1 to a belief, then it is impossible for you to change your mind even when confronted with a mountain of opposing evidence. For the full argument, see Infinite Certainty.
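That point can be seen directly from Bayes’ rule; here is a minimal numeric sketch (the likelihood numbers are made up for illustration):

```python
def posterior(prior, p_e_given_t, p_e_given_not_t):
    """Bayes' rule: P(T | E) from the prior P(T) and the likelihoods
    P(E | T) and P(E | ~T)."""
    num = p_e_given_t * prior
    return num / (num + p_e_given_not_t * (1.0 - prior))

# Evidence E that is 99x likelier if T is false than if T is true:
p_e_given_t, p_e_given_not_t = 0.01, 0.99

print(posterior(0.95, p_e_given_t, p_e_given_not_t))  # drops well below 0.5
print(posterior(1.0,  p_e_given_t, p_e_given_not_t))  # stays exactly 1.0
```

With a prior of exactly 1, the (1 - prior) term zeroes out the alternative hypothesis, so no likelihood ratio, however extreme, can move the posterior. That is the sense in which probability-1 beliefs are unrevisable.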
The concept of “defeat”, in any case, is not necessarily silly or inapplicable to a particular (game-based) understanding of reasoning, which has always been known to be discursive, so I do not think it is inadequate as an autobiographical account, but it is not how one characterizes what is ultimately a false conclusion that was previously held true. One need not commit oneself to a particular choice either in the case of “victory” or “defeat”, which are not themselves choices to be made.
Puzzle 2
Statements ME and AME are both false generalizations. One cannot know evidence for (or against) a given theorem (or apodosis from known protases) in advance based on the supposition that the apodosis is true, for that would constitute a circular argument. I.e.:
T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false. It is also false to suppose that a human being is always capable of reasoning correctly under all states of knowledge, or even that they possess sufficient knowledge of a particular body of information perfectly so as to reason validly.
MF is also false as a generalization.
In general, one should not be concerned with how “misleading” a given amount of evidence is. To reason on those grounds, one could suppose a given bit of evidence would always be “misleading” because one “knows” that the contrary of what that bit of evidence suggests is always true. (The fact that there are people out there who do in fact “reason” this way, based on evidence, as in the superabundant source of historical examples in which they continue to believe in a false conclusion, because they “know” the evidence that it is false is false or “misleading”, does not at all validate this mode of reasoning, but rather shores up certain psychological proclivities that suggest how fallacious their reasoning may be; however, this would not itself show that the course of necessary reasoning is incorrect, only that those who attempt to exercise it do so very poorly.) In the case that the one is dealing with a theorem, it must be true, provided that the reasoning is in fact valid, for theorematic reasoning is based on any axioms of one’s choice (even though it is not corollarial). !! However, if the apodosis concerns a statement of evidence, there is room for falsehood, even if the reasoning is valid, because the premisses themselves are not guaranteed to be always true.
The proper attitude is to understand that the reasoning prior to exposure of evidence/reasoning from another subject (or one’s own inquiry) may in fact be wrong, however necessary the reasoning itself may seemingly appear. No amount of evidence is sufficient evidence for its absolute truth, no matter how valid the reasoning is. Note that evidence here is indeed characteristic of observational criteria, but the reasoning based thereon is not properly deductive, even if the reasoning is essentially necessary in character. Note that deductive logic is concerned with the reasoning to true conclusions under the assumption that the relevant premisses are true; if one is taking into account the possibility of premisses which may not always be true, then such reasoning is probabilistic (and necessary) reasoning.
!! This, in effect, resolves puzzle 1. Namely, if the theorem is derived based on valid necessary reasoning, then it is true. If it isn’t valid reasoning, then it is false. If “defeat” consists in being shown that one’s initial stance was incorrect, then yes, it is essential that one takes the stance of having been defeated. Note that puzzle 2 is solved in fundamentally the same manner, despite the distracting statements ME, AME, and MF, on account of the nature of theorems. Probabilities nowhere come into account, and the employment of Bayesian reasoning is an unnecessary complication. If one does not take the stance of having been defeated, then there is no hope for that person to be convinced of anything of a logical (necessary) character.
“T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false.”
Actually, I think if “I know T is true” means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 1 that the evidence is misleading and not even worth to take into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief. So I’d say the problem is a wrong question.
Edited: I meant probability 1 of misleading evidence, not 0.
The presumption of the claim “I know T is true” (and that evidence that it is false is false) is false precisely in the case that the reasoning used to show that T (in this case a theorem) is true is invalid. Were T not a theorem, then probabilistic reasoning would in fact apply, but it does not. (And since it doesn’t, it is irrelevant to pursue that path. But, in short, the fact that it is a theorem should lead us to understand that the premisses’ truth is not the issue at hand here, thus probabilistic reasoning need not apply, and so there is no issue of T’s being probably true or false.) Furthermore, it is completely wide of the mark to suggest that one should apply this or that probability to the claims in question, precisely because the problem concerns deductive reasoning. All the non-deductive aspects of the puzzles are puzzling distractions at best. In essence, if a counterargument comes along demonstrating that T is false, then it necessarily would involve demonstrating that invalid reasoning was somewhere committed in someone’s having arrived at the (fallacious) truth of T. (It is necessary that one be led to a true conclusion given true premisses.) Hence, one need not be concerned with the epistemic standing of the truth of T, since it would have clearly been demonstrated to be false. And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that committment was fundamentally invalid. Valid reasoning is always valid, no matter what one may think of the reasoning; and one may invalidly believe in the validity of an invalid conclusion. Such is human fallibility.
No, I think it is a good question, and it is easy to be led astray by not recognizing where precisely the problem fits in logical space, if one isn’t being careful. Amusingly (if not disturbingly), some of most up-voted posts are precisely those that get this wrong and thus fail to see the nature of the problem correctly. However, the way the problem is framed does lend itself to misinterpretation, because a demonstration of the falsity of T (namely, that it is invalid that T is true) should not be treated as a premiss in another apodosis; a valid demonstration of the falsity of T is itself a deductive conclusion, not a protasis proper. (In fact, the way it is framed, the claim ~T is equivalent to F, such that the claims [F, P1, P2, and P3] implies ~T is really a circular argument, but I was being charitable in my approach to the puzzles.) But oh well.
I think I see your point, but if you allow for the possibility that the original deductive reasoning is wrong, i.e. deny logical omniscience, don’t you need some way to quantify that possibility, and in the end that would mean treating the deductive reasoning itself as bayesian evidence for the truth of T?
Unless you assume that you can’t make a mistake at the deductive reasoning, T being a theorem of the promises is a theory to be proven with the Bayesian framework, with Bayesian evidence, not anything special.
And if you do assume that you can’t make a mistake at the deductive reasoning, I think theres no sense in paying attention to any contrary evidence.
I want to be very clear here: a valid deductive reasoning can never be wrong (i.e., invalid), only those who exercise in such reasoning are liable to error. This does not pertain to logical omniscience per se, because we are not here concerned with the logical coherence of the total collection of beliefs a given person (like the one in the example) might possess; we are only concerned with T. And humans, in any case, do not always engage in deduction properly due to many psychological, physical, etc. limitations.
No, the possibility that someone will commit an error in deductive reasoning is in no need of quantification. That is only to increase the complexity of the puzzle. And by the razor, what is done with less is in vain done with more.
To reiterate, an invalid deductive reasoning is not a deduction with which we should concern ourselves. The prior case of T, having been shown F, is in fact false, such that we should no longer elevate it to the status of a logical deduction. By the measure of its invalidity, we know full well in the valid deduction ~T. In other words, to make a mistake in deductive reasoning is not to reason deductively!
This is where the puzzle introduced needless confusion. There was no real evidence. There was only the brute fact of the validity of ~T as introduced by a person who showed the falsity/invalidity of T. That is how the puzzles’ solution comes to a head – via a clear understanding of the nature of deductive reasoning.
Sorry, I think I still don’t understand your reasoning.
First, I have the beliefs P1, P2 and P3, then I (in an apparently deductively valid way) reason that [C1] “T is a theorem of P1, P2, and P3”, therefore I believe T.
Either my reasoning that finds out [C1] is valid or invalid. I do think it’s valid, but I am fallible.
Then the Authority asserts F, I add F to the belief pool, and we (in an apparently deductively valid way) reason [C2] “~T is a theorem of F, P1, P2, and P3”, therefore we believe ~T.
Either our reasoning that finds out [C2] is valid or invalid. We do think it’s valid, but we are fallible.
Is it possible to conclude C2 without accepting that I made a mistake when reasoning C1 (and therefore that we were wrong to think that line of reasoning was valid)? Otherwise we would have both T and ~T as theorems of F, P1, P2, and P3, and we should conclude that the premisses lead to a contradiction and should be revised; we wouldn’t jump from believing T to believing ~T.
But the story doesn’t say the Authority showed a mistake in C1. It says only that she made an (apparently valid) piece of reasoning using F in addition to P1, P2, and P3.
If the argument of the Authority doesn’t show the mistake in C1, how should I decide whether to believe C1 has a mistake, C2 has a mistake, or the premisses F, P1, P2, and P3 actually lead to contradiction, with both C1 and C2 being valid?
I think Bayesian reasoning would inevitably enter the game in that last step.
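That last step can be sketched mechanically. The following is a minimal illustration, not anything from the exchange itself: the three hypotheses, priors, and likelihoods are all invented for the sake of showing how the weighing and normalization would go.

```python
# Hypothetical sketch: weighing which of three explanations is most
# probable once both T and ~T appear to have valid derivations.
# All priors and likelihoods below are invented for illustration.

priors = {
    "mistake_in_C1": 0.45,        # my original proof of T was flawed
    "mistake_in_C2": 0.45,        # Ms. Math's proof of ~T is flawed
    "axioms_inconsistent": 0.10,  # F, P1, P2, P3 jointly contradictory
}

# Likelihood of the observed situation (two apparently valid but
# conflicting proofs) under each hypothesis -- again, made up.
likelihoods = {
    "mistake_in_C1": 0.6,
    "mistake_in_C2": 0.3,
    "axioms_inconsistent": 0.9,
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {p:.3f}")
```

With these made-up numbers the "mistake in C1" hypothesis comes out ahead, but the point is only the mechanics: whichever numbers one plugs in, the decision among the three explanations is a Bayesian comparison, not a further deduction.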
C1 is a presumption, namely, a belief in the truth of T, which is apparently a theorem of P1, P2, and P3. As a belief, its validity is not what is at issue here, because we are concerned with the truth of T.
F comes in, but is improperly treated as a premiss from which to conclude ~T, when it is in fact equivalent to ~T. Again, we should not be concerned with belief, because we are dealing with statements that are either true or false: either T or ~T, but not both, can be true (the laws of excluded middle and non-contradiction).
Hence C2 is another presumption with which we should not concern ourselves. Belief has no influence on the outcome of T or ~T.
For the first bullet: no, it is not possible, in any case, to conclude C2, for to deny that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T, which Ms. Math showed to be true (a valid deduction).
Second bullet: in the case of a theorem, to show the falsity of a conclusion is to show that the reasoning to it was invalid. That a mistake was made is then a straightforward corollary of the nature of deductive inference: an invalid move was committed.
Third bullet: I assume that the problem is stated in general terms, for had Ms. Math shown that T is false in explicit terms (contained in F), then the proper form of ~T would be: F → ~T. Note that it is wrong to frame it the following way: F, P1, P2, and P3 → ~T. It is wrong because F states ~T. There is no “decision” to be made here! Bayesian reasoning in this instance (if not many others) is a misapplication that obfuscates the original problem through a poor grasp of the nature of deduction.
(N.B.: However, if the nature of the problem were to consist in merely being told by some authority a contradiction to what one supposes to be true, then there is no logical necessity for us suddenly to switch camps and begin to believe the contradiction over one’s prior conviction. Appeal to authority is a logical fallacy, and if one supposes Bayesian reasoning is a help there, then that person has much to learn about the nature of deduction proper.)
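The circularity worry can be made concrete with a brute-force propositional check. This is my own illustration (the helper `entails` and the single stand-in premiss `P1` are invented): if F simply is ~T, then ~T already follows from F alone, and the other premisses do no work.

```python
from itertools import product

def entails(premisses, conclusion, n_vars):
    """True iff every truth assignment satisfying all premisses
    also satisfies the conclusion (brute-force truth table)."""
    for vals in product([False, True], repeat=n_vars):
        if all(p(*vals) for p in premisses) and not conclusion(*vals):
            return False
    return True

# Two atoms: t, p1 (the extra premisses abbreviated to one).
F = lambda t, p1: not t      # F is taken to be literally ~T
P1 = lambda t, p1: p1        # an unrelated premiss
not_T = lambda t, p1: not t  # the conclusion ~T

print(entails([F], not_T, 2))      # ~T follows from F alone
print(entails([F, P1], not_T, 2))  # adding P1 changes nothing
print(entails([P1], not_T, 2))     # P1 by itself does not entail ~T
```

The first two checks succeed and the third fails, which is just the circularity point in mechanical form: when the conclusion is packed into a premiss, the remaining premisses are idle.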
Let me give you an example of what I really mean:
Note statements P, Q, and Z:
(P) Something equals something and something else equals that same something such that both equal each other. (Q) This something equals that. This other something also equals that. (Z) The aforementioned somethings equal each other.
It is clear that Z follows from P and Q, no? In effect, you’re forced to accept it, correct? Is there any “belief” involved in this setting? Decidedly not. However, let’s suppose we meet up with someone who disagrees and states: “I accept the truths of P and Q but not Z.”
Then we’ll add the following to help this poor fellow:
(R) If P and Q are true, then Z must be true.
They may respond: “I accept P, Q, and R as true, but not Z.”
And so on ad infinitum. What went wrong here? They failed to reason deductively. We might very well be in the same situation with T, where
(P and Q) are equivalent to (P1, P2, and P3) (namely, all of these premisses are true), such that whatever Z is, it must be equivalent to the theorem (which would in this case be ~T, if Ms. Math is doing her job and not merely deigning to inform the peons at the foot of her ivory tower).
P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn’t defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he’ll have departed from the sphere of logic completely. Note that if Ms. Math merely says “T is false” (F) such that F is really ~T, then the form [F, P1, P2, and P3] implies ~T is really a circular argument, for the conclusion is already assumed within the premisses. But, as I said, I was being charitable with the puzzles and not assuming that that was being communicated.
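The P/Q/Z example (Euclid’s first common notion: things equal to the same thing are also equal to one another) can be checked mechanically. An exhaustive test over a small finite domain is of course not a proof of the general entailment, but it illustrates the kind of necessity being described; the domain and names here are my own.

```python
from itertools import product

def z_follows_from_p_and_q():
    """Exhaustively check: whenever a == c and b == c (premisses
    P and Q), we also have a == b (conclusion Z).
    Finite domain, purely illustrative."""
    domain = range(5)
    for a, b, c in product(domain, repeat=3):
        if a == c and b == c and a != b:
            return False  # a counterexample would land here
    return True

print(z_follows_from_p_and_q())
```

No assignment of values makes P and Q true while Z is false; the stubborn fellow in the story is rejecting exactly this, which is why adding further premisses like R can never help him.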
I guess it wasn’t clear: C1 and C2 referred to the reasonings as well as the conclusions they reached. You say belief is of no importance here, but I don’t see how you can talk about “defeat” if you’re not talking about justified belief.
I’m not sure if I understood what you said here. You agree with what I said in the first bullet or not?
Are you sure that’s correct? If there’s a contradiction within the set of axioms, you could find T and ~T following valid deductions, couldn’t you? Proving ~T and proving that the reasoning leading to T was invalid are only equivalent if you assume the axioms are not contradictory. Am I wrong?
The problem I see here is: it seems like you are assuming that the proof of ~T shows clearly the problem (i.e. the invalid reasoning step) with the proof of T I previously reasoned. If it doesn’t, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3. I don’t see why logic would force me to accept ~T instead of believing there’s a mistake I can’t see in the proof Ms. Math showed me, or, more plausibly, to conclude that the axioms are contradictory.
“Defeat” would solely consist in the recognition of admitting to ~T instead of T. Not a matter of belief per se.
No, I don’t.
T cannot be derived from [P1, P2, and P3], but ~T can on account of F serving as a corrective that invalidates T. The only assumptions I’ve made are 1) Ms. Math is not an ivory tower authoritarian and 2) that she wouldn’t be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.
Anyway, I suppose there’s no more to be said about this, but you can ask for further clarification if you want.
Oh, now I see what you mean. I interpreted F as a new premiss, a new axiom, not a whole argument about the (mistaken) reasoning that proved T. For example, (Wikipedia tells me that) the axiom of determinacy is inconsistent with the axiom of choice. If I had proved T in ZFC, and Ms. Math asserted the Axiom of Determinacy and proved ~T in ZFC+AD, and I didn’t know beforehand that AD is inconsistent with AC, I would still need to find out what the problem was.
I still think this is more consistent with the text of the original post, but now I understand what you meant by “I was being charitable with the puzzles”.
Thank you for your attention.
I’m interested in what you have to say, and I’m sympathetic (I think), but I was hoping you could restate this in somewhat clearer terms. Several of your sentences are rather difficult to parse, like “And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that committment was fundamentally invalid.”
Read my latest comments. If you need further clarity, ask me specific questions and I will attempt to accommodate them.
But to give some additional note on the quote you provide, look to reductio ad absurdum as a case where it would be incorrect to aver the truth of what is really contradictory in nature. If it still isn’t clear, ask yourself this: “does it make sense to say something is true when it is actually false?” Anyone who answers this in the affirmative is either being silly or needs to have their head checked (for some fascinating stuff, indeed).
We are not justified in assigning probability 1 to the belief that ‘A=A’ or to the belief that ‘p → p’? Why not?
Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. To deny the assumptions and conventions one set for oneself is to no longer work within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond them they may not.
Short answer: Because if you assign probability 1 to a belief, then it is impossible for you to change your mind even when confronted with a mountain of opposing evidence. For the full argument, see Infinite Certainty.
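The point can be seen directly in Bayes’ theorem: with a prior of exactly 1, the term weighting the alternative hypothesis vanishes, so no likelihoods can move the posterior. A minimal sketch (the particular numbers are arbitrary):

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and the likelihoods
    P(E|H) and P(E|~H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# With any prior short of 1, strong contrary evidence moves belief:
print(bayes_update(0.999, likelihood_h=0.01, likelihood_not_h=0.99))

# With prior exactly 1, the (1 - prior) term vanishes, so the
# posterior is 1 no matter how damning the likelihoods are:
print(bayes_update(1.0, likelihood_h=0.01, likelihood_not_h=0.99))
```

The first call drops the belief well below its prior; the second stays pinned at 1, which is precisely why assigning probability 1 forecloses any future change of mind.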