This is also reminiscent of Descartes’ cogito:
X cannot occur without Y. X occurs. Therefore, Y exists.
(X=thought; Y=a thinking thing)
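For the record, the schema is just modus tollens in disguise; a minimal formalization (my own rendering, not Descartes'):

```latex
% "X cannot occur without Y" says: if Y did not exist, X could not occur.
% Premise 1: \neg Y \rightarrow \neg X
% Premise 2: X
% By contraposition of Premise 1 and modus ponens:
\[
\frac{\neg Y \rightarrow \neg X \qquad X}{\therefore\; Y}
\]
```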
...I don’t see how you can talk about “defeat” if you’re not talking about justified believing
“Defeat” would consist solely in recognizing that one must admit ~T instead of T. Not a matter of belief per se.
Do you agree with what I said in the first bullet or not?
No, I don’t.
The problem I see here is: it seems like you are assuming that the proof of ~T clearly shows the problem (i.e., the invalid reasoning step) with the proof of T that I previously reasoned out. If it doesn’t, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3.
T cannot be derived from [P1, P2, and P3], but ~T can on account of F serving as a corrective that invalidates T. The only assumptions I’ve made are 1) Ms. Math is not an ivory tower authoritarian and 2) that she wouldn’t be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.
Anyway, I suppose there’s no more to be said about this, but you can ask for further clarification if you want.
Funny. I thought of pointing that out as well, but I thought it probably wasn’t worth mentioning.
As I’ve imagined it being said before: “I’m either a genius or I’m not. That’s a 50% chance of my being a genius. Just pray luck isn’t on my side!” :)
Colors-as-near-universal-attributes is simply a false claim. Consider the varieties of color blindness, tetrachromacy, and cultures in which certain colors go by names that other cultures distinguish as different. Your last paragraph seems to indicate that you still hold to the Mind Projection Fallacy, which you assumed you had overcome by realizing your favorite isn’t everyone’s favorite. Well, even their “blue” might be your “green”. Generally, this goes unnoticed because we tend to acculturate and inhabit more or less similar linguistic spaces.
C1 is a presumption, namely, a belief in the truth of T, which is apparently a theorem of P1, P2, and P3. As a belief, its validity is not what is at issue here, because we are concerned with the truth of T.
F comes in, but is improperly treated as a premiss to conclude ~T, when it is equivalent to ~T. Again, we should not be concerned with belief, because we are dealing with statements that are either true or false. Either T or ~T, but not both, is true (the conjunction of the law of excluded middle and the law of non-contradiction).
Hence C2 is another presumption with which we should not concern ourselves. Belief has no influence on the outcome of T or ~T.
For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).
Second bullet: in the case of a theorem, to show the falsity of a conclusion (of a theorem) is to show that it is invalid. That a mistake was made, i.e., that an invalid move was committed, is a straightforward corollary of the nature of deductive inference.
Third bullet: I assume that the problem is stated in general terms, for had Ms. Math shown that T is false in explicit terms (contained in F), then the proper form of ~T would be: F → ~T. Note that it is wrong to frame it the following way: F, P1, P2, and P3 → ~T. It is wrong because F states ~T. There is no “decision” to be made here! Bayesian reasoning in this instance (if not many others) is a misapplication and obfuscation of the original problem from a poor grasp of the nature of deduction.
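To spell out the contrast schematically (my own rendering of the two forms):

```latex
% Proper form: F itself does the refuting.
\[
F \;\vdash\; \neg T \qquad\text{(or, at the limit, } F \equiv \neg T\text{)}
\]
% Improper (circular) form: if F merely *states* ~T, then listing it
% among the premisses assumes the very conclusion to be shown.
\[
F,\, P_1,\, P_2,\, P_3 \;\vdash\; \neg T \qquad\text{where } F \equiv \neg T
\]
```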
(N.B.: However, if the nature of the problem were to consist merely in being told by some authority a contradiction of what one supposes to be true, then there is no logical necessity for us suddenly to switch camps and begin to believe the contradiction over one’s prior conviction. Appeal to Authority is a logical fallacy, and if one supposes Bayesian reasoning is a help there, then there is much for that person to learn of the nature of deduction proper.)
Let me give you an example of what I really mean:
Note statements P, Q, and Z:
(P) Things that are equal to the same thing are equal to each other. (Q) This something equals that, and this other something also equals that same thing. (Z) The aforementioned somethings equal each other.
It is clear that Z follows from P and Q, no? In effect, you’re forced to accept it, correct? Is there any “belief” involved in this setting? Decidedly not. However, let’s suppose we meet up with someone who disagrees and states: “I accept the truths of P and Q but not Z.”
Then we’ll add the following to help this poor fellow:
(R) If P and Q are true, then Z must be true.
They may respond: “I accept P, Q, and R as true, but not Z.”
And so on ad infinitum. What went wrong here? They failed to reason deductively. We might very well be in the same situation with T, where
(P and Q) are equivalent to (P1, P2, and P3) (namely, all of these premisses are true), such that whatever Z is, it must be equivalent to the theorem (which would in this case be ~T, if Ms. Math is doing her job and not merely deigning to inform the peons at the foot of her ivory tower).
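This is, of course, Lewis Carroll’s tortoise-and-Achilles regress; in schematic form (my own rendering):

```latex
% P: (a = c) \wedge (b = c) \rightarrow (a = b)   [things equal to the same are equal]
% Q: (a = c) \wedge (b = c)
% Z: a = b
% The tortoise grants P and Q but balks at Z, so we add
% R: (P \wedge Q) \rightarrow Z, then R': (P \wedge Q \wedge R) \rightarrow Z, ...
\[
P,\; Q \;\vdash\; Z
\qquad\text{yet}\qquad
P,\; Q,\; R,\; R',\; R'',\ldots \;\text{never compels assent to } Z
\]
% No added premiss can supply the inferential step itself; taking
% that step just is what it means to reason deductively.
```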
P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn’t defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he’ll have departed from the sphere of logic completely.

Note that if Ms. Math merely says “T is false” (F) such that F is really ~T, then the form [F, P1, P2, and P3] implies ~T is really a circular argument, for the conclusion is already assumed within the premisses. But, as I said, I was being charitable with the puzzles and not assuming that that was being communicated.
Here’s one that comes to mind:
I really don’t know anything about baseball, so if I’m going to bet on either the Red Sox or the Yankees, I’d have to go fifty-fifty on it. Therefore, the chance that either will win is fifty percent.
(Right at the “therefore” lies the fallacy: the fifty percent is put forward as a veritable property of either team’s winning, when in fact it is merely indicative of the gambler’s ignorance. The actual probability is most likely not 50-50.)
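In Jaynes’s terms, the fifty-fifty describes the gambler’s state of information, not the teams; schematically (the numbers here are illustrative only):

```latex
% By the principle of indifference, given background information I
% that is symmetric between the two teams:
\[
P(\text{Yankees win} \mid I_{\text{ignorant}}) = \tfrac{1}{2}
\]
% This is a property of I, not of the teams. A well-informed bettor
% with information I' may coherently assign, say,
\[
P(\text{Yankees win} \mid I') = 0.65,
\]
% and each assignment is correct relative to its own information.
```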
EDIT: Others might enjoy reading this PDF (“Probability Theory as Logic”) for additional background and ideas. There you’ll also see a bon mot by Montaigne: “Man is surely mad. He cannot make a worm; yet he makes Gods by the dozen.”
Read my latest comments. If you need further clarity, ask me specific questions and I will attempt to accommodate them.
But to give some additional note on the quote you provide, look to reductio ad absurdum as a case where it would be incorrect to aver the truth of what is really contradictory in nature. If it still isn’t clear, ask yourself this: “does it make sense to say something is true when it is actually false?” Anyone who answers this in the affirmative is either being silly or needs to have their head checked (for some fascinating stuff, indeed).
...if you allow for the possibility that the original deductive reasoning is wrong...
I want to be very clear here: a valid deductive reasoning can never be wrong (i.e., invalid); only those who engage in such reasoning are liable to error. This does not pertain to logical omniscience per se, because we are not here concerned with the logical coherence of the total collection of beliefs a given person (like the one in the example) might possess; we are only concerned with T. And humans, in any case, do not always carry out deduction properly, owing to many psychological, physical, and other limitations.
don’t you need some way to quantify that possibility, and in the end that would mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?
No, the possibility that someone will commit an error in deductive reasoning is in no need of quantification. That is only to increase the complexity of the puzzle. And by the razor, what is done with less is in vain done with more.
Unless you assume that you can’t make a mistake in the deductive reasoning, T being a theorem of the premisses is a theory to be proven within the Bayesian framework, with Bayesian evidence, not anything special.
To reiterate, an invalid deductive reasoning is not a deduction with which we should concern ourselves. The prior case of T, having been shown F, is in fact false, such that we should no longer elevate it to the status of a logical deduction. By the measure of its invalidity, we know full well that the valid deduction is ~T. In other words, to make a mistake in deductive reasoning is not to reason deductively!
And if you do assume that you can’t make a mistake in the deductive reasoning, I think there’s no sense in paying attention to any contrary evidence.
This is where the puzzle introduced needless confusion. There was no real evidence. There was only the brute fact of the validity of ~T as introduced by a person who showed the falsity/invalidity of T. That is how the puzzles’ solution comes to a head – via a clear understanding of the nature of deductive reasoning.
Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. In denying the assumptions and conventions one has set for oneself, one is no longer working within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond that they may not.
Actually, I think if “I know T is true” means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 0 to the evidence being misleading, and in regarding it as not even worth taking into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief.
The presumption of the claim “I know T is true” (and that evidence that it is false is false) is false precisely in the case that the reasoning used to show that T (in this case a theorem) is true is invalid. Were T not a theorem, then probabilistic reasoning would in fact apply, but it does not. (And since it doesn’t, it is irrelevant to pursue that path. In short, the fact that it is a theorem should lead us to understand that the truth of the premisses is not the issue at hand here; thus probabilistic reasoning need not apply, and so there is no issue of T’s being probably true or false.) Furthermore, it is completely wide of the mark to suggest that one should apply this or that probability to the claims in question, precisely because the problem concerns deductive reasoning. All the non-deductive aspects of the puzzles are puzzling distractions at best.

In essence, if a counterargument comes along demonstrating that T is false, then it necessarily involves demonstrating that invalid reasoning was somewhere committed in someone’s having arrived at the (fallacious) truth of T. (Given true premisses, one is necessarily led to a true conclusion.) Hence, one need not be concerned with the epistemic standing of the truth of T, since it would have clearly been demonstrated to be false. And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T, despite the fact that that commitment was fundamentally invalid. Valid reasoning is always valid, no matter what one may think of the reasoning; and one may invalidly believe in the validity of an invalid conclusion. Such is human fallibility.
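The quoted worry about probability 1 does check out formally, for what it is worth; a one-line application of Bayes’ theorem (a standard identity, nothing specific to these puzzles):

```latex
% If P(T) = 1, then P(\neg T) = 0, so for any evidence E with P(E) > 0:
\[
P(T \mid E)
= \frac{P(E \mid T)\,P(T)}{P(E \mid T)\,P(T) + P(E \mid \neg T)\,P(\neg T)}
= \frac{P(E \mid T)}{P(E \mid T) + 0}
= 1.
\]
% A prior of 1 is immovable: no evidence can ever lower it, which is
% exactly why one is so rarely (if ever) justified in assigning it.
```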
So I’d say the problem is a wrong question.
No, I think it is a good question, and it is easy to be led astray by not recognizing where precisely the problem fits in logical space, if one isn’t being careful. Amusingly (if not disturbingly), some of the most up-voted posts are precisely those that get this wrong and thus fail to see the nature of the problem correctly. However, the way the problem is framed does lend itself to misinterpretation, because a demonstration of the falsity of T (namely, that it is invalid that T is true) should not be treated as a premiss in another apodosis; a valid demonstration of the falsity of T is itself a deductive conclusion, not a protasis proper. (In fact, the way it is framed, the claim ~T is equivalent to F, such that the claim that [F, P1, P2, and P3] implies ~T is really a circular argument, but I was being charitable in my approach to the puzzles.) But oh well.
Puzzle 1
RM is irrelevant.
The concept of “defeat”, in any case, is not necessarily silly or inapplicable to a particular (game-based) understanding of reasoning, which has always been known to be discursive; so I do not think it is inadequate as an autobiographical account. But it is not how one characterizes what is ultimately a false conclusion that was previously held true. One need not commit oneself to a particular choice either in the case of “victory” or of “defeat”, which are not themselves choices to be made.
Puzzle 2
Statements ME and AME are both false generalizations. One cannot know evidence for (or against) a given theorem (or apodosis from known protases) in advance based on the supposition that the apodosis is true, for that would constitute a circular argument. I.e.:
T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render T truly false. It is also false to suppose that a human being is always capable of reasoning correctly under all states of knowledge, or even that they possess sufficiently perfect knowledge of a particular body of information to reason validly.
MF is also false as a generalization.
In general, one should not be concerned with how “misleading” a given amount of evidence is. To reason on those grounds, one could suppose a given bit of evidence would always be “misleading” because one “knows” that the contrary of what that bit of evidence suggests is always true. (The fact that there are people out there who do in fact “reason” this way, based on evidence, as in the superabundant historical examples in which they continue to believe a false conclusion because they “know” the evidence that it is false is false or “misleading”, does not at all validate this mode of reasoning; rather, it shores up certain psychological proclivities that suggest how fallacious their reasoning may be. However, this would not itself show that the course of necessary reasoning is incorrect, only that those who attempt to exercise it do so very poorly.) In the case that one is dealing with a theorem, it must be true, provided that the reasoning is in fact valid, for theorematic reasoning is based on whatever axioms one chooses (even though it is not corollarial). !! However, if the apodosis concerns a statement of evidence, there is room for falsehood, even if the reasoning is valid, because the premisses themselves are not guaranteed always to be true.
The proper attitude is to understand that the reasoning prior to exposure to evidence/reasoning from another subject (or one’s own inquiry) may in fact be wrong, however necessary the reasoning itself may seemingly appear. No amount of evidence is sufficient for the absolute truth of its conclusion, no matter how valid the reasoning seems. Note that evidence here is indeed characteristic of observational criteria, but reasoning based thereon is not properly deductive, even if it is essentially necessary in character. Note that deductive logic is concerned with reasoning to true conclusions under the assumption that the relevant premisses are true; if one is taking into account the possibility of premisses which may not always be true, then such reasoning is probabilistic (though still necessary) reasoning.
!! This, in effect, resolves puzzle 1. Namely, if the theorem is derived based on valid necessary reasoning, then it is true. If it isn’t valid reasoning, then it is false. If “defeat” consists in being shown that one’s initial stance was incorrect, then yes, it is essential that one takes the stance of having been defeated. Note that puzzle 2 is solved in fundamentally the same manner, despite the distracting statements ME, AME, and MF, on account of the nature of theorems. Probabilities nowhere come into account, and the employment of Bayesian reasoning is an unnecessary complication. If one does not take the stance of having been defeated, then there is no hope for that person to be convinced of anything of a logical (necessary) character.
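One footnote on reasoning from premisses that are not guaranteed true: the dependence can be made quantitative. A standard bound from probability logic (not part of the original puzzles) shows exactly how much confidence a valid argument can lose to premiss uncertainty:

```latex
% If P_1, ..., P_n validly entail C, then \neg C entails
% \neg(P_1 \wedge \cdots \wedge P_n), so by the union bound:
\[
P(\neg C) \;\le\; P\Bigl(\bigvee_{i=1}^{n} \neg P_i\Bigr)
\;\le\; \sum_{i=1}^{n} P(\neg P_i),
\qquad\text{hence}\qquad
P(C) \;\ge\; 1 - \sum_{i=1}^{n} P(\neg P_i).
\]
% Axioms granted outright (each P_i at probability 1) pass probability 1
% to any valid conclusion; uncertainty enters only through the premisses
% or through errors in carrying out the deduction itself.
```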
Excuse me for waxing over-philosophical in my last message; I did say “might be” rather than “currently is”. To be clear, I’m referring to the practical possibility (if not the straightforward logical possibility) of such a game existing.
I suppose, in any case, that the form of such a game with the greatest chance of meeting that (rather vague) designation would involve exhibiting the most generality within its gameplay, such that the cognitive requirements put upon users would not involve specific skills or skill acquisition per se, but rather a kind of mystifying push-without-training-wheels that permits the mind to shape itself however it sees fit to accomplish the task, which then creates problems for users by forcing them to constantly modify their adopted strategy or preferred tactics.
One such game that comes to mind as a (tentative) example is Dual N-Back (or related variants), which does not directly demand any specific strategy or conceptual framework of the user. One has no specific guidance on how to tackle it, but when the user gets the hang of it, the game naturally changes the rule(s) or framework, forcing the user to adapt once more. Such a game most certainly involves expertise (a lot of time spent playing it and getting better).
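A minimal sketch of the adaptive core of such a game, in Python; the stream generation, thresholds, and function names here are my own illustrative choices, not a spec of Dual N-Back or any particular variant (the dual version runs two such streams at once, e.g. position and audio):

```python
import random

def make_stream(n, length=20, p_match=0.25):
    """Generate one stimulus stream (grid positions 0-8) with some
    deliberate n-back matches mixed in so the block is playable."""
    stream = [random.randrange(9) for _ in range(length)]
    for t in range(n, length):
        if random.random() < p_match:
            stream[t] = stream[t - n]  # force an n-back match
    return stream

def is_match(stream, t, n):
    """True if trial t repeats the stimulus from n trials back."""
    return t >= n and stream[t] == stream[t - n]

def adapt_level(n, accuracy, up=0.80, down=0.50):
    """The 'push without training wheels': no strategy is prescribed,
    but n rises as soon as performance stabilizes, forcing the user
    to adapt once more. Thresholds are illustrative only."""
    if accuracy >= up:
        return n + 1
    if accuracy < down and n > 1:
        return n - 1
    return n
```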
But, yeah, with most, if not all, generally recognized games, it is pretty clear that, given the specific kinds of skills demanded of a user, it would be quite difficult to maneuver in the other skills needed to make such a game feasible.
I think the main issue here is that expertise must be conceptualized with respect to a particular activity or set of activities in order for it to maintain its essential meaning. The nature of expertise is also restricted to a specific range of tools the brain embodies (as in “embodied cognition”); in other words, it is not the hand that knows what to type, but rather the keyboard that knows what to type. To be clear, my cognitive capacity is effectively extended and reshaped by the interaction with the keyboard, so in effect the nature of the expertise will be limited specifically to the final cause (in the philosophical sense) of the activity itself. I like to think of it as the mind further approximating, over time, the function of the game or activity, which serves as a kind of analogy for the ever-accumulating expertise therein.
Taking the example of chess versus a modern-day computer-enhanced strategy game, the modes of embodiment are vastly different, and so the kinds of expertise to be expected should naturally diverge. However, I would not be so pollyannaish as to assert that playing StarCraft 2 (or chess) would be “really useful”, unless you’re playing for money toward some specific goal outside of the game itself. That is going a bit too far, in my opinion. We already know that the nature of expertise is such that it only operates at the level of the activity one is engaged in, and will not generalize (or transfer) far from that domain of activity. For instance, the expertise of knowing the layout of a keyboard and being able to type commands without a second thought (constantly honed by a game that demands it) will transfer to the tasks (of other games) that require the same input on a keyboard (and will differentially benefit from those quick reflexes), but the specific tactics and techniques learned in-game will generally not find much use beyond that game, and I do believe that is what we’re getting at with a game like SC2 insofar as “expertise” is a concern here. Similarly with chess: one might very well have excellent reflexes, honed in certain other tasks, and know many strategies and techniques for other things, but they won’t apply to the space of chess, and vice versa for chess to other activities. (And we already know that typical memorization techniques used in chess really don’t help with memorizing anything else.)
Having said all that, I wonder whether or not there might be a prime example of the game of general expertise par excellence out there, one that touches on many domains simultaneously… Perhaps the Glass Bead Game? Ah, never mind. But, in all seriousness, the way of the game is probably the only way we’ll ever find out if such a thing exists and will permit the mind to approximate the function of life all the more perfectly.
By the way, I don’t know how the researchers in the article can think there hasn’t been such a “satellite view” of expertise before, particularly on the note of chess. Hasn’t anyone told them of the Chess Tactics Server? ( http://chess.emrald.net/ ) Chumps to champs aplenty there.
Here’s where I indicated as much:
An “attribute for color” is not much different from showing that a name is an attribute for a color. Again, you were making the same mistake by thinking that a name for a color is an absolute. Definitely not the case, which you recognize:
To continue –
– I further pointed out that humans do not live in a mono-culture with a universal language that predetermines the arrangement of linguistic space in connection to perceived colors. That is the norm, such that the claim of near-universality does not apply. (And were such a mono-culture present, all it would take is a small deviation to accumulate to undermine it. Think of the Tower of Babel.)
The objection I posited covers all cases, even the exceptions. It’s really the mind-projection fallacy, such that one human regards their “normal” experience as the “normal” experience of “normal” humans, more or less.