Interesting discussion, but I suspect an important distinction is required between logic and probability theory. Logic is the special case of probability theory in which values are restricted to 0 and 1, that is, to 0% and 100% probability. Within logic you may arrive at certain conclusions, but within probability theory conclusions are generally not certain; they are assigned a degree of plausibility.
If logic provides, in some contexts, a valid method of reasoning, then conclusions arrived at will be either 0% or 100% true. Denying that 100% confidence is ever rational seems equivalent to denying that logic ever applies to anything.
It is certainly true that many phenomena are better described by probability than by logic, but can we deny logic any validity? I understand mathematical proofs as lying within the realm of logic, where things may often be determined to be either true or false. For instance, Euclid is credited with first proving that there is no largest prime. I believe most mathematicians accept this as a true statement, and most would agree that 53 is easily proven to be prime.
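As an aside, the claim that 53 is "easily proven to be prime" can be made concrete: trial division up to √n is a complete primality check for small numbers. A minimal sketch (the function name is my own, for illustration):

```python
def is_prime(n: int) -> bool:
    """Return True if n is prime, by trial division up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(53))  # True
print(is_prime(51))  # False: 51 = 3 * 17
```

Every step of the check is itself a finite, verifiable calculation, which is the sense in which the proof feels certain.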
It’s just saying that logic is a model that can’t describe anything in the real world fully literally. That doesn’t mean it’s not useful. Abstracting away irrelevant details is bread and butter reductionism.
Yes I agree, there is only a rough isomorphism between the mathematics of binary logic and the real world; binary logic seems to describe a limit that reality approaches but never reaches.
We should consider that binary logic is the limiting case of probability theory: it is probability theory where the probabilities may only take the values 0 or 1. Probability theory can do everything that logic can, but it can also handle those real-world cases where our knowledge of something has a probability other than 0 or 1, as is usual with scientific knowledge.
Yeah, I came across that idea in the Jaynes book, and was very impressed.
When you prove something in mathematics, at very least you implicitly assume you have made no mistakes anywhere, are not hallucinating, etc. Your “real” subjective degree of belief in some mathematical proposition, on the other hand, must take all these things into account.
For practical purposes the probability of hallucinations etc. may be very small and so you can usually ignore them. But the OP is right to demonstrate that in some cases this is a bad approximation to make.
Deductive logic is just the special limiting case of probability theory where you allow yourself the luxury of an idealised box of thought isolated from “real world” small probabilities.
edit: Perhaps I could say it a different way. It may be reasonable for certain conditional probabilities to be zero or one, so long as they are conditioned on enough assumptions, e.g. P(“53 is prime” given “I did my math correctly, I am not hallucinating, the external world is real, etc.”) = 1 might be achievable. But if you try to remove the conditional on all that other stuff, you cannot keep this certainty.
HELLO.Dr Lawrence saved my marriage within 3days of contact,i contacted him in regard of my husband who left me for another woman i tried all the methods i know to get him back but to no avail then a good friend of mine Mrs maria introduce me to drlawrencespelltemple@hotmail.com who cast a powerful and wonderful spell that brought him back to me in just 3days i really want to use this medium to advice that for solution regarding any relationship issues contact the temple and all your worry s will be gone: drlawrencespelltemple@hotmail.com...DONNA
Downvoted for being spam.
I’m over 99.99% certain it’s spam.
Can you give me an example of a proposition arrived at by what you’re calling “logic” here that corresponds to an expected observation in which you have 0% or 100% confidence?
“Tautologies are tautological” is the statement for which I am most certain.
Certainly, if the fate of humanity were the consequence of a false positive and a slap on my wrist were the consequence of a false negative for an affirmative answer to “tautologies are tautological”, I would give a negative answer... so by that litmus test, I don’t really have 100% confidence. But I still have a strong logical intuition that I ought to be that confident, and that my emotional underconfidence in the proposed scenario is a bit silly.
The problem is, the above hypothetical scenario involves weird and self-contradictory stuff (an entity which knows the right answer, my 100% certainty that the entity knows the right answer and can interpret my answer, etc.). I think it may be best to restrict probability estimates to empirical matters (I’m 99.99% certain that my calculator will output “2” in response to the input “1+1=”) and keep them away from the realm of pure logic.
+1 for the thought experiment test. I haven’t seen that one before; any source or is it something you came up with?
Thanks! The specific thought experiment is something I came up with, but the general notion of using hypothetical scenarios to gauge one’s true certainty is something I think many people have done at one time or another, though I can’t think of a specific example.
One example in classical logic is the syllogism where if the premises are true then the conclusion is by necessity true:
Socrates is a man
All men are mortal
therefore it is true that Socrates is mortal
Another example is mathematical proofs. Here is the Wikipedia presentation of Euclid’s proof, from around 300 BC, that there are infinitely many prime numbers. Perhaps in your terms this proof provides 0% confidence that we will ever observe a largest prime number.
Take any finite list of prime numbers p1, p2, …, pn. It will be shown that at least one additional prime number not in this list exists. Let P be the product of all the prime numbers in the list: P = p1 × p2 × … × pn. Let q = P + 1. Then q is either prime or not:
1) If q is prime then there is at least one more prime than is listed.
2) If q is not prime then some prime factor p divides q. If this factor p were on our list, then it would divide P (since P is the product of every number on the list); but p also divides P + 1 = q. If p divides both P and q, then p would have to divide the difference of the two numbers, which is (P + 1) − P, or just 1. But no prime number divides 1, so there would be a contradiction; therefore p cannot be on the list. This means at least one more prime number exists beyond those in the list.
This proves that for every finite list of prime numbers, there is a prime number not on the list. Therefore there must be infinitely many prime numbers.
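Euclid’s construction above can even be carried out mechanically. A small sketch (helper names are my own): given a finite list of primes, form their product plus one and extract a prime factor, which the argument guarantees is not on the list:

```python
def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime(primes: list[int]) -> int:
    """Euclid's step: any prime factor of (product of the list) + 1
    leaves remainder 1 when dividing the product, so it cannot be
    on the list."""
    P = 1
    for p in primes:
        P *= p
    q = P + 1
    return smallest_prime_factor(q)

print(new_prime([2, 3, 5]))             # 31: 2*3*5 + 1 = 31 is itself prime
print(new_prime([2, 3, 5, 7, 11, 13]))  # 59: 30031 = 59 * 509
```

Note that P + 1 need not itself be prime, as the second call shows: 30031 is composite, but its prime factors (59 and 509) are still new.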
The “Socrates is mortal” one is a good example because nowadays its conclusion has a probability of less than one.
I would be interested if you would care to elaborate a little. Syllogisms have been a mainstay of philosophy for over two millennia, and undoubtedly I have a lot to learn about them.
In my admittedly limited understanding of syllogisms, the conclusion is true given that the premises are true. Truth is more in the structure of the argument than in its conclusion. If Socrates is not mortal, then either he is not a man or not all men are mortal.
It is the “all men are mortal” proposition that is in danger of being rendered false by sufficiently advanced technology (at least, depending on what you mean by “mortal”).
Or by “man.”
Sure, I’m willing to consider that a prediction about the numbers that correspond to observable phenomena.
And you’re asserting that the chance that Euclid was wrong about the properties of the numbers we observe is not just vanishingly small, but in fact zero, such that no amount of observed evidence could possibly properly change our minds about it.
Yes?
I have some skepticism about absolute certainty. Logic deals in certainties, but it seems unclear whether it absolutely describes anything in the real world. I am not sure that observed evidence plays a role in logic. “If all men are mortal, and if Socrates is a man, then Socrates is mortal” appears to be true. If we were to observe Socrates being immortal, the syllogism would remain valid, but one of its premises, that all men are mortal or that Socrates is a man, would have to be false.
In science at least where evidence plays a decisive role there is no certainty; scientific theories must be falsifiable, there is always some possibility that an experimental result will not agree with theory.
The examples I gave are true by virtue of logical relationships such as if all A are B and all B are C then all A are C. In this vein it might seem certain that if something is here it cannot be there, however this is not true for quantum systems; due to superposition a quantum entity can be said to be both here and there.
Another interesting approach to this problem was taken by David Deutsch. He considers that any mathematical proof is a form of calculation, and all calculation is physical, just as all information has a physical form. Thus mathematical proofs are no more certain than the physical laws invoked to calculate them. All mathematical proofs require our mathematical intuition, the intuition that one step of the proof follows logically from the previous one. Undoubtedly such intuition is the result of our long evolutionary history, which has built knowledge of how the world works into our brains. Although these intuitions are formed from principles encoded in our genetics, they are no more reliable than any other hypothesis supported by the data; they are not certain.
OK. Thanks for clarifying your position.
It may be interesting that although all measurable results in quantum theory are in the form of probabilities there is at least one instance where this theory predicts a certain result. If the same measurement is immediately made a second time on a quantum system the second result will be the same as the first with probability 1. In other words the state of the quantum system revealed by the first measurement is confirmed by the second measurement. It may seem odd that the theory predicts the result of the first measurement as a probability distribution of possible results but predicts only a single possible result for the second measurement.
Wojciech Zurek considers this a postulate of quantum theory (see his paper “Quantum Darwinism”). (Sorry for the typo in the quote.)
Postulate (iii) Immediate repetition of a measurement yields the same outcome starts this task. This is the only uncontroversial measurement postulate (even if it is difficult to approximate in the laboratory): Such repeatability or predictability is behind the very idea of a state.
If we consider that information exchange took place between the quantum system and the measuring device in the first measurement then we might view the probability distribution implied by the wave function as having undergone a Bayesian update on the receipt of new information. We might understand that this new information moved the quantum model to predictive certainty regarding the result of the second measurement.
Of course this certainty is only certain within the terms of quantum theory which is itself falsifiable.
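The Bayesian-update reading above can be illustrated with a toy simulation (a sketch of projective measurement statistics only, not real quantum dynamics; the names are my own): the first measurement samples an outcome from the Born-rule probabilities, the distribution then collapses to a point mass on that outcome, and a repeated measurement agrees with probability 1.

```python
import random

def measure(probs: list[float]) -> tuple[int, list[float]]:
    """Sample an outcome from the given outcome probabilities, then
    'collapse' the distribution to a point mass on that outcome,
    so an immediately repeated measurement must agree."""
    outcome = random.choices(range(len(probs)), weights=probs)[0]
    collapsed = [1.0 if i == outcome else 0.0 for i in range(len(probs))]
    return outcome, collapsed

state = [0.5, 0.5]             # equal superposition: first outcome is random
first, state = measure(state)  # probabilistic result
second, state = measure(state) # certain: all weight now sits on `first`
assert first == second
```

The collapse step is exactly a Bayesian update on the observed outcome: conditioning the distribution on the first result leaves no residual uncertainty about the second.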
I fail to discern your point, here; sorry. Specifically, I don’t see what makes this more interesting in context than my expectation, within the limits of precision and reliability of my measuring device, that if I (e.g.) measure the mass of a macroscopic object twice I’ll get the same result.
Yes, good point. Classical physics, dealing with macroscopic objects, predicts definite (non-probabilistic) measurement outcomes for both the first and second measurements.
The point I was (poorly) aiming at is that while quantum theory is inherently probabilistic even it sometimes predicts specific results as certainties.
I guess the important point for me is that while theories may predict certainties they are always falsifiable; the theory itself may be wrong.
Ah, I see. Yes, exactly… the theory may be wrong, or we have made a mistake in applying it or interpreting it, etc.