Um, guys, there are an infinite number of possible hypotheses. Any evidence that corroborates one theory also corroborates (or fails to refute) an infinite number of alternative specifiable accounts of the world.
What evidence does is allow us to say “Whatever the truth is, it must coexist in the same universe with the true nature of this evidence I have accepted. Theory X and its infinite number of variants seem to be ruled out by this evidence (although I may have misinterpreted the theory or the nature of the evidence), whereas Theory Y and its infinite number of variants seem not yet to be ruled out.”
Yeah, I realize this is a complicated way to phrase it. The reason I like to phrase it this way is to point out that Einstein did not have merely 29 “bits” of evidence, he had VAST evidence, based on an entire lifetime of neuron-level programming, that automatically focused his mind on a productive way of thinking about the universe. He was imagining and eliminating vast swaths of potential theories of the universe, as are we all, from his earliest days in the womb. This is hardly surprising, considering that humans are the result of an evolutionary process that systematically killed the creatures who couldn’t map the universe sufficiently well.
We can never know if we are getting to the right hypothesis. What we can say is that we have arrived at a hypothesis that is isomorphic with the truth, as we understand that hypothesis, over the span of evidence we think we have and think we understand. Always the next bit of evidence we discover may turn what we think we knew upside down. All knowledge is defeasible.
There are not an infinite number of possible hypotheses in a great many sensible situations. For example, suppose the question is “who murdered Fred?”, because we have already learned that he was murdered. The already-known answer, “a human alive at the time he died,” makes the set finite. If we can determine when and where he died, the number of suspects can typically be reduced to dozens or hundreds. Limiting to someone capable of carrying out the means of death may cut 90% of them.
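To illustrate the process (a toy sketch with entirely made-up suspects, just to show how each piece of evidence acts as a filter on an already finite set):

```python
# Each piece of evidence is a constraint that shrinks the finite suspect set.
suspects = [
    {"name": "the gardener", "near_scene": True,  "capable": True},
    {"name": "the butler",   "near_scene": True,  "capable": False},
    {"name": "the heiress",  "near_scene": False, "capable": True},
]

near = [s for s in suspects if s["near_scene"]]   # when and where he died
capable = [s for s in near if s["capable"]]       # means of death
print([s["name"] for s in capable])               # ['the gardener']
```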
To the extent that “bits” of evidence means things that we don’t know yet, the number of bits can be much smaller than suggested. To the extent that “bits” of evidence includes everything we know so far, we all have trillions of bits already in our brains and the minimal number is meaningless.
What about the aliens who landed on earth, murdered Fred and then went away again? Or the infinite number of other possibilities, each of which has a very small probability?
What confuses me about this is that, if we do accept that there are an infinite number of possibilities, most of the possibilities must have an infinitesimal probability in order for everything to sum to 1. And I don’t really understand the concept of an infinitesimal probability—after all, even my example above must have some finite probability attached?
Just to point out what may be a nitpick or a clarification. It’s perfectly possible for infinitely many positive things to sum to a finite number. 1/2+1/4+1/8+...=1.
There can be infinitely many potential murderers. But if the probability of each having done it drops off fast enough you can avoid anything that is literally infinitesimal. Almost all will be less than 1/3^^^^^^3 of course, but that’s a perfectly well defined number you know how to do maths with.
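As a minimal sketch (with a made-up geometric prior, purely for illustration):

```python
# A geometric prior over an unbounded list of suspects: suspect n gets
# probability (1/2)**n, so infinitely many suspects share a total
# probability mass of exactly 1, with no infinitesimals required.
def suspect_prior(n: int) -> float:
    """Prior probability that suspect number n committed the murder."""
    return 0.5 ** n

# Partial sums approach 1 as more suspects are included.
print(sum(suspect_prior(n) for n in range(1, 51)))  # 0.9999999999999991 (= 1 - 2**-50)
```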
Hate to nitpick myself, but 1/2+1/4+1/8+… diverges (e.g., by the harmonic series test). Sum 1/n^2 = 1 + 1/4 + 1/9 + … = (pi^2)/6 is a more fitting example.
An interesting question, in this context, is what it would mean for infinitely many possibilities to exist in a “finite space about any point that can be reached at sub-speed of light times.” Would it be possible under the assumption of a discrete universe (a universe decomposable no further than the smallest, indivisible pieces)? This is an issue we don’t have to worry about in dealing with the infinite sums of numbers that converge to a finite number.
That’s not correct at all. sum(1/2^n)[1:infinity] = 1.
Oops, misread that as sum(1/(2n))[1:infinity] (which it wasn’t), my bad.
Seeing as, at any one time, the universe only has a finite space about any point that can be reached at sub-speed of light times, there is only a finite amount of matter, and hence only a finite set of possibilities, at the point where Fred died. This limits us to finite probabilities of discrete events.
Were your case possible and we were talking about continuous probabilities, any single event would have probability zero; an “area” in probability space between two limiting values (events in probability space) is what gives you a nonzero probability. Your issue is one that I had trouble with until I really sat and thought about how integrals work.
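In standard notation (a general fact about continuous distributions, not specific to anything in this thread): for a continuous random variable $X$ with density $f$,

$$P(X = x) = 0 \quad \text{for any single point } x, \qquad P(a \le X \le b) = \int_a^b f(x)\,dx,$$

so individual points carry zero probability while intervals can carry positive probability.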
FYI: everything I have said is essentially based on my understanding of special relativity, probability and calculus, and is more than open to criticism.
The probability that the universe only has finite space is not exactly 1, is it? Much more might exist than our particular Hubble volume, no? What probability do the, say, world’s top 100 physicists assign, on average, to the possibility that infinitely much matter exists? And on what grounds?
To my understanding, the universe might be so large that everything that could be described with infinitely many characters actually exists. That kind of “TOE” actually passes the Ockham’s razor test excellently; if the universe is that large, then it could (in principle) be exhaustively described by a very simple and short computer program, namely one that produces a string consisting of all the integers in order of size: 110111001011101111000… ad infinitum, translated into any widespread language using practically any arbitrarily chosen system for translation. Name anything that could exist in any universe of countably infinite size, and it would be fully described, even at infinitely many places, in the string of characters that such a simple computer program would produce.
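For what it’s worth, here is a minimal sketch of that “very simple and short computer program” (Python as one arbitrary choice of language); it reproduces exactly the digits quoted above:

```python
from itertools import count, islice

def all_integers_in_binary():
    """Yield the digits of 1, 10, 11, 100, 101, ... concatenated in order of size."""
    for n in count(1):
        yield from format(n, "b")  # n written in binary, no prefix

print("".join(islice(all_integers_in_binary(), 21)))  # 110111001011101111000
```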
Why not assign a pretty large probability to the possibility that the universe is that large, since all other known theories about the size of the universe seem to have a harder time with Ockham’s razor?
“The probability that the universe only has finite space is not exactly 1, is it?”
Nooooo, that’s not it. The probability that the reachable space from a particular point within a certain time is finite is effectively one.
So it doesn’t matter how large the universe is—the aliens a few trillion ly away cannot have killed Fred.
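As a back-of-the-envelope sketch (illustrative numbers only):

```python
# A sub-light-speed suspect can only have reached the crime scene if their
# distance (in light-years) is less than the years available (c = 1 ly/year).
def within_light_cone(distance_ly: float, years_available: float) -> bool:
    """True if the suspect could causally reach the scene in time."""
    return distance_ly < years_available

print(within_light_cone(3e12, 100.0))  # aliens a few trillion ly away: False
print(within_light_cone(1e-12, 1.0))   # someone a few kilometers away: True
```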
How about we put it this way: In the infinite space of possible theories, most of them are far too complex to ever have enough evidence to locate. (If it takes 3^^^3 bits of information to verify the theory… you’re never going to verify the theory.)
In realistic circumstances, we have really a quite small list of theories to choose from, because the list of theories that we are capable of comprehending and testing in human lifetimes is itself very small.
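To make the counting concrete (a rough sketch of the standard information-theoretic point, with the numbers here purely illustrative):

```python
# b bits of evidence can single out at most 2**b hypotheses, so locating a
# theory whose shortest specification needs k bits requires roughly k bits
# of evidence; for k = 3^^^3 that is hopeless.
def max_hypotheses(bits: int) -> int:
    """Upper bound on the hypotheses distinguishable with `bits` bits of evidence."""
    return 2 ** bits

print(max_hypotheses(29))  # 536870912 -- the "29 bits" mentioned upthread
```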
Your comments are clogging up the recent comments feed. I normally wouldn’t mind, but your comments are often replies to comments made several years ago by users who no longer post. Please be mindful of this when posting. Thanks!
This is fine—if the comments provide useful insight (they don’t in this case). We encourage (productive) thread necromancy.