There are not an infinite number of possible hypotheses in a great many sensible situations. For example, suppose the question is “who murdered Fred?”, because we have already learned that he was murdered. The part of the answer we already know, “a human alive at the time he died”, makes the set finite. If we can determine when and where he died, the number of suspects can typically be reduced to dozens or hundreds. Limiting it to people capable of carrying out the means of death may cut 90% of those.
To the extent that “bits” of evidence means things that we don’t know yet, the number of bits can be much smaller than suggested. To the extent that “bits” of evidence includes everything we know so far, we all have trillions of bits already in our brains and the minimal number is meaningless.
What about the aliens who landed on earth, murdered Fred and then went away again? Or the infinite number of other possibilities, each of which has a very small probability?
What confuses me about this is that, if we do accept that there are an infinite number of possibilities, most of the possibilities must have an infinitesimal probability in order for everything to sum to 1. And I don’t really understand the concept of an infinitesimal probability—after all, even my example above must have some finite probability attached?
Just to point out what may be a nitpick or a clarification. It’s perfectly possible for infinitely many positive things to sum to a finite number. 1/2+1/4+1/8+...=1.
There can be infinitely many potential murderers. But if the probability of each having done it drops off fast enough you can avoid anything that is literally infinitesimal. Almost all will be less than 1/3^^^^^^3 of course, but that’s a perfectly well defined number you know how to do maths with.
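The comment’s point can be checked numerically: a sketch assigning suspect n the (illustrative, made-up) geometric prior 1/2^n gives infinitely many positive probabilities whose total is still exactly 1.

```python
# Sketch: give suspect n the prior 1/2**n. Every prior is positive,
# there are infinitely many suspects, yet the priors sum to 1.
# The geometric weighting is an assumption chosen for illustration.
partial = sum(1 / 2**n for n in range(1, 60))
print(partial)  # already equal to 1 up to floating-point precision
```

The partial sums 1/2, 3/4, 7/8, … approach 1 from below, so no individual probability ever needs to be infinitesimal.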
Hate to nitpick myself, but 1/2+1/4+1/8+… diverges (e.g., by the harmonic series test). Sum 1/n^2 = 1/4 + 1/9 + … = (pi^2)/6 is a more fitting example.
An interesting question, in this context, is what it would mean for infinitely many possibilities to exist in a “finite space about any point that can be reached at sub-speed of light times.” Would it be possible under the assumption of a discrete universe (one built from smallest, indivisible pieces)? This is an issue we don’t have to worry about when dealing with infinite sums of numbers that converge to a finite value.
That’s not correct at all. sum(1/2^n)[1:infinity] = 1.
Oops, misread that as sum(1/(2n))[1:infinity] (which it wasn’t), my bad.
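The misreading above is easy to reproduce numerically: the geometric series sum(1/2^n) converges to 1, while the misread series sum(1/(2n)) is half the harmonic series and grows without bound (roughly as (ln N)/2). A quick sketch:

```python
# Geometric series: converges to 1.
geom = sum(1 / 2**n for n in range(1, 51))

# Half the harmonic series: diverges; by N = 10**6 it already
# exceeds 7 and keeps growing like (ln N)/2.
harm = sum(1 / (2 * n) for n in range(1, 10**6 + 1))

print(geom)  # essentially 1.0
print(harm)  # large and still growing with N
```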
At any one time, only a finite region of space around any point can be reached at sub-light-speed travel times. As a result there is only a finite amount of matter, and hence only finitely many things that can happen, at the point where Fred died. This limits us to finite probabilities of discrete events.
Were your case possible, and were we talking about continuous probabilities, any single event would have probability zero; only an “area” in probability space between two limiting values (events in probability space) would give you a discrete probability. Your issue is one I had trouble with until I really sat down and thought about how integrals work.
FYI: everything I have said is essentially based on my understanding of special relativity, probability and calculus, and is more than open to criticism.
The probability that the universe only has finite space is not exactly 1, is it? Much more might exist than our particular Hubble volume, no? What probability do, say, the world’s top 100 physicists assign, on average, to the possibility that infinitely much matter exists? And on what grounds?
To my understanding, the universe might be so large that everything that could be described with infinitely many characters actually exists. That kind of “TOE” actually passes the Ockham’s razor test excellently; if the universe is that large, then it could (in principle) be exhaustively described by a very simple and short computer program, namely one that produces a string consisting of all the integers in order of size: 110111001011101111000… ad infinitum, translated into any widespread language using practically any arbitrarily chosen system for translation. Name anything that could exist in any universe of countably infinite size, and it would be fully described, even at infinitely many places, in the string of characters that such a simple computer program would produce.
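The “very simple and short computer program” the comment describes can be sketched directly: emit the binary representations of 1, 2, 3, … concatenated into one string. The first eight integers (1, 10, 11, 100, 101, 110, 111, 1000) reproduce exactly the prefix quoted above.

```python
# Sketch: generate the concatenation of all positive integers
# written in binary, as described in the comment.
from itertools import count, islice

def binary_integers():
    """Yield 1, 10, 11, 100, ... as binary strings, without end."""
    for n in count(1):
        yield format(n, "b")

prefix = "".join(islice(binary_integers(), 8))
print(prefix)  # -> 110111001011101111000
```

The program itself is a few lines, which is the comment’s point: the description is short even though the described object is infinite.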
Why not assign a pretty large probability to the possibility that the universe is that large, since all other known theories about the size of the universe seem to have a harder time with Ockham’s razor?
“The probability that the universe only has finite space is not exactly 1, is it?”
Nooooo, that’s not it. The probability that the space reachable from a particular point within a given time is finite is effectively one.
So it doesn’t matter how large the universe is—the aliens a few trillion ly away cannot have killed Fred.