I’m still not sure how I’m supposed to interpret this question. If you’re asking whether I think “matter is made up of atoms” is an extremely useful working hypothesis for many many scientific purposes, then the answer is obviously “yes” with probability that only negligibly differs from 1. (ETA: Don’t ask me how negligibly different, because I couldn’t tell you. I am not enough of an ideal Bayesian that I can meaningfully attach probabilities with many significant digits to my beliefs.)
If you’re asking whether the fundamental structure of matter is in fact discrete, I would assign that a probability of about 0.3. Quantum field theory is sometimes interpreted as a particle theory, but this seems wrong to me. It is best interpreted as telling us that the basic constituents of nature are continuous field configurations (or, to be more precise, linear superpositions of field configurations).
Particle number is not fixed in any workable relativistic quantum field theory. This strongly suggests that particles are emergent rather than fundamental. If you suppose that a particle cannot be located at two disjoint regions in a single space-like hyperplane, then any relativistic quantum theory of a fixed number of particles predicts a zero probability of finding a particle anywhere (see here for the proof). So the only consistent particle QFT is one where there are no particles!
There’s also the fact that an observer accelerating uniformly in a Minkowski vacuum will see a thermal bath of particles (the Unruh effect). If one can bring particles in or out of existence simply by a change of reference frame, then they shouldn’t be part of one’s fundamental ontology.
Of course QFT itself is in all probability not the right fundamental theory, so matter may still turn out to have discrete constituents.
That’s the general answer I’m aiming to evoke; I’m trying to get a better idea of just how big that ‘negligibly’ is.
Like I said in my edit, I can’t give you a precise answer, but to narrow it down a bit, I’m comfortable saying that the probability is higher than 1 − 10^(-9).
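As a rough sketch of the arithmetic behind that figure, using the standard log-odds conventions: 1 − 10^(-9) is odds of about 10^9 to 1, which comes out to roughly 90 decibans, or just under 30 bits.

```python
import math

# A probability of 1 - 1e-9 corresponds to odds of roughly 1e9 : 1.
odds = 1e9
print(10 * math.log10(odds))  # 90.0 decibans
print(math.log2(odds))        # ~29.9 bits, i.e. roughly "30 bits" of belief
```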
Really? What’s the probability that a human can even be in an epistemic state that would justify 30 bits of belief?
About the same as the probability that a human can be in a physical state that allows them to walk. Winners of a 100-million-to-one lottery overcome a prior improbability of 10^-8, and presumably are at least as certain they have won, once they have collected, as they were previously expecting to lose, so there’s somewhere above a factor of 10^16 of updating: 160 decibans, or 53 bits. And ordinary people do it. If you’re so smart you can’t, there’s something wrong with your smartness.
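A minimal check of that arithmetic, assuming prior odds of winning around 10^(-8) and posterior odds of having won of at least 10^8 once the money is in hand:

```python
import math

# Odds swing from ~1e-8 (before the draw) to ~1e8 (after collecting the prize).
odds_ratio = 1e8 / 1e-8             # a factor of ~1e16 in the odds
print(10 * math.log10(odds_ratio))  # 160.0 decibans of updating
print(math.log2(odds_ratio))        # ~53.2 bits
```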
What strikes you as implausible about 30 bits of belief? It takes more than 30 bits to single out one individual on this planet.
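And a rough check of the thirty-bits-per-individual figure, assuming a world population of around 7 billion:

```python
import math

# Bits of information needed to single out one person among ~7 billion.
world_population = 7e9
print(math.log2(world_population))  # ~32.7 bits, comfortably more than 30
```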
So all we need is an example of a universe without atoms (corresponding to the example of someone who did win the lottery despite the improbability of doing that) for this analogy to work.
I think there are fields of thought in which the best paradigm is that something either is or isn’t, and where probabilistic thinking will do no good, and, if forced or contrived to seem to work, may do harm (e.g. the models by which Wall Street came up with a plausible—to some—argument that CDSs of subprime mortgages could be rated AAA).
And there are fields of thought in which the idea that something simply is or isn’t is the thing likely to mislead or do harm (see http://en.wikipedia.org/wiki/Interesting_number where one gets into trouble by thinking a number either is or isn’t “interesting”).
The “interesting number” business isn’t probabilistic either, though there may be some usefulness in Bayesian arguments that treat subjective “levels of certainty” like probabilities.
Note that probabilities like that cannot be estimated because they are at the noise level. For example, the odds are about the same as the odds that you are delusional and no one actually asked this question (i.e., the odds are tiny and hard to evaluate).
What level of confidence is high (or low) enough that you would feel means that something is within the ‘noise level’?
Depending on how smart I feel today, anywhere from −10 to 40 decibans.
(edit: I remember how log odds work now.)
Why is it important to quantify that value?
The most important reason I can think of: the largest number of decibans that’s yet been mentioned is 160 (though that’s more of a delta, going from −80 to +80 decibans); the highest actual number of decibans is around 100. This gives me reasonably good confidence that if any practical rules-of-thumb involving decibans I come up with can handle, say, from −127 to +127 decibans (easily storable in a single byte), then that should be sufficient to handle just about anything I come across, and I don’t have to spend the time and effort trying to extend that rule-of-thumb to 1,000 decibans. (A rough sketch of what such a rule-of-thumb might look like follows at the end of this comment.)
I’m also interested in finding out what the /range/ of the highest deciban values given is. One person said 50; another said 100. This gives an idea of how loosely calibrated even LWers are when dealing with extreme levels of confidence, and suggests that figuring out a decent way /to/ calibrate such confidences is an area worth looking into.
An extremely minor reason, but I feel like mentioning it anyway: I’m the one and only DataPacRat, and this feels like a shiny piece of data to collect and hoard even if it doesn’t ever turn out to have any practical use.
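A minimal sketch of the kind of byte-sized deciban rule-of-thumb mentioned above; the helper names and the clamping behaviour are just illustrative assumptions, not anything anyone in the thread actually proposed:

```python
import math

def probability_to_deciban_byte(p: float) -> int:
    """Convert a probability to decibans, rounded and clamped to a signed byte."""
    decibans = 10 * math.log10(p / (1 - p))
    return max(-127, min(127, round(decibans)))

def deciban_byte_to_probability(db: int) -> float:
    """Recover a probability from a deciban value stored in a signed byte."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

print(probability_to_deciban_byte(1 - 1e-9))  # 90, well within the byte's range
print(deciban_byte_to_probability(127))       # ~1 - 2e-13, the byte's upper limit
```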
“Not all data is productively quantifiable to arbitrary precision.” Hoard that, and grow wiser for it.
Indeed not.
In this particular case, can you recommend any better way of finding out what the limits to precision actually are?
I think how confident you can be in the math of your own confidence and in the laws of probability, according to your priors for the laws of probability, is pretty much as far down as you can go.
That confidence prior in the laws of probability then lets you apply them to your memory and do conjunction/disjunction math on the many instances of something turning out “correct”, so you get a reliability of memory given a certain number of memory data points and a certain reliability of probabilities.
Then you kind of keep doing more Bayes, building layer after layer relying on the reliability of your own memory and the applicability of this whole process altogether.
That seems like an acceptable upper bound to me.
And yes, that’s probably rather equivalent to saying “What are your priors for Bayes, Occam, laws of probability and your memory all correct and functional? There, that’s your upper bound.”
Would you care to offer any estimates of /your/ priors for Bayes, etc? Or what your own inputs or outputs for the overall process you describe might be?
I haven’t calculated the longer version yet, but my general impression so far is that I’m around the 60-deciban mark as my general upper bound for any single piece of knowledge (a quick conversion to probability terms is sketched below).
I’m not sure I’m even capable of calculating the longer version, since I suspect it involves a lot more information problems and more advanced math: things like the probability distributions of causal independence over individually-uncertain memories forged from unreliable causes in the (presumably very complex) causal graph representing all of this, and so on.
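To put that ~60 deciban upper bound in probability terms: 60 decibans is odds of 10^6 to 1, i.e. roughly a one-in-a-million chance of being wrong.

```python
odds = 10 ** (60 / 10)    # 60 decibans -> odds of 1e6 : 1
print(odds / (1 + odds))  # ~0.999999, about a one-in-a-million error rate
```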
I can’t. But that sounds like a more useful question!