Why is it important to quantify that value?
The most important reason I can think of: the largest number of decibans that’s yet been mentioned is 160 (though that’s more of a delta, going from −80 to +80 decibans); the highest actual number of decibans is around 100. This gives me reasonably good confidence that if any practical rules of thumb involving decibans that I come up with can handle, say, −127 to +127 decibans (easily storable in a single byte), then that should be sufficient to handle just about anything I come across, and I don’t have to spend the time and effort trying to extend that rule of thumb to 1,000 decibans.
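For concreteness, the deciban scale is just ten times the base-10 logarithm of the odds ratio. A minimal sketch of the conversion (the function names are my own, purely for illustration):

```python
import math

def prob_to_decibans(p):
    """Convert a probability to decibans: 10 * log10 of the odds p:(1-p)."""
    return 10 * math.log10(p / (1 - p))

def decibans_to_prob(db):
    """Convert decibans back to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)
```

At 100 decibans the odds are 10^10 to 1, and ±127 decibans corresponds to odds of roughly 5 × 10^12 to 1 in either direction, which is why a single signed byte comfortably spans every value mentioned in this thread.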
I’m also interested in finding out what the /range/ of the highest deciban values given is. One person said 50; another said 100. This gives an idea of how loosely calibrated even LWers are when dealing with extreme levels of confidence, and suggests that figuring out a decent way /to/ calibrate such confidences is an area worth looking into.
An extremely minor reason, but I feel like mentioning it anyway: I’m the one and only DataPacRat, and this feels like a shiny piece of data to collect and hoard even if it doesn’t ever turn out to have any practical use.
“Not all data is productively quantifiable to arbitrary precision.” Hoard that, and grow wiser for it.
Indeed not.
In this particular case, can you recommend any better way of finding out what the limits to precision actually are?
I think how confident you can be in the math of your own confidence and in the laws of probability, according to your priors on the laws of probability, is pretty much as far down as you can go.
This confidence prior in the laws of probability then lets you apply them to your memory and do conjunction/disjunction math on the many instances of something turning out “correct”, so you get a reliability of memory given a certain number of memory datapoints and a certain reliability of probabilities.
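The conjunction/disjunction math described here can be sketched as follows, under the simplifying (and probably unrealistic) assumption that the memories are independent and share a single per-memory reliability:

```python
def all_correct_prob(reliability, n):
    """Conjunction: probability that all n independent memories are
    correct, each with the same per-memory reliability."""
    return reliability ** n

def at_least_one_correct(reliability, n):
    """Disjunction: probability that at least one of n independent
    memories is correct."""
    return 1 - (1 - reliability) ** n
```

Even with a 99% per-memory reliability, the conjunction over ten memories already drops to about 90%, which illustrates how quickly layered confidence erodes.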
Then you kind of keep doing more Bayes, building layer after layer relying on the reliability of your own memory and the applicability of this whole process altogether.
That seems like an acceptable upper bound to me.
And yes, that’s probably rather equivalent to saying “What are your priors for Bayes, Occam, the laws of probability and your memory all being correct and functional? There, that’s your upper bound.”
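That upper-bound calculation could be sketched like this; the prior values below are made-up placeholders for illustration, not anyone’s actual estimates, and the independence assumption is again a simplification:

```python
import math

# Hypothetical priors, purely illustrative; treated as independent
priors = {
    "laws_of_probability": 0.999999,
    "bayes_theorem": 0.999999,
    "occams_razor": 0.9999,
    "own_memory": 0.999,
}

# Conjunction: probability that all of them hold at once
p_all = 1.0
for p in priors.values():
    p_all *= p

# Express the resulting confidence ceiling in decibans
upper_bound_db = 10 * math.log10(p_all / (1 - p_all))
```

With these placeholder numbers the bound comes out somewhere around 30 decibans; the point of the sketch is only that the weakest prior in the conjunction dominates the ceiling.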
Would you care to offer any estimates of /your/ priors for Bayes, etc? Or what your own inputs or outputs for the overall process you describe might be?
I haven’t calculated the longer version yet, but my general impression so far is that I’m around the ~60 deciban mark as my general upper bound for any single piece of knowledge.
I’m not sure I’m even capable of calculating the longer version, since I suspect it involves a lot more information problems and more advanced math, such as calculating the probability distributions of causal independence over individually uncertain memories formed from unreliable causes in the (presumably very complex) causal graph representing all of this.
I can’t. But that sounds like a more useful question!