“So much for begging the question. Please do a calculation, using the theorems of Bayes (or theorems derived from Bayesian theorems), which gives an incorrect number given correct numbers as input.”
Couldn’t we say the same thing for Turing machines? “Please do a computation, using a Universal Turing Machine (or equivalent), which gives an incorrect number, given correct numbers as input.”
Remember that a Universal Turing Machine takes a Turing machine as an input, so you can’t muck around with the algorithm it runs without making the input “incorrect”.
I thought the whole point of probabilistic methods is that it doesn’t matter too much what the prior is; the system will always eventually converge on the right answer...
Well, apart from in some cases. The following is a situation where, unless you give the system exactly the right prior, it will never come to the right answer. Not quite what you were after, but it shows a hole in Bayes to my mind.
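To make the usual convergence claim concrete first: here is a minimal sketch (my own toy example, not from the original comment) of conjugate Bayesian updating on a biased coin, where three very different Beta priors all end up near the true bias once enough data arrives.

```python
import random

def posterior_mean(alpha, beta, flips):
    # Beta(alpha, beta) prior on the coin's bias; with Bernoulli data the
    # posterior is Beta(alpha + heads, beta + tails), whose mean is below.
    heads = sum(flips)
    tails = len(flips) - heads
    return (alpha + heads) / (alpha + beta + heads + tails)

random.seed(0)
true_p = 0.7
flips = [1 if random.random() < true_p else 0 for _ in range(100_000)]

# A flat prior, a strongly pro-heads prior, and a strongly pro-tails prior
# all converge to roughly the same posterior mean given enough samples.
for alpha, beta in [(1, 1), (50, 1), (1, 50)]:
    print(alpha, beta, round(posterior_mean(alpha, beta, flips), 3))
```

The point of the Sensitive Urn below is that this convergence story assumes the reasoner can keep sampling the relevant condition, which is exactly what fails there.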
Environmental output is the entire effect that a computation has on the environment (e.g. heat, radiation, reduction in the energy of the power source).
In the Sensitive Urn, the colours of the balls are dependent upon the average environmental output from the processing done in the area since the last sample; that is, they are a function of the processing done. We could represent knowledge about the probability function in the following way, with the standard notation:
P(r | Φ(μ, t_s − 100, t_s) > 10)
This is the probability that a ball is red given that there has been environmental output of at least 10 units per millisecond in the 100 milliseconds before the time of the current sample, t_s. We shall say that the probabilistic reasoner produces 20 units of output per millisecond and so fulfils this property. This value can therefore be found during its normal operation and sampling.
However
P(r | ∼(Φ(μ, t_s − 100, t_s) > 10))
the probability that the ball will be red if there is no such processing in the area, is harder to find. For the sake of argument, say that this is the most efficient Bayesian reasoner that we could build. Finding this value would require that the sampler no longer process to the same extent, and because processing is required to update probabilities, it can no longer update probabilities. It is in effect a blind spot: a place the sampler cannot go without changing itself and ceasing to be a Bayesian sampler.
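The blind spot can be sketched in a toy simulation. All the specific numbers here (0.8, 0.3, 20) are illustrative assumptions of mine, not part of the original scenario; the structural point is just that every draw the running reasoner makes happens under the Φ > 10 condition, so only one of the two conditional probabilities is ever observable to it.

```python
import random

# Toy model of the Sensitive Urn. The urn's behaviour depends on whether the
# local environmental output over the last 100 ms exceeded 10 units/ms.
P_RED_IF_ACTIVE = 0.8   # P(r | Phi > 10): observable while the reasoner runs
P_RED_IF_QUIET  = 0.3   # P(r | not (Phi > 10)): the blind spot

REASONER_OUTPUT = 20.0  # units/ms the reasoner emits while operating

def draw_ball(avg_output):
    p_red = P_RED_IF_ACTIVE if avg_output > 10 else P_RED_IF_QUIET
    return 'red' if random.random() < p_red else 'white'

random.seed(1)
# While the reasoner is running, every one of its samples is taken under
# Phi > 10, so its counts only ever estimate P(r | Phi > 10). The quiet-urn
# branch is never exercised; sampling it would require the reasoner to stop
# processing, and hence stop updating.
samples = [draw_ball(REASONER_OUTPUT) for _ in range(10_000)]
estimate = samples.count('red') / len(samples)
print(round(estimate, 2))  # close to P_RED_IF_ACTIVE; P_RED_IF_QUIET unreachable
```

Nothing in the update rule is wrong here; the limitation is that the act of gathering evidence changes the condition being measured.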