But my sense is that the “substantial school in the philosophy of science [that] identifies Bayesian inference with inductive inference and even rationality as such”, as well as Eliezer’s OB persona, is talking more about a prior implicit in informal human reasoning than about anything that’s written down on paper. You can then see model checking as roughly comparing the parts of your prior that you wrote down to all the parts that you didn’t write down. Is that wrong?
I don’t think informal human reasoning corresponds to Bayesian inference with any prior. Maybe you mean “what informal human reasoning should be”. In that case I’d like a formal description of what it should be (ahem).
Solomonoff induction, mebbe?
Wei Dai thought up a counterexample to that :-)
Gelman/Shalizi don’t seem to be arguing from the possibility that physics is noncomputable; they seem to think their argument (against Bayes as induction) works even under ordinary circumstances.
It seems to me that Wei Dai’s argument is flawed (and I may be overly arrogant in saying this; I haven’t even had breakfast this morning).
He says the probability of anyone knowing the answer to an uncomputable problem would be evaluated at 0 from the start, but I don’t see why a measure-zero hypothesis has to be treated as impossible. For example, the hypothesis “they’re making it up as they go along,” which has probability 2^(-S) based on the size of the answer set, shrinks at a predictable rate as evidence arrives. That means that, given any finite amount of evidence, the AI should be able to distinguish between two possibilities: they are very good at computing (or guessing), versus all humans have been wrong about mathematics forever. Unless new evidence comes in to favor one over the other, “humans have been wrong forever” should retain a consistent probability mass, which will grow relative to the other hypothesis, “they are making it up.”
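To make that concrete, here’s a rough sketch (with made-up numbers, not anyone’s actual proposal) of how the posterior odds would move if the “impossible” hypothesis were given a small but nonzero prior:

```python
# H1 = "they genuinely have access to the answers" (small but nonzero prior,
# as suggested below); H2 = "they're making it up as they go along".
# The prior weight and the guessing probability are illustrative assumptions.

def posterior_odds(s, prior_h1=1e-6, p_correct_if_guessing=0.5):
    """Odds of H1 over H2 after s independently verified correct answers."""
    prior_h2 = 1.0 - prior_h1
    # P(data | H1) = 1 (they always get it right);
    # P(data | H2) = p_correct_if_guessing ** s  (= 2^(-s) for coin-flip guesses)
    likelihood_ratio = 1.0 / (p_correct_if_guessing ** s)
    return (prior_h1 / prior_h2) * likelihood_ratio

for s in (0, 10, 20, 40):
    print(s, posterior_odds(s))
```

Even a prior of one in a million gets overwhelmed after a few dozen verified answers, which is the point: nonzero is all it takes.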
Nobody seems to have proposed this (though I may have missed it while skimming some of the replies), and it seems like a relatively simple thing (to me) to adjust the AI’s prior distribution to give “impossible” things low but nonzero probability.
Wei Dai’s argument was specifically against the Solomonoff prior, which assigns probability 0 to the existence of halting problem oracles. If you have an idea how to formulate another universal prior that would give such “impossible” things positive probability, but still sum to 1.0 over all hypotheses, then by all means let’s hear it.
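(For reference, and glossing over the details of prefix machines and semimeasures: the Solomonoff prior weights an observed string x by the programs that produce it,

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},

where the sum runs over programs p for a fixed universal machine U. A hypothesis that no program can implement, like “this sequence comes from a halting oracle,” contributes nothing to the sum, which is why it gets probability 0.)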
Yeah, it’s certainly a good argument against that. But the title of the thread is “is induction unformalizable”, and on that point I remain unconvinced.
If I were to formalize some kind of prior, I would probably use a lot of epsilons (since zero is not a probability), including an epsilon for “things I haven’t thought up yet.” On the other hand I’m not really an expert on any of these things, so I imagine Wei Dai would be able to poke holes in anything I came up with anyway.
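Something like the mixture below is roughly what I have in mind; the epsilon and the catch-all term Q are placeholders, not a worked-out construction:

P(h) = (1 - \epsilon)\, M(h) + \epsilon\, Q(h),

where M is whatever prior you started with and Q is supposed to cover “things I haven’t thought up yet.” Of course the hard part is saying what Q actually predicts.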
There’s no general way to have a “none of the above” hypothesis as part of your prior, because it doesn’t make any specific prediction and thus you can’t update its likelihood as data comes in. See the discussion with Cyan and others about NOTA somewhere around here.
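To spell out the obstruction: updating requires a likelihood,

P(\text{NOTA} \mid D) = \frac{P(D \mid \text{NOTA})\, P(\text{NOTA})}{P(D)},

and “none of the above” by construction doesn’t specify P(D | NOTA), so there’s nothing to plug in.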
Well then I guess I would hypothesize that solving the problem of a universal prior is equivalent to solving the problem of NOTA. I don’t really know enough to get technical here. If your point is that it’s not a good idea to model humans as Bayesians, I agree. If your point is that it’s impossible, I’m unconvinced. Maybe after I finish reading Jaynes I’ll have a better idea of the formalisms involved.