I think this is due to Yudkowsky’s focus on AI theory; an AI can’t use discretion to choose the right method unless we formalize this discretion. Bayes’ theorem is applicable to all inference problems, while frequentist methods have domains of applicability. This may seem philosophical to working statisticians—after all, Bayes’ theorem is rather inefficient for many problems, so it may still be considered inapplicable in this sense—but programming an AI to use a frequentist method without a complete understanding of its domain of applicability could be disastrous, while that problem just does not exist for Bayesianism. There is the problem of choosing a prior, but that can be dealt with by using objective priors or Solomonoff induction.
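For what it’s worth, here is a minimal sketch of what an “objective prior” looks like in the simplest case, the Jeffreys prior for a Bernoulli parameter (the counts below are hypothetical, purely for illustration):

```python
# A minimal sketch of an "objective prior" in the simplest case:
# the Jeffreys prior Beta(1/2, 1/2) for a Bernoulli parameter.
from scipy.stats import beta

successes, failures = 7, 3        # hypothetical data
prior_a, prior_b = 0.5, 0.5       # Jeffreys prior for Bernoulli(theta)

# Conjugate update: the posterior is Beta(a + successes, b + failures),
# so no subjective prior-elicitation step is needed.
posterior = beta(prior_a + successes, prior_b + failures)
print(posterior.mean())           # posterior mean of theta
print(posterior.interval(0.95))   # central 95% credible interval
```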
I’m not sure what you meant by that, but as far as I can tell not explicitly using Bayesian reasoning makes AIs less functional, not unfriendly.
Yes, mostly that lesser meaning of disastrous, though an AI that almost works but has a few very wrong beliefs could be unfriendly. If I misunderstood your comment and you were actually asking for an example of a frequentist method failing, one of the simplest examples is a mistaken assumption of linearity.
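For concreteness, a minimal sketch of that failure mode, with made-up data: a linear model fit to a quadratic relationship extrapolates badly, and nothing in the fitting procedure itself flags the problem.

```python
# Ordinary least squares assumes a linear mean function; fit to
# quadratic data, it extrapolates badly with no internal warning.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = x**2 + rng.normal(0, 0.01, 200)     # true relationship is quadratic

slope, intercept = np.polyfit(x, y, 1)  # mistaken linear model
print(slope * 2.0 + intercept)          # prediction at x = 2: about 1.8
print(2.0**2)                           # true value at x = 2: 4.0
```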
“There is the problem of choosing a prior, but that can be dealt with by using objective priors or Solomonoff induction.”
Yeah, well. That of course is the core of what is dubious and disputed here. Really, Bayes’ theorem itself is hardly controversial, and talking about it this way is pointless.
There’s sort of a continuum here. A weak claim is that these priors can be an adequate model of uncertainty in many situations. Stronger and stronger claims will assert that this works in more and more situations, and the strongest claim is that these cover all forms of uncertainty in all situations. Lukeprog makes the strongest claim, by means of examples which I find rather sketchy relative to the strength of the claim.
Regarding Kaj Sotala’s conversation, adherents of the weaker claim would be fine with the “use either methodology if that suits it” attitude. This is less acceptable to those who think priors should be broadly applicable. And it is utterly unacceptable from the perspective of the strongest claim.
For that matter, “either” is incorrect (note that in the original conversation one of them actually talks about several methodologies rather than two). There is a lot of work on modeling uncertainty in non-frequentist and non-Bayesian ways.
Anyone who bases decisions on a non-Bayesian model of uncertainty that is not equivalent to Bayesianism with some prior is vulnerable to Dutch books.
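For readers unfamiliar with the argument, a minimal sketch of the classic Dutch book against incoherent credences (the numbers are illustrative):

```python
# The agent's credences in A and not-A sum to 1.2, so it regards each
# bet below as individually fair, yet accepting both guarantees a loss.
p_A, p_notA = 0.6, 0.6                # incoherent: should sum to 1

stake = 1.0
# Sell the agent a ticket paying `stake` if A, priced at p_A * stake,
# and a ticket paying `stake` if not-A, priced at p_notA * stake.
price_paid = (p_A + p_notA) * stake   # 1.2
payout = stake                        # exactly one ticket pays, either way
print(price_paid - payout)            # sure loss for the agent: 0.2
```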
It seems not. Sniffnoy’s recent thread asked exactly this question: whether Savage’s axioms can really be justified by Dutch book arguments.
I was thinking of the simpler case of someone who has already assigned utilities as required by the VNM axioms for the noncontroversial case of gambling with probabilities that are relative frequencies, but refuses on philosophical grounds to apply the expected utility decision procedure to other kinds of uncertainty.
(I do think the statement still stands in general. I don’t have a complete proof but Savage’s axioms get most of the way there.)
On the thread cited I gave a three-state, two-outcome counterexample to P2 which does just that. With only two outcomes, a utility function is obviously not an issue. (It can be extended with an arbitrary number of “fair coins”, for example, to satisfy P6, which covers your relative-frequency requirement here.)
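For readers who don’t want to dig up that thread, here is a sketch of the standard Ellsberg pattern, which has exactly this three-state, two-outcome shape (my illustration; the actual counterexample there may differ in detail):

```python
# States: red (r), black (b), yellow (y); outcomes: win (1) or lose (0).
from itertools import product

acts = {
    "red":          {"r": 1, "b": 0, "y": 0},
    "black":        {"r": 0, "b": 1, "y": 0},
    "red_yellow":   {"r": 1, "b": 0, "y": 1},
    "black_yellow": {"r": 0, "b": 1, "y": 1},
}

def eu(act, p):
    return sum(p[s] * acts[act][s] for s in p)

# Ambiguity-averse preferences: "red" over "black", but "black_yellow"
# over "red_yellow". Each pair of acts agrees on state y, so P2 says
# the preferences must flip together; search a grid of priors for one
# that rationalizes both under expected utility.
grid = [i / 20 for i in range(21)]
found = []
for pr, pb in product(grid, grid):
    if pr + pb > 1:
        continue
    p = {"r": pr, "b": pb, "y": 1 - pr - pb}
    if eu("red", p) > eu("black", p) and eu("black_yellow", p) > eu("red_yellow", p):
        found.append(p)
print(found)  # [] -- no prior rationalizes both preferences
```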
My weak claim is that it is not vulnerable to “Dutch-book-type” arguments. My strong claim is that this behaviour is reasonable, even rational. The strong claim is being disputed on that thread. And of course we haven’t agreed on any prior definition of reasonable or rational. But nobody has attempted to Dutch book me, and the weak claim is all that is needed to contradict your claim here.
Sorry, I didn’t check that thread for posts by you. I replied there.