I think the universal prior (à la Solomonoff induction) would give it positive probability, FWIW. A universe that has a GOD(infinity) seems to me describable by a shorter program than one that has GOD(N) for N large enough to actually be godlike. God simply stops time, reads the universe state (with some stub substituted for himself to avoid an infinite regress), writes a new one, then resumes from the new state.
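For concreteness, one standard way of writing the universal prior is as a weighted sum over programs for a prefix universal machine U (this is just the textbook formulation, nothing specific to the GOD(N) setup above):

$$M(x) \;=\; \sum_{p \,:\, U(p)\,=\,x*} 2^{-|p|},$$

so a hypothesis gets positive probability exactly when some finite program generates it, and the shortest such program dominates the sum.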
I thought this, but now I’m not sure. Surely, if you were God, you would be able to instantly work out BB(n) for any n. This would make you uncomputable, which would indeed mean the Solomonoff prior assigns you being God a probability of zero.
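For reference, BB(n) here is the Busy Beaver function (the standard definition, not something introduced in this thread):

$$BB(n) \;=\; \max\{\, s : \text{some } n\text{-state Turing machine, started on a blank tape, halts after exactly } s \text{ steps} \,\},$$

and any agent that can compute BB(n) for every n can decide the halting problem: run a given n-state machine for BB(n) steps, and if it hasn’t halted by then it never will.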
There is quite a good argument that this treatment of uncomputables is a flaw rather than a feature of the Solomonoff prior, although right now it does seem to be working out quite conveniently for us.
Surely if you were God you would be able to instantly work out BB(n) for any n, which would make you uncomputable, which would indeed mean the Solomonoff prior assigns you being God a probability of zero.
I agree that the Solomonoff prior isn’t going to give positive probability to me having any sort of halting oracle. Hmm, I’m not sure whether inferring someone’s utility function is computable. I suppose that inferring the utility function of a brain of fixed complexity, when arbitrarily large (but still finite) computational capacity can be brought to bear, could give an arbitrarily close approximation, so the OP could be revised to fix that. It doesn’t presently seem worth the effort, though: the added verbiage would obscure the main point without adding anything obviously useful.
A bigger problem is your ability to hand out arbitrarily large amounts of utility. Suppose the universe can be simulated by an N-state Turing machine; this limits the number of possible states it can occupy to a finite (but probably very large) number. This in turn bounds the amount of utility you can offer me, since each state has finite utility and the maximum of a finite set of finite numbers is finite. (The reason this doesn’t automatically imply a bounded utility function is that we are uncertain of N.)
As a result of this:
P(you can offer me k utility) > 0 for any fixed k
but
P(you can offer me x utility for any x) = 0
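A minimal way to restate that pair of claims, writing N for the (unknown) number of states of the simulating machine and $U_{\max}(N)$ for the largest utility attainable in an N-state universe (my notation, not the parent’s):

$$P\big(U_{\max}(N) \ge k\big) > 0 \ \text{ for every fixed } k, \qquad \text{but} \qquad P\big(U_{\max}(N) = \infty\big) = 0,$$

since arbitrarily large finite N each get positive prior weight, while no finite N yields an infinite state space.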
To be honest though, I’m not really comfortable with this, and I think Solomonoff induction needs to be fixed (I don’t feel like I believe with certainty that the universe is computable). The real reason why you haven’t seen any of my money is that I think the maths is bullshit, as I have mentioned elsewhere.
Thinking about it more, this isn’t a serious problem for the dilemma. P(you can offer me k utility) goes to zero as k goes to infinity, but there’s no reason to suppose it goes to zero any faster than 1/k does.
This means you can still set up a similar dilemma, with the probability of your being able to offer me 2^n utility eventually becoming greater than (1/2)^n for sufficiently large n, satisfying the conditions for a St Petersburg Lottery.
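For anyone who wants the arithmetic behind “the conditions for a St Petersburg Lottery” (this is the standard lottery calculation, not a claim about the exact structure of this dilemma): a payout of 2^n utility with probability (1/2)^n for each n gives expected utility

$$\sum_{n \ge 1} 2^n \cdot \left(\tfrac{1}{2}\right)^{n} \;=\; \sum_{n \ge 1} 1 \;=\; \infty,$$

and probabilities that are eventually at least this large keep the expectation divergent.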
That’s just Pascal’s mugging, though: the problem that “the utility of a Turing machine can grow much faster than its prior probability shrinks”.
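A toy version of that slogan (the particular payoff and the rough coding estimate are mine, not from the thread): specifying the number n costs only about $\log_2 n$ bits, so a hypothesis promising a payout of $3\uparrow\uparrow\uparrow n$ utility has prior weight shrinking only roughly like

$$2^{-(\log_2 n + O(\log \log n))} \;\approx\; \frac{1}{n\,\mathrm{polylog}(n)},$$

while the promised utility grows incomparably faster than n, so the terms (prior) × (utility) are unbounded in n and the expected-utility sum is dominated by these hypotheses.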
By Rice’s theorem, inferring utility functions is uncomputable in general, but it is probably possible to do for humans. If not, that would be quite a problem for FAI designers.
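For reference, the form of Rice’s theorem being invoked (the standard statement; the application to utility functions is the parent’s gloss, not part of the theorem): for any property $\mathcal{S}$ of partial computable functions that holds of some but not all of them, the index set

$$\{\, e \;:\; \varphi_e \in \mathcal{S} \,\}$$

is undecidable, so no algorithm can answer a non-trivial semantic question about arbitrary programs. A particular finite human brain, though, is one fixed physical system rather than an arbitrary program, which is why “probably possible to do for humans” is still plausible.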