I posit that linearity always holds. In a deterministic universe, the linear function is between the ε-adjoined open affine space generated by our primitive set of actions and the ε-adjoined utilities. (Like in my first comment.)
In a probabilistic universe, the linear function is between the ε-adjoined open affine space generated by (the set of points in) the closed affine space generated by our primitive set of actions and the ε-adjoined utilities. (Like in my second comment.)
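To make that concrete, here is one way I could write the deterministic version down (my notation, and only one possible formalization, with ℝ[ε] the reals with an infinitesimal ε adjoined):

\[
\Delta_\varepsilon(a_1,\dots,a_n) \;=\; \Big\{ \textstyle\sum_i \lambda_i a_i \;:\; \sum_i \lambda_i = 1,\ \lambda_i \ge \varepsilon \Big\},
\qquad
U : \Delta_\varepsilon(a_1,\dots,a_n) \to \mathbb{R}[\varepsilon] \ \text{affine}.
\]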
I got from one of your comments that assuming linearity wards off some problem. Does it come back in the probabilistic-universe case?
My point was that I don’t know where to assume the linearity is. Whenever I have private randomness, I have linearity over what I end up choosing with that randomness, but not linearity over what probability I choose. But I think this is not getting at the disagreement, so I pivot to:
In your model, what does it mean to prove that U is some linear affine function? If I prove that my probability p is 1/2 and that U=7.5, have I proven that U is the constant function 7.5? If there is only one value of p, the utility function is not defined, unless I successfully carve the universe in such a way as to let me replace the action with various things and see what happens (or, assuming linearity, replace the probability with enough linearly independent things, in this case 2, to define the function).
In the matching pennies game, U() would be proven to be ∫ A()(p)⋅min(p,1−p) dp. A could maximize this by returning ε when p isn’t 1/2, and 1−∫ε dp (where ε is so small that this is still infinitesimally close to 1) when p is 1/2.
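As a sanity check on that expression, here is a small numerical sketch (my own discretization, with a finite stand-in for the infinitesimal ε; illustrative only):

```python
import numpy as np

eps = 1e-6                                # finite stand-in for the infinitesimal
grid = np.linspace(0.0, 1.0, 100_001)     # discretized values of p
dp = grid[1] - grid[0]

payoff = np.minimum(grid, 1.0 - grid)     # min(p, 1 - p)

# A() puts density eps everywhere and the remaining mass as an atom at p = 1/2.
density_mass = eps * 1.0                  # integral of the eps density over [0, 1]
atom_mass = 1.0 - density_mass

expected_utility = atom_mass * 0.5 + eps * np.sum(payoff) * dp
print(expected_utility)                   # ~0.5, i.e. infinitesimally below 1/2
```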
The linearity is always in the function between ε-adjoined open affine spaces. Whether the utilities also end up linear in the closed affine space (i.e., nobody cares about our reasoning process) is for the object-level information-gathering process to deduce from the environment.
You never prove that you will with certainty decide p=1/2. You always leave a so-you’re-saying-there’s-a chance of exploration, which produces a grain of uncertainty. To execute the action, you inspect the ceremonial Boltzmann Bit, which is implemented by being constantly set to “discard the ε” but which you treat as having an ε chance of flipping.
The self-modification module could note that inspecting that bit is a no-op, see that removing it would make the counterfactual reasoning module crash, and leave up the Chesterton fence.
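A toy rendering of that mechanism, just to fix ideas (the names and structure are mine; a sketch, not anyone's actual implementation):

```python
def boltzmann_bit() -> bool:
    """Ceremonial: physically a no-op, constantly set to 'discard the ε'."""
    return False

def execute(planned_action, exploration_action):
    # The counterfactual reasoner treats this branch as having an ε chance of
    # firing, so it never proves with certainty which action gets executed.
    if boltzmann_bit():
        return exploration_action   # the exploration branch that never actually fires
    return planned_action
```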
But how do you avoid proving with certainty that p=1/2?
Since your proposal does not say what to do if the agent finds inconsistent proofs that the linear function is two different things, I will assume for the following that if it finds multiple different proofs, it defaults to taking 5.
Here is another example:
You are in a 5 and 10 problem. You have a twin that is also in a 5 and 10 problem. You have exactly the same source code. There is a consistency checker, and if you and your twin do different things, you both get 0 utility.
You can prove that you and your twin do the same thing. Thus you can prove that the function is 5+5p. You can also prove that your twin takes 5 by Löb’s theorem. (You can also prove that you take 5 by Löb’s theorem, but you ignore that proof, since “there is always a chance”.) Thus, you can prove that the function is 5−5p. Your system doesn’t know what to do with two functions, so it defaults to 5. (If it is provable that you both take 5, you both take 5, completing the proof by Löb.)
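Spelling out where the two functions come from (writing p for your probability of taking 10, which is how I read the earlier comments):

\[
\text{twin provably copies you:}\quad U(p) = 5(1-p) + 10p = 5 + 5p,
\qquad
\text{twin provably takes 5:}\quad U(p) = 5(1-p) + 0\cdot p = 5 - 5p.
\]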
I am doing the same thing as before, but because I put it outside of the agent, it does not get flagged by the “there is always a chance” module. This is meant to illustrate that your proposal takes advantage of a separation between the agent and the environment that was snuck in, and that could be drawn incorrectly.
Two possible fixes:
1) You could say that the agent, instead of taking 5 when it finds an inconsistency, takes some action that exhibits the inconsistency (something to which the two functions assign different values). This is very similar to the chicken rule, and if you add something like this, you don’t really need the rest of your system. If you take an agent that, whenever it proves it does something, does something else, this agent will prove (given enough time) that if it takes 5 it gets 5, and if it takes 10 it gets 10. (A toy sketch of this agent follows after fix 2.)
2) I had one proof system, and just ignored the proofs I found that said I did a thing. I could instead give the agent a special proof system that is incapable of proving what it does, but how do you do that? The chicken rule seems like the place to start.
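Here is the toy sketch of the agent from fix 1 referenced above (the "prover" is just a stand-in set of provable sentences, not a real proof search):

```python
def chicken_agent(actions, provable_sentences, provable_payoffs):
    # Chicken rule: if we can prove we take a particular action, take another one.
    for a in actions:
        if f"agent takes {a}" in provable_sentences:
            return next(b for b in actions if b != a)
    # With no proof of our own action, the counterfactuals "if I take a, I get u"
    # stay usable; pick the action with the best provable payoff.
    return max(actions, key=lambda a: provable_payoffs.get(a, float("-inf")))

# 5-and-10: if the prover never proves "agent takes 5" or "agent takes 10",
# the agent sees payoff 5 for taking 5 and 10 for taking 10, and takes 10.
print(chicken_agent([5, 10], provable_sentences=set(),
                    provable_payoffs={5: 5, 10: 10}))   # -> 10
```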
One problem with the chicken rule is that it was developed in a system that was deductively closed, so you can’t prove something that passes through a proof of P without proving P. If you violate this by having a random theorem prover, you might have a system that fails to prove “I take 5” but proves “I take 5 and 1+1=2” and uses this to complete the Löb loop.
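In miniature, the loophole looks like this (toy string matching standing in for a prover that is not deductively closed):

```python
# A chicken rule that checks only for the exact sentence "I take 5" misses a
# proof of the conjunction, which still entails it, letting the Löb loop close.
proved = {"I take 5 and 1+1=2"}           # what the random theorem prover happened to find
triggers_chicken_rule = "I take 5" in proved
print(triggers_chicken_rule)              # False, even though "I take 5" follows
                                          # from the proved sentence
```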
I can’t prove what I’m going to do and I can’t prove that I and the twin are going to do the same thing, because of the Boltzmann Bits in both of our decision-makers that might turn out different ways. But I can prove that we have a 1−2ε+2ε² chance of doing the same thing, and my expected utility is (1−ε)²⋅10 + ε²⋅5, rounding to 10 once it actually happens.
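Spelling out that arithmetic (each bit treated as flipping independently with chance ε, and a mismatch scoring 0):

\[
P(\text{same action}) = (1-\varepsilon)^2 + \varepsilon^2 = 1 - 2\varepsilon + 2\varepsilon^2,
\qquad
\mathbb{E}[U] = (1-\varepsilon)^2\cdot 10 \;+\; \varepsilon^2\cdot 5 \;+\; 2\varepsilon(1-\varepsilon)\cdot 0.
\]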
It sounds similar to the matrices in the post:
A solvable Newcomb-like problem