First: That notation seems helpful. Fairness of the environment isn’t present by default; it still needs to be assumed even if the environment is purely action-determined. Consider an agent in the environment that uses a hardwired predictor of what the argmax agent would do. That predictor is just a piece of the environment, and feeding a different sequence of actions into the environment gets a different score, so the environment is purely action-determined, but it’s still unfair in the sense that the expected utility of feeding in action $x$ drops sharply once you condition on the argmax agent selecting action $x$. The third condition was necessary to carry out this step: $\mathbb{E}_n\!\left(U(a^*_{1:n-1}, \underline{a^?}, a^?_{2:\infty})\right) = \mathbb{E}_n\!\left(U(a^*_{1:n-1}, a^?_{1:\infty})\right)$. The intuitive interpretation of the third condition is that, if you know that policy B selects action 4, you can step from “action 4 is taken” to “policy B takes the actions it takes”; and if you have a policy where you don’t know what action it takes (so the third condition is violated), then “policy B does its thing” may have a higher expected utility than any particular action being taken, even in a fair environment that only cares about action sequences, as the hamster dance example shows.
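To make that sense of “unfair” concrete, here is a rough formalization of the hardwired-predictor example (my paraphrase in the notation above, with the indices chosen for illustration, not a definition from the post):

```latex
% Sketch, not the post's definition: the environment is purely action-determined
% (U depends only on the action sequence fed into it), yet conditioning on the
% argmax agent selecting x changes the value of feeding in x:
\mathbb{E}_n\!\left( U(a^*_{1:n-1},\, x,\, a^?_{2:\infty}) \right)
\;\gg\;
\mathbb{E}_n\!\left( U(a^*_{1:n-1},\, x,\, a^?_{2:\infty}) \,\middle|\, a^*_n = x \right)
```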
Second: I think you misunderstood what I was claiming. I wasn’t claiming that logical inductors attain the conditional future-trust property, even in the limit, for all sentences or all true sentences. What I was claiming was: the fact that $\phi$ is provable or disprovable in the future (in this case, $\phi$ is $``a^*_n = x"$) makes the conditional future-trust property hold (I’m fairly sure), and for statements where there isn’t guaranteed feedback, the conditional future-trust property may fail. The double-expectation property that you state does not work to carry the proof through, because the proof (from the perspective of the first agent) takes $\phi$ as an assumption, so the “conditional on $\phi$” part has to be outside of the future expectation when you go back to what the first agent believes.
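For reference, the distinction I’m drawing is roughly the following (my own formulation, with $m > n$ a later stage and $\phi$ the sentence $``a^*_n = x"$; treat it as a sketch of the two properties rather than the post’s exact statements):

```latex
% Conditional future-trust (what the proof needs): the conditioning on phi sits
% outside the future expectation,
\mathbb{E}_n\!\left( \mathbb{E}_m(U) \,\middle|\, \phi \right) \;\approx\; \mathbb{E}_n\!\left( U \mid \phi \right)
% Double-expectation property (what was stated): the conditioning sits inside,
\mathbb{E}_n\!\left( \mathbb{E}_m(U \mid \phi) \right) \;\approx\; \mathbb{E}_n\!\left( U \mid \phi \right)
% The proof takes phi as an assumption from the first agent's perspective, so only
% the first form carries it through; guaranteed resolution of phi by day m is what
% makes that form plausible.
```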
Third: the sense I meant for “agent is able to reason about this computation” is that the computation is not extremely long, so logical inductor traders can bet on it.
Ok, understood on the second assumption. $U$ is not a function to $[0,1]$, but a function to the set of $[0,1]$-valued random variables, and your assumption is that this random variable is uncorrelated with certain claims about the outputs of certain policies. The intuitive explanation of the third condition made sense; my complaint was that, even with the intended interpretation at hand, the formal statement made no sense to me.
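Spelling out that reading of the type of $U$ (my notation; $\mathcal{A}$ for the action set and $\Omega$ for the sample space are not from the post):

```latex
% U sends an infinite action sequence to a [0,1]-valued random variable,
U : \mathcal{A}^{\omega} \longrightarrow \left( \Omega \to [0,1] \right)
% and the assumption is that, under E_n, this random variable is uncorrelated
% with claims of the form "policy B outputs action x".
```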
I’m pretty sure you’re assuming that $\phi$ is resolved on day $n$, not that it is resolved eventually.
Searching over the set of all Turing machines won’t halt in a reasonably short amount of time, and in fact won’t halt ever, since the set of all Turing machines is non-compact. So I don’t see what you mean when you say that the computation is not extremely long.
Ah, the formal statement was something like “if the policy A isn’t the argmax policy, the successor policy B must be in the policy space of the future argmax, and the action selected by policy A is computed so the relevant equality holds”
Yeah, I am assuming fast feedback, i.e. that it is resolved on day $n$.
What I meant was that the computation isn’t extremely long in the sense of description length, not in the sense of computation time. Also, we aren’t doing policy search over the set of all Turing machines; we’re doing policy search over some smaller set of policies that can be guaranteed to halt in a reasonable time (and more can be added as time goes on). A sketch of what I mean is below.
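A minimal sketch of the kind of search I have in mind (purely illustrative: the `Policy` type, the time bound, and the scoring function are stand-ins I’m introducing here, not anything specified in the post):

```python
# Sketch: argmax over a small, growing class of policies that are guaranteed to
# halt quickly, rather than a search over all Turing machines (which never halts).
from typing import Callable, List, Sequence

Policy = Callable[[Sequence[int]], int]   # observation history -> action

def argmax_policy(policy_class: List[Policy],
                  score: Callable[[Policy], float]) -> Policy:
    """Exhaustive search; this terminates because the class is finite and each
    policy in it is assumed to halt within a fixed per-step time budget."""
    return max(policy_class, key=score)

def policy_class_at(n: int, safe_policies: List[Policy]) -> List[Policy]:
    """The class can grow over time: day n admits the first n policies from some
    enumeration of provably fast-halting policies."""
    return safe_policies[:n]
```

The point of the contrast is only that the outer max is over a finite, bounded-runtime class, so the search halts while the description length of the class stays manageable.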
Also, I’m less confident in conditional future-trust for all conditionals than I used to be; I’ll try to crystallize where I think it goes wrong.
Wouldn’t the set of all action sequences have lower description length than some large finite set of policies? There’s also the potential problem that all of the policies in the large finite set you’re searching over could be quite far from optimal.