It seems the approaches we’re using are similar: both start from an observation/action history together with posited falsifiable laws, the agent’s source code is not known a priori, and the agent compares different policies.
Learning “my source code is A” is quite similar to learning “Omega predicts my action is equal to A()”, so these would lead to similar results.
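To make that concrete, here’s a toy sketch (the worlds and the stand-in interpreter are invented for illustration) in which the two updates select exactly the same set of worlds, assuming Omega is a perfect predictor:

```python
# Toy world model: a world is a pair (my_source_code, omegas_prediction).
# Assume Omega predicts perfectly, so the prediction always equals
# whatever the source code outputs.

def run(code: str) -> str:
    # Stand-in interpreter mapping a source code to the action it outputs.
    return {"A": "one-box", "B": "two-box"}[code]

worlds = [(code, run(code)) for code in ("A", "B")]

# Update on "my source code is A":
after_source = [w for w in worlds if w[0] == "A"]

# Update on "Omega predicts my action equals A()":
after_omega = [w for w in worlds if w[1] == run("A")]

assert after_source == after_omega == [("A", "one-box")]
```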
Policy-dependent source code, then, corresponds to Omega making a different prediction depending on the agent’s intended policy: when comparing policies, the agent has to imagine Omega predicting differently for each candidate, just as it would imagine learning a different source code for each under policy-dependent source code.
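Here’s a toy Python sketch of this “Omega predicts the intended policy” reading in Newcomb’s problem (the payoff numbers are just the standard illustration, not anyone’s formalism):

```python
# Toy Newcomb's problem where, while evaluating a policy, the agent
# imagines Omega having predicted that very policy's action.

POLICIES = {
    "one-box": lambda: "one-box",
    "two-box": lambda: "two-box",
}

def payoff(action: str, prediction: str) -> int:
    # Standard payoffs: the opaque box holds $1M iff Omega predicted
    # one-boxing; the transparent box always holds $1K.
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000 if action == "two-box" else 0
    return opaque + transparent

def evaluate(policy_name: str) -> int:
    # Comparing policies means imagining Omega predicting differently
    # for each candidate policy (the policy-dependent reading).
    action = POLICIES[policy_name]()
    prediction = action  # the imagined prediction tracks the policy
    return payoff(action, prediction)

best = max(POLICIES, key=evaluate)
print(best, evaluate(best))  # one-box 1000000
```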
Well, in quasi-Bayesianism, for each policy you have to consider the worst-case environment in your belief set, and that worst case depends on the policy. I guess it is analogous in that sense.
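Concretely, the rule looks something like this toy sketch (the policies, environments, and utility numbers are made up for illustration):

```python
# Maximin over a belief set: for each policy, take the worst-case
# environment in the belief set, then pick the policy whose worst
# case is best.

# utility[policy][environment]; a hypothetical belief set of two envs.
UTILITY = {
    "pi_1": {"env_a": 5.0, "env_b": 1.0},
    "pi_2": {"env_a": 3.0, "env_b": 2.0},
}

def worst_case(policy: str) -> float:
    # The minimizing environment generally differs per policy,
    # which is the policy-dependence noted above.
    return min(UTILITY[policy].values())

best_policy = max(UTILITY, key=worst_case)
print(best_policy, worst_case(best_policy))  # pi_2 2.0
```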