It still seems like this is very much affected by the measure you assign to different Game of Life universes, and that the measure strongly depends on f.
Suppose we want to set f to control the agent’s behavior, so that whenever it sees sensory data s, it takes the silly action a(s), where a is a short function. To work this way, f will map Game of Life states in which the agent has seen s and should take action a(s) to binary strings of greater measure than the strings assigned to states in which the agent has seen s but should take some other action. I think this is almost always possible because of the agent’s partial information about the world: for any s, there are nearly always infinitely many world states consistent with s in which a(s) is a good idea. Such an f has a compact description (not much longer than that of a), and it forces the agent’s behavior to equal a(s) (except in some unrealistic cases where the agent has very good information about the world).
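To make the claim concrete (the notation below is mine, not from the comment): write μ_f for the prior that f induces on Game of Life histories, and f_a for the adversarially chosen map. The claim is roughly

\[
K(f_a) \;\le\; K(a) + O(1)
\qquad\text{and}\qquad
\mu_{f_a}\bigl(\text{next action} = a(s) \,\big|\, \text{observations} = s\bigr) \;\approx\; 1 \quad\text{for all } s,
\]

i.e. a description overhead on the order of a’s length buys near-total control over the conditional distribution the agent reasons with.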
What you’re saying can be rephrased as follows. The prior probability measure on the space of (possibly rule-violating) Game of Life histories depends on f, since it is the f-image of the Solomonoff measure. You are right. However, the dependence is only as strong as the dependence of the Solomonoff measure itself on the choice of universal Turing machine.
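Spelling out “f-image” (again, my notation): if M_U is the Solomonoff measure on binary strings defined relative to a universal machine U, the induced prior μ_f on histories is the pushforward of M_U along f,

\[
\mu_f(A) \;=\; M_U\bigl(f^{-1}(A)\bigr) \;=\; M_U\bigl(\{x : f(x) \in A\}\bigr)
\quad\text{for a set of histories } A,
\]

so the dependence on f enters in exactly the same place that the dependence on U does.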
In other words, the complexity of the f you need to make G take a silly action is about the same as the complexity of the universal Turing machine you need to make G take the same action.
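The comparison rests on the standard invariance theorem, stated here for reference: for any two universal prefix machines U and V there is a constant c_{U,V} such that

\[
|K_U(x) - K_V(x)| \;\le\; c_{U,V}
\qquad\text{and}\qquad
2^{-c_{U,V}}\, M_V(x) \;\le\; M_U(x) \;\le\; 2^{c_{U,V}}\, M_V(x)
\quad\text{for all } x.
\]

Read through this lens (my gloss, not a claim from the thread): an f rigged to favor the silly action a costs roughly K(a) extra bits of description, which is about the same price one would pay to build that bias into the choice of universal Turing machine instead.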