You could, in theory, train a Solomonoff Induction-based reinforcement learning agent to produce arbitrary finite sequences of actions (non-self-destructive ones, anyway) in response to specified sets of finite sense data, assuming you are allowed to program its reward function and give it fake memories dating from before it was born.
My way of putting much the same idea was:
Basically, Solomonoff Induction is a powerful learning mechanism, and with sufficient time and test cases you could configure an agent based on it to behave in an arbitrary way[*] in response to any finite sense-stream after its “birth”. You do this by giving it enough pre-birth training “memories”, which laboriously say “if you see this, do this; don’t do this or this or this” for every possible sequence of observations, up to some finite length limit.
I call this sort of thing universal action, and I think reinforcement learning systems are capable of it.
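To make the mechanism concrete, here is a minimal sketch in Python. It is a toy under loud assumptions: real Solomonoff Induction is uncomputable, so a plain lookup table of “pre-birth memories” stands in for the trained predictor, and every name in it (make_pre_birth_memories, UniversalActionAgent, the binary observation alphabet, the reward-only-for-the-remembered-action scheme) is my own illustrative invention, not anything from the original claim.

```python
# Toy stand-in for the idea above. Real Solomonoff Induction is
# uncomputable; here a lookup table of "pre-birth memories" plays the
# role of the trained predictor. All names are hypothetical.

from itertools import product

OBSERVATIONS = ["0", "1"]      # toy binary sense data
ACTIONS = ["left", "right"]    # toy action set
HISTORY_LIMIT = 3              # the finite length limit from the text


def make_pre_birth_memories(target_policy):
    """Enumerate every observation sequence up to HISTORY_LIMIT and
    record 'if you see this, do this' as a rewarded training memory."""
    memories = {}
    for n in range(1, HISTORY_LIMIT + 1):
        for history in product(OBSERVATIONS, repeat=n):
            memories[history] = target_policy(history)
    return memories


class UniversalActionAgent:
    """Agent whose post-'birth' behaviour is fixed entirely by its
    fake pre-birth memories: it reproduces the remembered rewarded
    action for the current observation history."""

    def __init__(self, memories):
        self.memories = memories
        self.history = ()

    def act(self, observation):
        # Keep only the most recent HISTORY_LIMIT observations.
        self.history = (self.history + (observation,))[-HISTORY_LIMIT:]
        # In this toy, reward was given only for the remembered action,
        # so the reward-maximising choice is a table lookup.
        return self.memories[self.history]


# Any arbitrary finite policy can be installed this way, e.g. one that
# acts on the parity of "1"s seen so far:
arbitrary = lambda h: "left" if h.count("1") % 2 else "right"
agent = UniversalActionAgent(make_pre_birth_memories(arbitrary))
print(agent.act("1"))  # -> "left"
print(agent.act("1"))  # -> "right" (history is now ("1", "1"))
```

The point of the sketch is that, up to the finite length limit, the pre-birth memories pin down the agent’s behaviour exactly; a real induction-based learner would also generalise beyond that limit, but within it the reward-maximising action is fixed by the remembered training cases.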