If one’s interpretation of the ‘objective’ of the agent is full of piecewise statements and ad-hoc cases, then what exactly are we doing by describing it as maximizing an objective in the first place? You might as well describe a calculator by saying that it’s maximizing the probability of outputting the following: [write out the source code that leads to its outputs]. At some point the model breaks down, and the idea that it is following an objective becomes completely epiphenomenal to its actual operation. Saying that it is maximizing an objective doesn’t shed any more light on its internal operations than just spelling out exactly what its source code is.
I don’t feel like you’re really understanding what I’m trying to say here. I’m happy to chat with you about this more over video call or something if you’re interested.
Sure, we can talk about this over video. Check your Facebook messages.