The issue with being informal is that it’s hard to tell whether you are right. You use words like “motivations” without defining what you mean, and this makes your statements vague enough that it’s not clear whether or how they are in tension with other claims.
It seems worth pointing out: the informality is in the hypothesis, which comprises a set of somewhat illegible intuitions and theories I use to reason about generalization. However, the prediction itself is what needs to be graded in order to see whether I was right. I made predictions roughly like "the policy tends to go to the top-right 5x5, and searches for cheese once there, because that's where the cheese-seeking computations were more strongly historically reinforced" and "the policy sometimes pursues cheese and sometimes navigates to the top-right 5x5 corner." These predictions are (informally) gradable, even though the underlying intuitions are informal.
As it pertains to shard theory more broadly, though, I agree that more precision is needed. Increasing that precision and formalism is why I proposed and executed the project underpinning "Understanding and controlling a maze-solving policy network": I wanted to understand more about realistic motivational circuitry and model internals as they arise in actual trained networks. I think the last few months have given me headway toward a more mechanistic definition of a "shard-based agent."