A third point: Instrumental convergence is precisely the thing that makes general intelligence possible.
That is, if there were no sets of behaviors or cognitions that were broadly useful for achieving goals, then any intelligence would have to be entirely specialized to a single goal. It is precisely instrumental convergence that allows broader intelligence.
Corollary: The way capabilities research progresses is through coming up with implementations of instrumentally convergent cognition.
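A toy illustration of the point above (my own sketch, not from the thread): give several agents unrelated terminal goals, and a single instrumental behavior — here, hypothetically, "charge the battery first" — helps with nearly all of them, because it extends what the agent can do regardless of which goal it holds.

```python
# Toy sketch (mine): one instrumental behavior serves many terminal goals.
# Agents start at the same position with the same battery, but each has a
# different target. Charging first extends range for every goal at once.

def moves_needed(start, target):
    """Steps required to walk from start to target on a line."""
    return abs(target - start)

def can_achieve(start, target, battery, charged):
    """An agent reaches its target only if its battery covers the distance."""
    capacity = battery * (3 if charged else 1)  # charging triples the range
    return moves_needed(start, target) <= capacity

START, BATTERY = 0, 2
goals = [-5, -3, 1, 4, 6]  # five agents, five unrelated terminal goals

without_charging = sum(can_achieve(START, g, BATTERY, charged=False) for g in goals)
with_charging = sum(can_achieve(START, g, BATTERY, charged=True) for g in goals)

print(without_charging, with_charging)  # → 1 5: charging helps almost every goal
```

The shared subgoal is useful without reference to any particular terminal goal, which is the sense in which convergent instrumental behavior is what a *general* capability looks like.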
I wouldn’t call this instrumental convergence (of goals); it’s more like the Bayesian bowl that all intelligent agents fall towards. I also think the instrumental convergence of goals is stronger/more certain than the convergence to approximately Bayesian reasoners.
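The "bowl" intuition can be made concrete with the classic Dutch book argument (my example, not from the thread): an agent whose credences violate the probability axioms will accept a set of bets, each of which it considers fair, that together guarantee it a loss. Pressure from that kind of exploitation is one reason to expect agents to slide toward approximately Bayesian coherence.

```python
# Hedged sketch (mine): a Dutch book against incoherent credences.
# The agent prices a bet on event A at its credence p(A): it pays p(A)
# for a ticket worth 1 if A occurs. If p(A) + p(not A) > 1, a bookie
# sells it both tickets and profits no matter which way A turns out.

def dutch_book_profit(p_A, p_not_A):
    """Bookie's guaranteed profit from selling both tickets."""
    stake = p_A + p_not_A   # the agent pays this up front
    payout = 1.0            # exactly one ticket pays out, whatever happens
    return stake - payout   # positive whenever the credences are incoherent

print(round(dutch_book_profit(0.7, 0.5), 2))  # → 0.2: sure profit off an incoherent agent
print(dutch_book_profit(0.6, 0.4))            # coherent credences leave no sure profit
```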
> I don’t call this instrumental convergence (of goals), more like the Bayesian bowl all intelligent agents fall towards.
I suppose it’s true that a distinction is usually made between goals and models, and that instrumental convergence is usually phrased as a point about goals rather than models.
> I also think the instrumental convergence of goals is stronger/more certain than the convergence to approximately Bayesian reasoners.
I get the impression that you’re picturing something narrower in my comment than I intended? I don’t think my comment is limited to Bayesian rationality; we could also consider non-Bayesian reasoning approaches like logic or frequentism or similar. Even CNNs or transformers would fall under what I was talking about. Or Aumann’s agreement theorem, or lots of other things.
> I get the impression that you’re picturing something narrower in my comment than I intended? I don’t think my comment is limited to Bayesian rationality; we could also consider non-Bayesian reasoning approaches like logic or frequentism or similar. Even CNNs or transformers would fall under what I was talking about. Or Aumann’s agreement theorem, or lots of other things.
I agree those all count, but those all (mostly) have Bayesian interpretations which is what I was referring to.
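One standard way to cash out "Bayesian interpretations" of ordinary training methods (my gloss, toy example mine): minimizing cross-entropy plus an L2 penalty is exactly MAP estimation under a Gaussian prior on the weights. A tiny logistic regression makes the correspondence visible.

```python
# Sketch (mine): L2-regularized logistic regression as MAP inference.
# Minimizing  NLL(w) + (lam/2)*w^2  is the same as maximizing the posterior
# p(w | data) ∝ p(data | w) * N(w; 0, 1/lam) -- a Bayesian reading of an
# ordinary frequentist / neural-network-style training objective.
import math

xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]
lam = 0.1  # L2 strength, i.e. the precision of the Gaussian prior on w

def neg_log_posterior(w):
    nll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-w * x))  # sigmoid likelihood
        nll -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return nll + 0.5 * lam * w * w  # negative log Gaussian prior (up to a constant)

# Gradient descent to the MAP estimate of w (numerical gradient for brevity).
w = 0.0
for _ in range(500):
    eps = 1e-6
    grad = (neg_log_posterior(w + eps) - neg_log_posterior(w - eps)) / (2 * eps)
    w -= 0.1 * grad

print(round(w, 3))  # a single positive weight separates the two classes
```

Nothing in the training loop mentions priors or posteriors, which is the point: the Bayesian interpretation sits on top of the standard objective rather than replacing it.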