I wouldn’t call this instrumental convergence (of goals); it’s more like a Bayesian bowl that all intelligent agents fall towards. I also think the instrumental convergence of goals is stronger/more certain than the convergence to approximately Bayesian reasoners.
> I wouldn’t call this instrumental convergence (of goals); it’s more like a Bayesian bowl that all intelligent agents fall towards.
I suppose it’s true that a distinction is usually drawn between goals and models, and that instrumental convergence is usually phrased as a point about goals rather than models.
> I also think the instrumental convergence of goals is stronger/more certain than the convergence to approximately Bayesian reasoners.
I get the impression that you’re picturing something narrower than I intended with my comment? I don’t think my comment is limited to Bayesian rationality; we could also consider non-Bayesian reasoning approaches such as logic or frequentism. Even CNNs or transformers would fall under what I was talking about, as would Aumann’s agreement theorem and lots of other things.
> I get the impression that you’re picturing something narrower than I intended with my comment? I don’t think my comment is limited to Bayesian rationality; we could also consider non-Bayesian reasoning approaches such as logic or frequentism. Even CNNs or transformers would fall under what I was talking about, as would Aumann’s agreement theorem and lots of other things.
I agree those all count, but they all (mostly) have Bayesian interpretations, which is what I was referring to.
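To spell out the kind of “Bayesian interpretation” I mean, here’s a toy sketch (my own illustration, with an assumed coin-flip setup, not anything canonical): the frequentist maximum-likelihood estimate of a coin’s bias is exactly the posterior mode a Bayesian gets under a flat Beta(1, 1) prior, so that frequentist procedure can be read as a special case of Bayesian updating. (Logic is similarly often treated, e.g. by Jaynes, as the limiting case where probabilities are pinned to 0 or 1.)

```python
import numpy as np

rng = np.random.default_rng(0)
flips = rng.binomial(1, 0.7, size=1000)  # coin with unknown bias (here 0.7)
heads, tails = flips.sum(), len(flips) - flips.sum()

# Frequentist: maximum-likelihood estimate of the bias
mle = heads / (heads + tails)

# Bayesian: a flat Beta(1, 1) prior gives a Beta(1 + heads, 1 + tails)
# posterior, whose mode (a - 1) / (a + b - 2) reduces to heads / (heads + tails)
a, b = 1 + heads, 1 + tails
map_estimate = (a - 1) / (a + b - 2)

print(mle, map_estimate)  # identical: the MLE is the MAP under a flat prior
```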