Thanks! It’s a long comment, so I’ll comment on the convergence, morphologies, and the rest later; here is just a top-level comment on shards. (I’ve read about half of the doc.)
My impression is that they are basically the same thing I called “agenty subparts” in Multi-agent predictive minds and AI alignment (and what Friston calls “fixed priors”). Here “agenty” means roughly that a description from the intentional stance is a good description, in an information-theoretic sense. (This naturally implies fluid boundaries and continuity.)
Where I would disagree, or at least find your terminology unclear, is where you refer to this as an example of inner alignment failure. Putting “agenty subparts” into the predictive processing machinery is not a failure, but a bandwidth-feasible way for evolution to communicate valuable states to the PP engine.
Also: I think you are possibly underestimating how much evolution builds on top of existing, evolutionarily older control circuitry. E.g. evolution does not need to “point to a concept of sex in the PP world model”—evolution was able to make animals seek reproduction long before it invented complex brains. This simplifies the task: what evolution actually had to do was connect the “PP agenty parts” to parts of the existing control machinery, which is often based on “body states”. Technically, the older control systems often use chemicals in the blood, or quite old parts of the brain.
I guess I’ll respond once you’ve made your full comment. In the meantime, do you mind if I copy your comment here to the shard theory doc?