This sounds similar in effect to what philosophy of mind calls “embodied cognition”, but it takes a more abstract tack. Is there a recognized background link between the two ideas already? Is that a useful idea, regardless of whether it already exists, or am I off track?
I’d draw more of a connection between embedded agency and bounded optimality or the philosophical superproject of “naturalizing” various concepts (e.g., naturalized epistemology).
Our old name for embedded agency was “naturalized agency”; we switched because we kept finding that CS people wanted to know what we meant by “naturalized”, and we’d always say “embedded”, so...
“Embodiment” is less relevant because it’s about, well, bodies. Embedded agency just says that the agent is embedded in its environment in some fashion; it doesn’t say that the agent has a robot body, in spite of the cute pictures of robots Abram drew above. An AI system with no “body” it can directly manipulate or sense will still be physically implemented on computing hardware, and that on its own can raise all the issues above.
In my view, embodied cognition says that the way in which an agent is embodied is important to its cognition, whereas embedded agency says that the fact that an agent is embodied is important to its cognition.
(This is probably a repetition, but it’s shorter and more explicit, which could be useful.)