Claim: The embodied system is still not necessarily an agent, and may in failure cases lack the agency one expects it to have. Any representation of what agency is needs to separate successful agency from the system that is claimed to have it.
Core reason: Agency is the property of pulling the future back in time; it's when a system selects actions by conditioning on the future. Agency is when any object, even one not structured like a traditional agent, takes the shape of the future before the future does and thereby steers the future.
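To make "selects actions by conditioning on the future" concrete, here is a toy sketch; the world model, goal, and thermostat numbers are my own illustrative stand-ins, not anything from the paper.

```python
# Toy illustration: a system that "conditions on the future" scores each
# candidate action by the future its world model predicts, then acts so as
# to steer toward a goal. All names/numbers here are hypothetical.

def future_conditioned_policy(state, actions, world_model, goal):
    """Pick the action whose predicted future lands closest to the goal."""
    def predicted_gap(action):
        predicted_future = world_model(state, action)  # imagined outcome
        return abs(predicted_future - goal)            # distance from goal
    return min(actions, key=predicted_gap)

# Example: a thermostat-like system steering temperature toward 21 degrees.
world_model = lambda temp, delta: temp + delta   # crude forward model
best = future_conditioned_policy(18.0, [-1.0, 0.0, 1.0, 2.0], world_model, 21.0)
print(best)  # -> 2.0: chosen because of the future it is predicted to produce
```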
How I came to believe this confidently: this paper, which you have probably seen but which I link as a PDF for reasons; anyone reading this who hasn't seen it, I'd very strongly encourage at least skimming it. If by chance you haven't already read it in detail, my recommended reading order, if you have 20 minutes and already understand SCMs, would be {1. intro} → {Appendix B} → {1.1 example, 1.2 other characterizations, 1.3 what do we consider} → skim/quick-index/first-pass {2. background, 3. algorithms, 3.1 MSCM, 3.2 labeled MCG} → read and ponder 3.3 & 3.4 and Algorithms 1 and 2, then skim the assumptions in 3.5 and read Algorithm 3. If you really want to get into it, you can then do several more passes to properly understand the algorithms.
This took me several days and multiple calls with friends, as I was new to SCMs. I'm abbreviating, so there isn't an easy gloss of what I'm referring to without reading the paper; since I can't summarize precisely, I'm choosing not to summarize at all. Hopefully this isn't new to @Max H, but on the off chance it is, this is my reply describing why I disagree.
Hadn't seen the paper, but I think I basically agree with it, and with your claim.
I was mainly saying something even weaker: the policy itself is just a function, so it can't be an agent. The thing that might or might not be an agent is an embodiment of the policy, produced by repeatedly executing it in the appropriate environment while hooked up to (real or simulated) I/O channels.
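In code, the distinction might look something like the following minimal sketch; the policy, ToyEnv, and run_embodied names are illustrative stand-ins, not anything from the thread or the paper. The policy is a pure function, while the candidate agent is the running loop that embodies it.

```python
# Sketch of policy-as-function vs. agent-as-embodied-execution.

def policy(observation):
    """A static mapping from observations to actions; on its own,
    this is just a mathematical object, not an agent."""
    return "move_right" if observation < 0 else "stay"

class ToyEnv:
    """Stand-in environment providing (simulated) I/O channels."""
    def __init__(self):
        self.position = -3
    def observe(self):
        return self.position
    def step(self, action):
        if action == "move_right":
            self.position += 1

def run_embodied(policy, env, steps=5):
    """Repeatedly execute the policy while hooked up to the environment.
    Only this running loop is even a candidate for being an agent."""
    for _ in range(steps):
        obs = env.observe()   # sensor input
        act = policy(obs)     # the function is merely evaluated here
        env.step(act)         # the action actually affects the world

run_embodied(policy, ToyEnv())
```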
Interesting distinction. An agent that is asleep isn’t an agent, by this usage.
By the way, are you Max H of the space rock ai thingy?
Also, I didn’t mean for this distinction to be particularly interesting—I am still slightly concerned that it is so pedantic / boring / obvious that I’m the only one who finds it worth distinguishing at all.
I’m literally just saying, a description of a function / mind / algorithm is a different kind of thing than the (possibly repeated) execution of that function / mind / algorithm on some substrate. If that sounds like a really deep or interesting point, I’m probably still being misunderstood.
Well, a sleeping person is still an embodied system, with running processes and sensors that can wake the agent up. And the agent, before falling asleep, might arrange things such that they are deliberately woken up in the future under certain circumstances (e.g. setting an alarm, arranging a guard to watch over them during their sleep).
The thing I'm saying is not an agent is more like a static description of a mind, e.g. the source code of an AGI isn't an agent until it is compiled and executed on some kind of substrate. I'm not a carbon (or silicon) chauvinist; I'm not picky about which substrate. But without some kind of embodiment and execution, you just have a mathematical description of a computation, the actual execution of which may or may not be computable or otherwise physically realizable within our universe.
Nope, different person!
Okay, perhaps "sleep" doesn't cut it. I was calling the unrun policy a sleeping AI, but "suspended" or "stopped" might be better words to generalize the unrun state of a system that would be agentic once you type python inference.py and hit enter on your command line.