Interesting distinction. An agent that is asleep isn’t an agent, by this usage.
By the way, are you Max H of the space rock AI thingy?
Also, I didn’t mean for this distinction to be particularly interesting—I am still slightly concerned that it is so pedantic / boring / obvious that I’m the only one who finds it worth distinguishing at all.
I’m literally just saying, a description of a function / mind / algorithm is a different kind of thing than the (possibly repeated) execution of that function / mind / algorithm on some substrate. If that sounds like a really deep or interesting point, I’m probably still being misunderstood.
Well, a sleeping person is still an embodied system, with running processes and sensors that can wake the agent up. And the agent, before falling asleep, might arrange things such that they are deliberately woken up in the future under certain circumstances (e.g. setting an alarm, arranging a guard to watch over them during their sleep).
The thing I’m saying is not an agent is more like a static description of a mind: e.g., the source code of an AGI isn’t an agent until it is compiled and executed on some kind of substrate. I’m not a carbon (or silicon) chauvinist; I’m not picky about which substrate. But without some kind of embodiment and execution, you just have a mathematical description of a computation, the actual execution of which may or may not be computable or otherwise physically realizable within our universe.
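For the code-shaped version of this point, here's a minimal sketch (all names here, like `policy` and the example observations, are hypothetical stand-ins, not anyone's actual AGI): the function definition below plays the role of the static description, and only the loop at the bottom, once actually run, is anything like an agent.

```python
# A minimal sketch of the description/execution distinction.
# `policy` is a stand-in for an arbitrary mind/algorithm: as a
# mere definition it just sits there; nothing is computed until
# something executes it on a substrate.

def policy(observation: int) -> str:
    """A static description of behavior: maps observations to actions."""
    return "act" if observation > 0 else "wait"

# Up to this point, `policy` is only a description, like unrun
# source code sitting on disk. The loop below is the "embodiment":
# a process that repeatedly executes the description.

if __name__ == "__main__":
    for observation in [0, 3, -1, 7]:   # stand-in sensor readings
        action = policy(observation)    # execution: the "agent" acts
        print(observation, "->", action)
```

Deleting the loop (or never running the file) leaves the description fully intact while removing the agent, which is exactly the distinction at issue.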
Nope, different person!
Okay, perhaps sleep doesn’t cut it. I was calling the unrun policy a sleeping AI, but perhaps “suspended” or “stopped” might be better words to generalize the unrun state of a system that would be agentic when you type `python inference.py` and hit enter on your command line.