I do research on empirical agency, and it still surprises me how little the AI-safety community touches on this central part of agency: you can't have agents without this closed loop.
Thanks for the comment. In my view it's partly a result of the AI-safety community being small and not very good at absorbing knowledge from elsewhere. My guess is this is in part a quirk due to founder effects, and in part downstream of the incentive structures on platforms like LessWrong.
But please do share this stuff.
I’ve been speculating a bit (mostly to myself) about the possibility that “simulators” are already a type of organism
...
What is your opinion on this idea of "loosening up" our definition of agents? I spoke to Max Tegmark a few weeks ago, and my position is that we might be thinking about organisms from a time-chauvinist position, where we require the loop to be closed quickly (e.g. within about a second for most biological organisms).
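To make the time-chauvinism point concrete, here is a minimal toy sketch of a closed perception-action loop in which the cycle period is just a parameter. The thermostat-style agent and environment are purely illustrative inventions, not anything from this thread; the point is only that nothing in the loop's structure fixes how fast it has to close.

```python
import time

class Environment:
    """Toy environment: a temperature that drifts and responds to the agent's action."""
    def __init__(self):
        self.temperature = 15.0

    def observe(self):
        return self.temperature

    def step(self, heating):
        self.temperature += 1.0 if heating else -0.5
        return self.temperature


class Agent:
    """Toy agent: acts to keep the observed temperature near a set point."""
    def act(self, observed_temperature):
        return observed_temperature < 20.0  # heat iff too cold


def run_closed_loop(agent, environment, cycle_period_s, n_cycles):
    """The closed perception-action loop itself. cycle_period_s is a free
    parameter: the same structure works at ~1 s (animal-like timescales)
    or at hours or days, which is the time-chauvinism point."""
    observation = environment.observe()
    for _ in range(n_cycles):
        action = agent.act(observation)         # perception -> action
        observation = environment.step(action)  # action -> new perception
        time.sleep(cycle_period_s)
    return observation


if __name__ == "__main__":
    print(run_closed_loop(Agent(), Environment(), cycle_period_s=0.01, n_cycles=20))
```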
I think we don't have exact analogues of LLMs among existing systems, so there is a question of where it's better to extend the boundaries of existing concepts and where to create new ones.
I agree we are much more likely to take the 'intentional stance' toward processes running on timescales somewhat comparable to our own.