I think the relevant notion of “being an agent” is whether we have reason to believe a model generalizes like a consequentialist (i.e. its internal cognition considers possible actions, picks among them based on their expected consequences, and relies only minimally on the imitative prior). This is upstream of the most important failure modes as described by Roger Grosse here.
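To make that distinction concrete, here is a toy sketch (entirely my own illustration, with made-up names and a made-up one-step environment, not a description of any actual model): an imitative policy samples whatever the training prior says is typical, while a consequentialist policy enumerates actions and picks by predicted outcome.

```python
import random

# Toy illustration only; the environment and names are invented for this sketch.

def imitative_policy(state, prior):
    """Sample whatever the training distribution says is typical in this
    state, with no regard for consequences."""
    actions = list(prior[state])
    weights = [prior[state][a] for a in actions]
    return random.choices(actions, weights=weights)[0]

def consequentialist_policy(state, actions, transition, utility):
    """Enumerate possible actions, predict each one's consequence, and
    pick the action whose predicted outcome scores best."""
    return max(actions, key=lambda a: utility(transition(state, a)))

# Tiny example: state is a number, actions add to it, utility prefers 10.
prior = {0: {"+1": 0.9, "+5": 0.1}}      # imitation: "+1" is what's typical
transition = lambda s, a: s + int(a)     # predicted consequence of an action
utility = lambda s: -abs(10 - s)         # closer to 10 is better

print(imitative_policy(0, prior))                                     # usually "+1"
print(consequentialist_policy(0, ["+1", "+5"], transition, utility))  # "+5"
```

The point is just the shape of the computation: the second policy's output is determined by the transition model and utility, so it can depart arbitrarily far from the imitative prior, which is exactly what makes this kind of generalization the worrying one.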
I think Sora is still in the bottom left like LLMs, since it has only been trained to predict. Absent further argument or evidence, I would expect that it has mostly not learned to simulate consequentialist cognition, much as LLMs have not yet demonstrated this ability (e.g. they fail to win a chess game in an easy but OOD situation).