I think a notion of understanding individual actions requires breaking the task down into steps, each of which aims to accomplish something specific.
(Including potentially judging decompositions that AIs come up with.)
So, in the maze case, you’re probably fine just judging where the path ends up (and its speed/cost), given that we don’t care about the particular choices and there very likely aren’t problematic side effects unless the AI is very, very superintelligent.
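To make the outcome-only judging concrete, here’s a minimal sketch in Python. The maze representation (a set of open grid cells), the function name `judge_outcome`, and the cost budget are all illustrative assumptions of mine, not anything from the original discussion:

```python
from typing import List, Set, Tuple

Cell = Tuple[int, int]

def judge_outcome(
    open_cells: Set[Cell],   # grid cells the agent may occupy (assumed representation)
    start: Cell,
    goal: Cell,
    path: List[Cell],
    cost_budget: int,
) -> bool:
    """Approve a maze solution purely by its outcome: the path is legal,
    ends at the goal, and stays within a speed/cost budget. We never ask
    why the agent made any particular turn."""
    if not path or path[0] != start or path[-1] != goal:
        return False
    if any(cell not in open_cells for cell in path):
        return False
    # Every step must move to an orthogonally adjacent cell.
    for a, b in zip(path, path[1:]):
        if abs(a[0] - b[0]) + abs(a[1] - b[1]) != 1:
            return False
    # "Speed/cost of the path": here, just its length in steps.
    return len(path) - 1 <= cost_budget
```

The contrast is with step-level oversight, where you’d additionally need each move (or each sub-goal in a decomposition) to be individually legible and justified.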