But even setting all of that aside, I see this post as compatible with mine. My goal was for people to ground their AI risk arguments in the concept of goal-directedness rather than utility maximization, and this post does exactly that.