My interpretation of Eliezer’s position is something like: “If you see an intelligent agent that wasn’t specifically optimized away from goal-directedness, then it will be goal-directed”. I think I could restate arguments for that, though I remember reading about a bunch of stuff related to this on Arbital, so maybe one of the writeups there gives more background.