Whoops, mea culpa on that one! Deleted and changed to:
the main post there pointed out that seemingly anything can be trivially modeled as a “utility maximizer” (further discussion here), whereas only some intelligent agents can be described as “goal-directed” (as defined in this post), and the latter is a more useful concept for reasoning about AI safety.
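To make the first claim concrete, here is a minimal sketch of the usual trivial construction (the notation $\pi$, $h$, and $U$ is mine, not taken from either linked post): for any agent following policy $\pi$, define a utility function over complete histories

$$U(h) = \begin{cases} 1 & \text{if } h \text{ is a history the agent could produce by following } \pi, \\ 0 & \text{otherwise,} \end{cases}$$

and $\pi$ is then trivially optimal for $U$. Picked after the fact like this, “is a utility maximizer” places no constraint on behavior, which is why “goal-directed” does the real work in the safety argument.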