the discussion was whether those agents will be broadly goal-directed at all, a weaker condition than being a utility maximizer
Uh, that chapter was claiming that “being a utility maximizer” is vacuous, and therefore “goal-directed” is a stronger condition than “being a utility maximizer”.
Whoops, mea culpa on that one! Deleted and changed to:
the main post there pointed out that seemingly anything can be trivially modeled as being a “utility maximizer” (further discussion here), whereas only some intelligent agents can be described as being “goal-directed” (as defined in this post), and the latter is a more useful concept for reasoning about AI safety.
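To make the triviality point concrete, here is a minimal sketch of the standard construction (illustrative only; the names and structure are mine, not from the linked post): given any fixed policy, define a utility function that assigns 1 to exactly the actions that policy takes and 0 to everything else, and the policy then "maximizes utility" by definition, whether or not it is goal-directed in any interesting sense.

```python
# Minimal sketch (illustrative, not from the linked post): any fixed policy
# can be rationalized as maximizing *some* utility function, by defining a
# utility that rewards exactly the actions the policy already takes.

def trivial_utility(policy):
    """Return a utility function u(state, action) that the given policy maximizes."""
    def u(state, action):
        return 1.0 if action == policy(state) else 0.0
    return u

# Example: a "rock-like" agent that always outputs the same action.
rock_policy = lambda state: "do_nothing"
u = trivial_utility(rock_policy)

# The rock is now a "utility maximizer": in every state, its chosen action
# attains the maximum of u over the available actions.
actions = ["do_nothing", "take_over_world"]
state = "any_state"
best = max(actions, key=lambda a: u(state, a))
assert best == rock_policy(state)
```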