The authors argue that [… in addition to some other agents] the goal-seeking agent that gets one utiliton every time it satisfies a pre-specified goal and no utility otherwise [...], will all decide to build and use a delusion box.
They’re using the term “goal-seeking agent” in a perverse way. As EY explains in his third and fourth paragraphs, seeking a result defined in sensory-data terms is not the only, or even the usual, sense of “goal” that people would attach to the phrase “goal-seeking agent”. Nor is it a typical goal that a programmer would want an AI to seek.