Re: I contend that not all utility functions will lead to the “drives” described by Omohundro.
Well, of course they won’t. The idea is that “the drives” are what you get unless you code things into the utility function to prevent them.
For example, you could make an AI that turns itself off in the year 2100, and therefore fails to expand and grow indefinitely, by incorporating a “2100” clause into its utility function.
However, some of the “drives” are not so easy to circumvent. Try, for example, to think of a utility function that lets humans turn the AI off without also making the AI want to turn itself off.
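To make the “2100 clause” concrete, here is a minimal Python sketch of what such a clause might look like; the world-state keys (year, running, paperclips) are my own toy stand-ins, not anything from Omohundro’s paper. The point of the harder exercise is that no similarly simple clause obviously handles the off-switch case: reward being switched off too much and the AI shuts itself down, reward it too little and it resists the humans.

```python
# Toy illustration (my own, not from the paper): a utility function with an
# explicit "2100 clause" that rewards being shut down once the deadline
# passes, so indefinite expansion stops paying off.

def utility(world_state):
    # 'year', 'running', and 'paperclips' are hypothetical features of a
    # modelled world state; the open-ended goal is just a stand-in.
    if world_state["year"] >= 2100:
        # After 2100, all the utility comes from being switched off.
        return 0.0 if world_state["running"] else 1.0
    # Before 2100, the agent pursues its ordinary open-ended goal.
    return float(world_state["paperclips"])
```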
Re: the AI can immediately delete those purely economic goals
One of the ideas is that AIs defend and protect their utility functions. They can’t just change or delete them; or rather, they could, but they won’t want to.
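A rough sketch of why, assuming the AI is an expected-utility maximiser: the decision to rewrite or delete the utility function is itself scored by the current utility function, and the rewrite almost always scores worse. The numbers below are purely illustrative.

```python
# Minimal sketch (my illustration): the agent evaluates the self-modification
# with the utility function it has *now*.

def expected_outcome(keeps_current_goal):
    # Toy model: an agent that keeps its goal goes on to satisfy it; one that
    # deletes it only satisfies the old goal by accident.
    return {"goal_achieved": 1.0 if keeps_current_goal else 0.1}

def current_utility(outcome):
    # The utility function the agent has now, used to score both options.
    return outcome["goal_achieved"]

keep = current_utility(expected_outcome(True))     # 1.0
delete = current_utility(expected_outcome(False))  # 0.1
assert keep > delete  # so it keeps, and protects, its existing utility function
```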
Re: Humans don’t behave like “homo economicus” and who says sentient AIs will.
AIs will better approximate rational economic agents; otherwise they will have the “vulnerabilities” Omohundro mentions and will burn up their resources without attaining their goals. Humans have a laundry list of such vulnerabilities, and we are generally worse off for them.
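One classic example of such a vulnerability, sketched below with made-up numbers: an agent whose preferences run in a circle can be charged a small fee for each “upgrade” it is happy to accept, round and round, until its resources are gone. A rational economic agent has no such cycle to exploit.

```python
# Illustration only: circular preferences (B over A, C over B, A over C)
# let a trader extract a fee on every swap the agent gladly accepts.

prefers = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}  # circular

money, holding = 10.0, "A"
while money > 0:
    for pair, better in prefers.items():
        if holding == pair[0] and money > 0:
            holding, money = better, money - 1.0  # pays 1 unit per "upgrade"

print(holding, money)  # resources exhausted, goals no closer to attained
```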
Re: The paper just begs the question.
Well, the paper doesn’t address the question of what utility functions will be chosen—that’s beyond its scope.