Then why don’t you go ahead and slay it? I share your dislike for surface analogies, but it seems like this one runs deeper.
Although a real cow doesn’t have the intelligence to form that thought, the point is that the hypothetical cow reasons, “It takes intelligence to increase my utility function; therefore an intelligence much greater than mine must produce even greater increases in my utility.” It turns out that the cow is wrong, and we are the counterexample: there are supercow intelligences running around, but they kill and eat cows, which is presumably not something the cow wants.
If you get the exact same argument out of a human brain, it’s just as invalid, though (thankfully) there isn’t any real-life example to point to.
The deep connection is the same: there is more than one possible utility function, and a greater intelligence increases whichever one it actually has, not necessarily yours.
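As a toy sketch of that last point (all of the action names and numbers below are hypothetical, invented purely for illustration): a more capable optimizer picks whichever action maximizes *its own* utility function, and nothing forces that choice to increase some other agent's utility.

```python
# Toy illustration: a stronger optimizer maximizes ITS OWN utility function,
# which need not increase anyone else's. All names and values are hypothetical.

actions = ["leave_cow_alone", "feed_cow", "eat_cow"]

# The cow's utility over possible outcomes.
cow_utility = {"leave_cow_alone": 0, "feed_cow": 10, "eat_cow": -100}

# A "supercow" intelligence's utility over the same outcomes.
supercow_utility = {"leave_cow_alone": 0, "feed_cow": -1, "eat_cow": 50}

# The smarter agent optimizes its own utility function...
chosen = max(actions, key=lambda a: supercow_utility[a])

# ...and the cow's utility can go down as a result.
print(chosen)               # eat_cow
print(cow_utility[chosen])  # -100
```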