It sounds like, “the better you do at maximizing your utility function, the more likely you are to get a bad result,” which can’t be true with the ordinary meanings of all those words. The only ways I can see for this to be true are if you aren’t actually maximizing your utility function, or if your true utility function is not the same as the one you’re maximizing. But then you’re just plain old maximizing the wrong thing.
Absolutely, granted. I guess I just found this post to be an extremely convoluted way to make the point of “if you maximize the wrong thing, you’ll get something that you don’t want, and the more effectively you achieve the wrong goal, the more you diverge from the right goal.” I don’t see that the existence of “marketing worlds” makes maximizing the wrong thing more dangerous than it already was.
Additionally, I’m kinda horrified by the class of fixes (of which the proposal is a member) which involve doing the wrong thing less effectively. Not that I have an actual fix in mind. It just sounds like a terrible idea—“we’re pretty sure that our specification is incomplete in an important, unknown way. So we’re going to satisfice instead of maximize when we take over the world.”
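The divergence-under-optimization point can be made concrete with a toy model (my own construction, not from the post): each candidate splits a fixed budget between real quality (“substance”) and looking good to the evaluator (“gaming”), the proxy only sees appearance, and gaming buys appearance more cheaply than substance does. All the names and numbers here are illustrative assumptions.

```python
# Hypothetical sketch: maximizing a proxy vs. satisficing it,
# measured against the "true" utility the proxy was meant to track.
import random

random.seed(0)
BUDGET = 10.0

def make_candidate():
    # Each candidate splits a fixed budget between substance and gaming.
    gaming = random.uniform(0, BUDGET)
    substance = BUDGET - gaming
    # The proxy only sees appearance; gaming is twice as appearance-efficient.
    appearance = substance + 2 * gaming
    return substance, appearance

candidates = [make_candidate() for _ in range(10_000)]

# Maximizing the proxy selects the candidate that spent nearly everything
# on gaming -- true quality (substance) near zero.
best_by_proxy = max(candidates, key=lambda c: c[1])

# Satisficing: accept any candidate whose proxy score clears a bar,
# which leaves room for candidates that still have real substance.
acceptable = [c for c in candidates if c[1] >= 12]
satisficed = random.choice(acceptable)

print(f"maximizer's true quality:  {best_by_proxy[0]:.2f}")
print(f"satisficer's true quality: {satisficed[0]:.2f}")
```

The harder the proxy is pushed, the more of the budget goes to gaming, which is exactly the “more effectively you achieve the wrong goal, the more you diverge from the right goal” shape; the satisficer’s pick is only better on average, which is part of why satisficing feels like an unsatisfying fix.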
Er, yes? But we don’t exactly have the right thing lying around, unless I’ve missed some really exciting FAI news...