I’ve critiqued this “value is complex” [http://lesswrong.com/lw/y3/value_is_fragile/] material before. To summarise from my objections there:
The utility function of Deep Blue has 8,000 parts and contains a lot of information. Throw all that information away, and all you really need to reconstruct Deep Blue is the knowledge that its aim is to win games of chess. The exact details of the original utility function would not be recovered, but the eventual functional outcome would be much the same: a powerful chess computer.
The supposed complexity is actually a bunch of implementation details that can be effectively recreated from the goal—if that should prove to be necessary.
It is not precious information that must be preserved. If anything, attempts to preserve the 8,000 parts of Deep Blue’s utility function while improving it would actually have a crippling negative effect on its future development. For example, the “look 9 moves ahead” heuristic is a feature when the program is weak, but a serious bug when it grows stronger.
The same goes for the complexity of human values: it is a bunch of implementation details for dealing with the problem of limited resources, not some kind of representation of the real target.
It looks like this is a response to the passing link to http://wiki.lesswrong.com/wiki/Complexity_of_value in the article. At first I didn’t understand what in the article you were responding to.
The article it was posted in response to was this one—from the conclusion of the post:
http://wiki.lesswrong.com/wiki/Complexity_of_value
That’s a wiki article, which can’t be responded to directly. The point I raise is an old controversy. This message now seems rather redundant, since the question it responded to has subsequently been dramatically edited.
Yes, I edited, but before your response. Sorry for the confusion.
Why was this comment voted down so much (to −4 as of now)? It seems to be a reasonable point, clearly written, not an obvious troll or off-topic. Why does it deserve to be ignored?
It is off topic. The article was not about value being complex, fragile, or hard to preserve.