AFAICT, the origin of these ideas is here:
http://lesswrong.com/lw/l3/thou_art_godshatter/
http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/
http://lesswrong.com/lw/lq/fake_utility_functions/
http://lesswrong.com/lw/y3/value_is_fragile/
This seems to have led a slew of people to conclude that simple values lead to simple outcomes. You yourself suggest that the simple value of “filling the universe with orgasmium” is one whose outcome would mean that “the future of the universe will turn out to be rather simple”.
Things like that seem simply misguided to me. IMO, there are good reasons for thinking that that would lead to enormous complexity—in addition to lots of orgasmium.
...but not in the least convenient possible world with an ontologically simple turn-everything-into-orgasmium button; and the sort of complexity you mention, which (I agree) would be involved in the actual world, isn't a sort that most people regard as terminally valuable.
Here we were talking about a superintelligent agent whose “fondest desire is to fill the universe with orgasmium”. About the only way such an agent would fail to produce enormous complexity is if it died—or was otherwise crippled or imprisoned.
Or if the agent has a button that, through simple magic, directly fills the universe with (stable) orgasmium. Did you even read what I wrote?
Whether humans would want to live in—or would survive in—the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me—and it seems rather irrelevant to the point under discussion.
Human morality is the point under discussion, so of course it’s relevant. It seems clear that the chief kind of “complexity” that human morality values is that of conscious (whatever that means) minds and societies of conscious minds, not complex technology produced by unconscious optimizers.
Re: Did you even read what I wrote?
I think I missed the bit where you went off into a wild and highly-improbable fantasy world.
Re: Human morality is the point under discussion
What I was discussing was the “tendency to assume that complexity of outcome must have been produced by complexity of value”. That is not specifically to do with human values.