I doubt I’d do an unassailable job. And an unassailable job is probably necessary, if everyone is to abide by the Laws of Fun forever.
...
Unless we plan to deceive them about the nature of the post-singularity universe, their not being allowed X will be known to be somebody’s fault: someone believed in some Laws of Fun, programmed them into an AI, and that AI determined that the best way to optimize for that kind of Fun was to disallow X.
One point of that sequence is that typical suggestions for Utopias are abominably flawed, that certain things could be done much better, and that people don’t spontaneously notice these flaws upon hearing a description of a Utopia. The sequence draws attention to certain problems of human judgment, trains you to notice such problems whenever you see another proposed “Utopia”, and makes you less gullible.
It is emphatically NOT a suggested set of rules for an AGI to enforce; indeed, one of the arguments that could be drawn from that sequence is that trying to manually construct such rules is a bad idea, and that lists of problems like those in that sequence, or the one given in this post, will inevitably follow from any simplistic ruleset. You don’t program rules into an AI, you program a way of figuring out what the rules should be (and those rules have to get down to the level of saying how to arrange atoms, so they won’t have much to do with verbal descriptions of the human condition).