I was hopeful that there would actually be 240 questions, presumably as a linked quiz/survey or something.
For the questions that are already there:
1.1: 100% self-expression
1.2: 100% health
2.1: I’m not sure, and not for unselfish reasons. I know that pleasure being desirable is basically the most fundamental thing about utility functions, but the idea of eliminating hedonic adaptation so as to live a life of constant happiness significantly greater than baseline contentment actually scared me when Eliezer discussed it at one point in the Sequences. Because of this fear, I wouldn’t answer 100%. Any answer in between seems unsatisfactory: if I were at the 90%-as-good-as-the-best-possible feast/orgy/LAN party, I would have to wonder about the actual best possible one, and want more.
Thus, I would choose 100% justice, ignoring the wizard completely.
2.2: 100% pleasure.
I don’t think the questions perfectly encapsulate a trade-off between one value and another; the hypotheticals either need to be refined, or replaced with a much larger sampling of hypotheticals that actually force these trade-offs.
Also, I don’t think everyone’s values perfectly match the sixteen you give. To me, the list reads more like a collection of applause lights than an actual list of intrinsic values. I expect a real list of things intrinsically valued by a specific human’s utility function would be much messier, with some values being very broad things that encompass or cause several of the equivalent values on your list, and others being ridiculously specific additions or exceptions. And I would expect every human’s list to be different, except when abstracted to the point of maybe-uselessness.