A god smart enough to know what’s good for us is smart enough not to need a prayer to be summoned.
The god might give great weight to individual preferences. I have tried to convince lots of people to sign up for cryonics. When I say something like “if it were free and you knew it would work, would you sign up?” some people have said “no”, or even “of course not.” Plus, the god might have resource constraints, and at the margin it could be a close call whether to bring me back; my stating a desire to be brought back could tip the god to do so with a probability high enough to justify the time I spent making the original comment.
For many people, 32 karma would also be sufficient benefit to justify the investment made in the comment.
Our stated preferences are predictably limited and often untrue accounts of what actually constitutes our well-being and our utility to those around us. I’m not sure I want to wake up to a god psychologically incompetent enough to revive people chiefly by weighing their stated wishes. If there are resource constraints, which I highly doubt, it’s especially important to make decisions based on reliable data.
When I say something like “if it were free and you knew it would work, would you sign up?” some people have said “no”, or even “of course not.”
I think this much more likely reflects the dynamics of the discussion, the perceived unlikelihood of the hypothetical, and the badness of death than actual preferences. If the hypothetical is improbable enough, changing your mind has only the cost of losing social status and whatever comforting lies you have learned to keep death off your mind, and not much upside to speak of.
Consent seems to be an important ethical principle for many people, and an FAI might well end up implementing it in some form.
True. Since people are so irrational, not to mention inconsistent and slow, getting consent right might be one of the most difficult problems for FAI. The whole concept of consent in the presence of a much more powerful mind seems pretty shaky.
I can easily imagine that if I ran a simulation of mankind’s evolutionary history, I’d adopt a principle of responding to simulants’ requests, provided the requests are small enough and won’t interfere with the goals of the simulation, just in case the simulants have some awareness. If the purpose of the simulation isn’t simply to satisfy all the simulants’ needs for them (doing so would in fact be orthogonal to its actual purpose), they would have to make some kind of request before I did anything for them.