This cries for a poll. To make this into a more balanced question, I changed the “simulation” variant into something more ‘real’:
[pollid:750]
Granting the question’s premise that we have a utility function, you have just defined option 1 as the rational choice.
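As a toy illustration of that “by definition” point (the outcome names and utility numbers below are purely hypothetical, not anything from the poll): since option 1 is defined as Omega picking the outcome that scores highest under your utility function, its value is a maximum over the same outcomes, so it can never come out behind any specific alternative like “just tell me things”.

```python
# Minimal sketch, assuming a made-up utility function u over outcomes
# Omega could bring about. Option 1 ("maximize u") is rational by
# construction: its value is the maximum of u, so it is always >= the
# value of any particular alternative such as option 2.

u = {  # hypothetical utilities, chosen only for illustration
    "tell_me_useful_facts": 60,
    "leave_me_alone": 10,
    "rearrange_my_life_optimally": 95,
}

option_1 = max(u.values())            # Omega steers to the u-best outcome
option_2 = u["tell_me_useful_facts"]  # one specific, possibly suboptimal outcome

assert option_1 >= option_2           # holds for any numbers you plug in
print("option 1 value:", option_1, "| option 2 value:", option_2)
```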
Indeed, it collapses in my mind to a question of:
Would you rather [pollid:751]
I expected the cross options to receive no votes. But they did. That means it was not an equivalent question and some of the details did matter. I would appreciate a clarification of what those details were.
Yeah, granted that premise and given that maximizing utility may very well involve telling you stuff, option 2 seems to imply one of the following:
you don’t trust Omega
you don’t trust your utility function
you have objections (other than trust) to accepting direct help from an alien supercomputer
The second of these possibilities seems the most compelling; we aren’t Friendly in a strong sense. Depending on Omega’s idea of your utility function, you can make an argument that maximizing it would be a disaster from a more general perspective, either because you think your utility function is hopelessly parochial and likely to need modification once we better understand metaethics and fun theory, or because you don’t think you’re really all that ethical at whatever level Omega’s going to be looking at. The latter is almost certainly true, and the former at least seems plausible.
Judging from the vote, that doesn’t seem to be the case. I guess the options are still not phrased precisely enough. Probably “utility” needs to be made clearer.