Yeah, granted that premise, and given that maximizing utility may very well involve telling you stuff, option 2 seems to imply one of the following:

- you don't trust Omega
- you don't trust your utility function
- you have objections (other than trust) to accepting direct help from an alien supercomputer
The second of these possibilities seems the most compelling; we aren't Friendly in a strong sense. Depending on Omega's idea of your utility function, you can argue that maximizing it would be a disaster from a more general perspective: either because your utility function is hopelessly parochial and likely to need modification once we better understand metaethics and fun theory, or because you aren't really all that ethical at whatever level Omega is going to be looking at. The latter is almost certainly true, and the former at least seems plausible.