Dorikka,
I don’t understand this. If the singleton’s utility function were written such that its highest value was for humans to actually become the Affront, then making humans merely believe they had become the Affront, without their actually becoming it, would not satisfy that utility function. So why would the singleton do such a thing?
I don’t think that my brain was working optimally at 1am last night.
My first point was that our CEV might decide to go Baby-Eater, and so the FAI should treat the caring-about-the-real-world-state part of its utility function as a mere preference (like chocolate ice cream), and pop humanity into a nicely designed VR (though I didn’t have the precision of thought necessary to put it into such language). However, it’s pretty absurd for us to be telling our CEV what to do, considering that it will have much more information than we do and much more refined thinking processes. I actually don’t think that our Last Judge should do anything more than watch for coding errors (as in, we forgot to remove known psychological biases when creating the CEV).
My second point was that the FAI should also slip us into a VR if we desire a world-state in which we defect against each other (with results similar to those in the prisoner’s dilemma). However, the counterargument from point 1 also applies here.
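To make the defection worry concrete, here is a minimal sketch of the standard one-shot prisoner’s dilemma (the payoff numbers are the usual textbook values, chosen purely for illustration rather than taken from anything above): each player’s individually best move is to defect, yet mutual defection leaves both worse off than mutual cooperation, which is the kind of collectively self-defeating world-state I had in mind.

```python
# Illustrative one-shot prisoner's dilemma payoffs (textbook values, assumed
# for this sketch only). Each entry maps (my_move, their_move) -> (my_payoff, their_payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """Return the move that maximizes my own payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Defecting is the dominant strategy for each player individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert PAYOFFS[("defect", "defect")] < PAYOFFS[("cooperate", "cooperate")]
```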