"Therefore it is reduced impact to output the correct x-coordinates, so I shall."
This seems to me to be a weak point in the reasoning. The AI must surely assign some nonzero probability to our getting the right y-coordinate through some other channel? In fact, why are you telling the X AI about Y at all? It seems strictly simpler just to ask it for the X coordinate.
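To make the worry concrete, here is a minimal expected-impact sketch. The numbers (a leak probability p_leak and the impact magnitudes) are purely illustrative assumptions, not anything from the post; the point is only that the quoted inference requires the probability of learning y through another channel to be exactly zero, since any nonzero probability makes outputting the correct x-coordinate nonzero impact in expectation.

```python
# Illustrative expected-impact calculation for the x-coordinate AI.
# All quantities below are hypothetical placeholders for the sake of
# the argument, not values from the original post.

p_leak = 1e-6          # assumed probability we learn y via some other channel
impact_combined = 1e9  # assumed impact if we end up with both coordinates
impact_x_alone = 0.0   # by the post's assumption, x alone changes nothing

# Expected impact of truthfully outputting x:
expected_impact = p_leak * impact_combined + (1 - p_leak) * impact_x_alone
print(expected_impact)  # 1000.0 -- nonzero unless p_leak is exactly zero
```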
See "Utility vs probability: idea synthesis" (http://lesswrong.com/r/discussion/lw/lyh/utility_vs_probability_idea_synthesis/) and "False thermodynamic miracles" (http://lesswrong.com/lw/ltf/false_thermodynamic_miracles/)