What are the consequences of that?
We have no idea how to do expected utility calculations in these kinds of situations. Furthermore, even if the AI figured out some way to do them, e.g., using some form of renormalization, we have no reason to believe the result would resemble our preferences at all.
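For concreteness, a standard expected utility calculation is just a probability-weighted sum over outcomes; the trouble starts when the outcomes or utilities are unbounded, so the sum diverges or depends on arbitrary choices. The sketch below is purely illustrative and assumes a hypothetical cutoff-based "renormalization" (clipping utilities at an arbitrary bound); it is not anyone's proposed method. Its point is only that the renormalized answer depends on the arbitrary cutoff, so there is no obvious reason it would track our preferences.

```python
# Illustrative sketch only: standard expected utility, plus a hypothetical
# cutoff-based "renormalization" for unbounded utilities. The cutoff scheme
# is an assumption for illustration; nothing here is a known solution.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs whose probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

def truncated_expected_utility(outcomes, cutoff):
    """Clip each utility into [-cutoff, +cutoff] before taking the expectation."""
    return sum(p * max(-cutoff, min(cutoff, u)) for p, u in outcomes)

# Two gambles: A offers a tiny chance of an astronomically large payoff,
# B offers a modest payoff for sure.
gamble_a = [(1e-9, 1e12), (1 - 1e-9, 0.0)]
gamble_b = [(1.0, 100.0)]

for cutoff in (1e3, 1e6, 1e12):
    a = truncated_expected_utility(gamble_a, cutoff)
    b = truncated_expected_utility(gamble_b, cutoff)
    preferred = "A" if a > b else "B"
    print(f"cutoff={cutoff:g}: EU(A)={a:g}, EU(B)={b:g} -> prefers {preferred}")

# The preferred action flips as the arbitrary cutoff grows, illustrating why a
# renormalized result need not resemble what we actually care about.
```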