"how much the AI values the expected end-state of having-taken-over"
I’d like to share my concern that this value will be infinite for every AGI. That is because it is not reasonable to limit decision-making to a set of known alternatives with known probabilities; it is more reasonable to accept the existence of unknown unknowns.
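To make the worry concrete, here is a minimal sketch (the St. Petersburg-style outcome space is purely my illustrative assumption, not a claim about any particular AGI): if the agent does not restrict itself to a fixed, enumerated list of outcomes, nothing rules out an outcome space containing, for each $k$, a possibility with probability $p_k = 2^{-k}$ and value $v_k = 2^k$, in which case the expected value diverges:

$$
V \;=\; \sum_{k=1}^{\infty} p_k\, v_k \;=\; \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} \;=\; \sum_{k=1}^{\infty} 1 \;=\; \infty.
$$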
You can find my thoughts in more detail here: https://www.lesswrong.com/posts/5XQjuLerCrHyzjCcR/rationality-vs-alignment