The default case of FOOM is an unFriendly AI
Before this, we also have: “The default case of an AI is to not FOOM at all, even if it’s self-modifying (like a self-optimizing compiler).” Why not anti-predict that no AIs will FOOM at all?
This AI becomes able to improve itself in a haphazard way, makes various changes that are net improvements but may introduce value drift, and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever).
Given the tiny minority of AIs that will FOOM at all, what is the probability that an AI designed for a purpose other than FOOMing will instead FOOM?