None of these ideas seem especially promising even for achieving temporary power over the world (sufficient for preventing creation of other AI).
It seems even harder to achieve a long-term stable and safe world environment, in which we can take our time to solve remaining philosophy and AI safety problems and eventually realize the full potential value of the universe.
Some of them (using other forms of narrow AI to take over the world, Christiano’s approach) seem to require solving something like decision theory or metaphilosophy anyway to ensure safety.