I’m starting to suspect that AGI might require decision-theoretic insights about reflection in order to be truly dangerous.
Another way in which decision-theoretic insights may be harmful is if they increase the sophistication of UFAIs and allow them to control less sophisticated AGIs in other universes.
They seem to be intent on laying the groundwork for the ennead.
I’m trying to avoid being too confrontational, since that might backfire, and I might be wrong myself. It seems safer to just push them to be more strategic, so that they either see the danger themselves or explain why it’s a good idea despite the dangers.