Good for creating AGI, maybe bad for surviving it. Hopefully the knowledge will also help us predict the actions of strong self-modifying AI.
It does seem promising to this layman, since it removes the best reason I could imagine for considering that last goal impossible.