This doesn’t sit well with dreams of making a superintelligent FAI that will be the last invention we ever need to make, after which we will have attained the perfect life for everyone, forever.
Indeed, but it is consistent with the argument that it is important not to get AI wrong in a way that chains the future.