Indeed. It’s a good thing nobody would cooperate with trying to make AIs that run on their own... *holds finger to in-ear monitor* ...ah, crap, never mind.
So I do think it’s unlikely that Yudkowsky’s fear of a sudden, totalizing AI comes true exactly as described, at least not for a while, because as you point out, that’s much harder than weaker forms of growth. But this threat model does not massively reassure me: humans could simply spend a while disempowered before dying. It gives us a bigger window, but actually achieving reliable integration with the human network of overlapping utility functions (people objectively caring about each other) is still not guaranteed and is worth pushing hard for. (Not that you implied otherwise; I’m just reciting the thing I’d want to say to a random person who linked me this post.)
Strong upvote.