That all sounds very plausible. But isn’t this all mostly relevant before AGI is a possibility? That would be a heavy negative tail risk, in which people motivated to “do great things” are quite prone to get us all killed. Should we survive that risk, progress probably mostly won’t be driven by humans, so humans doing great things will barely count. If humans are actually still in charge when we hit ASI, it seems like doing great things with them will probably still have large tail risks (inter-ASI wars).
Right? Or do you see it differently?
It’s a fascinating empirical claim that sounds right, now that I hear it.
AGI is heavy-tailed in both directions, I think. I don’t think we get utopias by default even without misalignment, since governance of AGI is so complicated.