Having read your post, I have disagreements with your expectations about AGI.
But it doesn’t matter. It seems we agree that “human alignment”, and self-alignment to a better version of human ethics, is a very worthwhile task (and so is civilizational alignment, even though I don’t hold much hope for it yet).
To put it this way: if we align our civilization, we win. Once aligned, we wouldn’t build AGI unless we were absolutely sure it would be safe and aligned with our values.
My hope is that we can, perhaps, at least align the humans directly involved in building systems that might become AGI with our principles regarding AI safety.