After all, if we agree that there is a set of values, a set of behaviors, that we would want a superintelligence acting in humanity’s best interest to have, why wouldn’t I myself choose to hold those values and adopt those behaviors?
I know Jordan Peterson is quite the controversial figure, but that’s some core advice of his: aim for the highest, the best you could possibly aim for. What else is there to do? We’re bounded by death; we’ve got nothing to lose and everything to gain, so why not aim for the highest?
What’s quite interesting is that, if you do what it is that you’re called upon to do—which is to lift your eyes up above the mundane, daily, selfish, impulsive issues that might upset you—and you attempt to enter into a contractual relationship with that which you might hold in the highest regard, whatever that might be—to aim high, and to make that important above all else in your life—that fortifies you against the vicissitudes of existence, like nothing else can. I truly believe that’s the most practical advice that you could possibly receive.
I sincerely believe there is nothing more worthwhile for us humans to do than that: aim for the best, for ourselves, for our families, for our communities, for the world, in the now, in the short term, and in the long term. It seems… obvious? And if we truly work that out and act on it, wouldn’t that help convince an AGI to do the same?
Having read your post, I have disagreements with your expectations about AGI.
But it doesn’t matter. It seems that we agree that “human alignment”, and self-alignment to a better version of human ethics, are very worthwhile tasks (and so is civilizational alignment, even though I don’t hold much hope for it yet).
To put it this way: if we align our civilization, we win, because once aligned, we wouldn’t build AGI unless we were absolutely sure it would be safe and aligned with our values.
My hope is that, at the very least, we can bring the humans directly involved in building systems that might become AGI into alignment with our principles regarding AI safety.
I fully agree here. This is a very valuable post.
(You might be interested in this recent post of mine)