“Aligned to whom?” remains the question in the field of “AI alignment,” and, as it has been since the field’s beginning, it continues to be hand-waved away.
The question is how to build a powerful AI such that doing so is a good thing rather than a bad thing. Perhaps even the question of there being survivors shouldn’t insist on the definite article, on being the question, since there are many questions, of varying severity, that are not mutually exclusive.
Do you believe this answers the question “… for whom?”, or are you helpfully illustrating how it typically gets hand-waved away?
The use of the definite article does not imply there are no other questions, just that they are all subordinate to this one.
The question “Aligned to whom?” is vague enough to admit many reasonable interpretations, but it has some unfortunate connotations. It sounds like there’s a premise that AIs are always aligned to someone, making the possibility that they are aligned to no one but themselves less salient. And it promotes a frame of competition, rather than one of distributing radical abundance: a frame in which there is possibly someone who gets half of the universe.