How are we possibly aiming for "much more specific goals"? Remember that evolution intra-aligned brains to each other through altruism; we only need to improve on that.
And regardless, we could ignore human values entirely and just create AI that optimizes for human empowerment (maximization of our future optionality, i.e. our future potential to fulfill any goal).
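To make "future optionality" a bit more concrete, here is a minimal sketch of the standard empowerment formalism that parenthetical gestures at, under the simplifying assumption of a deterministic toy environment (the function names and toy world are my own illustration, not something from this thread): in the deterministic case, n-step empowerment collapses to the log of the number of distinct states reachable within n steps.

```python
from itertools import product
from math import log2

def empowerment(state, step, actions, n):
    """n-step empowerment of `state` under a deterministic transition function `step`.

    Illustration only: for deterministic dynamics the channel capacity between
    action sequences and resulting states equals log2(#reachable states).
    """
    reachable = set()
    for plan in product(actions, repeat=n):  # every length-n action sequence
        s = state
        for a in plan:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

# Toy 1-D world: the agent can move left, right, or stay on a line of 10 cells.
step = lambda s, a: max(0, min(9, s + a))
print(empowerment(5, step, actions=(-1, 0, 1), n=2))  # ~2.32 bits: 5 reachable cells
print(empowerment(0, step, actions=(-1, 0, 1), n=2))  # ~1.58 bits: wall limits options
```

An empowerment-maximizing AI in this framing would act to keep that quantity high for the human, rather than to satisfy any particular human value.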