Yes, the individual and the collective are tightly coupled with short-term and long-term goals, which exist within individuals too. I think it’s interesting to think of yourself as a city, where you sometimes need to make systemic changes to enable individual flourishing.
I really think there is something to making alignment the actual goal of AI—but in a way where the paradoxical nature of alignment is acknowledged, so the AI is not looking for a “final solution” but is rather measuring the success of various strategies in lowering society’s (to return to the metaphor of the individual) cognitive dissonance.
Just to note, your last paragraph reminds me of Stuart Russell’s approach to AI alignment in Human Compatible. And I agree this sounds like a reasonable starting point.
Thanks, very astute point.
There’s a tiny possibility he may have influenced my thinking. I did spend six months editing him, among others, for a documentary.