We don’t have to solve any deep philosophical problems here, like finding the one true pointer to “society’s values” or figuring out how to analogize society to an individual.
I agree with this, in a nutshell. After all, you can put in almost whatever values you like and it will work, which is the point of my long comment.
My point is that once you have the instrumental goals, like survival and technological progress, handled for everyone, alignment in practice should reduce to this:
Everyone has their own personal superintelligence that they can brainwash to do whatever they want.
And the alignment problem is simple enough: How do you brainwash an AI to have your goals?