You have merely redefined the goal from ‘the benefit of humanity’ to ‘a non-dead-end goal’, which may turn out to be just as hairy.
Even more hairy. Any primary goal will, I think, eventually end up producing a paperclipper. We need more research into how intelligent beings (i.e., humans) actually function. I do not think people, with rare exceptions, actually have primary goals, only temporary, contingent goals adopted to meet temporary ends. That is one reason I don’t think much of utilitarianism: people’s “utilities” are almost always temporary, contingent, and self-limiting.
This is also one reason I have said that I think provably Friendly AI is impossible. I will be glad to be proven wrong if it does turn out to be possible.