It would be good if you could summarise your strongest argument for your conclusion that “no alignment = bad for humanity”.
Things are rarely black or white, and I don’t see a partially aligned AI as necessarily a bad thing.
As an example, consider the partial alignment between a child and a parent. A parent does not simply fulfil every desire of the child, but only a subset.