That is not what most AI doomers are worried about. They are worried that AI will simply steamroll over us, as it pursues its own purposes. So the problem there is indifference, not malevolence.
That is the basic worry associated with “unaligned AI”.
If one supposes an attempt to “align” the AI, by making it an ideal moral agent, or by instilling benevolence, or whatever one’s favorite proposal is—then further problems arise: Can you identify the right values for an AI to possess? Can you codify them accurately? Can you get the AI to interpret them correctly, and to adhere to them?
Mistakes in those areas, amplified by irresistible superintelligence, can also end badly.