Could any AI be “friendly” enough to keep things local?
Any goal, any criterion for action, any ethical principle or decision procedure can be part of how an AI makes its choices. Whether it attaches utility to GDP, national security, or paperclips, it will act accordingly. If it is designed to regard localism as an axiomatic virtue, or if its traits otherwise incline it to agree with Voltaire, then it will act accordingly. The question for FAI designers is not whether it could be like that; the question is whether it should be.