The part that got my attention was: “You’ll have to make the FAI based on the assumption that the vast majority of people won’t be persuaded by anything you say.”
Some people will be persuaded and some won't, and the AI has to be able to tell them apart reliably either way. So I don't see where assumptions about majorities come into play; they seem like an unnecessary complication once you grant the AI the kind of insight into individuals that's already assumed as the basis for the AI being relevant in the first place.
I.e., if it (or we) has to fall back on assumptions for lack of understanding of individuals, the game is up anyway. So we still approach the issue from the standpoint of individuals (such as ourselves) influencing other individuals: an FAI doesn't need separate group-level parameters, and because it doesn't, the scenario isn't obviously relevantly different from anything else we can do and it can, in theory, do better.