I’m still intrigued by how the original post might be relevant for FAI in a way that I’m not seeing.
I didn’t quite understand what you said you weren’t seeing, but I’ll try to describe the relevance.
The normal case is that people talk about moral philosophy in a fairly relaxed emotional tone, from the point of view of “it would be nice if people did such-and-such, they usually don’t, nobody’s listening to us, and therefore this conversation doesn’t matter much”. If you’re thinking of making an FAI, the emotional tone is different, because the point of view is “we’re going to implement this, and we have to get it right, because if it’s wrong the AI will go nuts and we’re all going to DIE!!!” But then you try to sound nice and calm anyway, because accurately reflecting the underlying emotions doesn’t help, not to mention being low-status.
I think most talk about morality on this website is from the more tense point of view above. Otherwise, I wouldn’t bother with it, and I think many of the other people here wouldn’t either. A minority might think it’s an armchair philosophy sort of thing.
The problem with these discussions is that you have to know the FAI’s design is correct, so the design has to be as simple as possible. If we come up with some detailed understanding of human morality and program it into the FAI, that’s no good: we’ll never know it’s right. So IMO you need to delegate the work of forming a model of what people want to the FAI itself, and focus on how to get the FAI to build that model correctly, which is a simpler problem.
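To make the contrast concrete, here is a minimal sketch in Python (my own illustration, not anyone’s actual FAI design; every name in it is made up) of the difference between programming in a detailed value model and giving the AI a small, auditable rule for building that model itself:

```python
# Toy sketch only (hypothetical names throughout).  It contrasts
# hard-coding a detailed value model, which we could never verify,
# with a small, auditable rule that lets the AI build its own model
# of what people want.

from typing import Dict, List

# Approach 1: program in a detailed understanding of human morality.
# The real table would run to thousands of entries nobody could audit.
HARD_CODED_VALUES: Dict[str, float] = {
    "honesty": 0.9,
    "fairness": 0.8,
    # ... and so on, unverifiably ...
}

# Approach 2: keep the part we must get right small.  The AI starts
# with an empty model and updates it from observed human choices.
def update_value_model(
    model: Dict[str, float],
    options_offered: List[str],
    option_chosen: str,
) -> Dict[str, float]:
    """Nudge estimated values toward the options people actually choose."""
    updated = dict(model)
    for option in options_offered:
        delta = 0.1 if option == option_chosen else -0.1 / len(options_offered)
        updated[option] = updated.get(option, 0.0) + delta
    return updated
```

The particular update rule doesn’t matter; the point is that only the small rule, not the resulting model, has to be checked by hand for correctness.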
However, if lukeprog has some simple insight, it might be useful in this context. I’m expectantly waiting for his next post on this issue.
The part that got my attention was: “You’ll have to make the FAI based on the assumption that the vast majority of people won’t be persuaded by anything you say.”
Some people will be persuaded and some won’t, and the AI has to be able to tell them apart reliably either way, so I don’t see where assumptions about majorities come into play. They seem like an unnecessary complication once you grant the AI the kind of insight into individuals that is already assumed as the basis for the AI being relevant at all.
I.e., if it (or we) has to make assumptions for lack of understanding of individuals, the game is up anyway. So we can still approach the issue from the standpoint of individuals (such as us) influencing other individuals: an FAI doesn’t need separate group-level parameters, and because it doesn’t, its situation isn’t obviously relevantly different from anything else we can do; it can just, in theory, do it better.
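As a toy illustration of that last point (again my own sketch with made-up names, not anything from the original post): once the AI can assess individuals, a global “most people won’t be persuaded” parameter never enters the decision at all.

```python
# Toy illustration with hypothetical names: deciding whom to engage from
# per-individual predictions alone.  Note there is no majority-level
# flag anywhere; the group assumption simply has no role to play.

from typing import Callable, List

def plan_outreach(
    people: List[str],
    predict_persuadable: Callable[[str], bool],
) -> List[str]:
    """Return the people the AI predicts it can persuade."""
    return [person for person in people if predict_persuadable(person)]

# Example with a stand-in predictor (a placeholder heuristic, nothing more):
if __name__ == "__main__":
    demo_predictor = lambda name: name.startswith("A")
    print(plan_outreach(["Alice", "Bob", "Ana"], demo_predictor))  # ['Alice', 'Ana']
```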