“Identity for NPCs” sounds a bit dismissive for a mechanism that works pretty well (to me it sounds close to “cooperating is for suckers, rationalists should defect”).
Oh. Um, to put it in Prisoner’s Dilemma language, I’d rather say: rational people can analyze the situation and choose whom to cooperate with even if the other person is different, but stupid people need some simple and safe algorithm, such as: “cooperate with identical copies of myself, defect against everyone else”.
Which works decently if you have many identical copies in the same place, interacting mostly with each other. You cannot be exploited by someone using a smarter algorithm. That’s a relatively impressive outcome for such a simple algorithm.
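To make the “cooperate only with identical copies” strategy concrete, here is a minimal sketch (the names `clone_strategy`, `play`, and the marker strings are illustrative, not from the original) of how it plays out in a one-shot Prisoner’s Dilemma with the standard payoffs:

```python
# One-shot Prisoner's Dilemma payoffs (row player, column player):
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def clone_strategy(my_marker, opponent_marker):
    """Cooperate only when the opponent carries my exact group marker."""
    return "C" if opponent_marker == my_marker else "D"

def play(marker_a, marker_b):
    """Both players use clone_strategy; return their payoffs."""
    a = clone_strategy(marker_a, marker_b)
    b = clone_strategy(marker_b, marker_a)
    return PAYOFF[(a, b)]

# Two identical copies cooperate and each get the reward payoff:
print(play("tribe-1", "tribe-1"))  # (3, 3)
# An agent with a different marker is defected against: both get the
# punishment payoff, but the clone never receives the sucker payoff.
print(play("tribe-1", "tribe-2"))  # (1, 1)
```

The point of the sketch is that this strategy never ends up with the sucker payoff: it only cooperates when the opponent is an identical copy, who by construction cooperates back.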
The disadvantage is that you can’t cooperate even with people running functionally identical algorithms under different group markers (e.g. speaking a different language, or dressing differently). You also have to suppress your own deviations from the official norm (e.g. a minority sexual orientation). And there is a barrier to self-improvement, because the improved version is by definition different from the original one. On the other hand, if you somehow manage to impose a positive change on everyone at the same time (or very slowly over a long period), the positive change will be preserved. Unfortunately, the same goes for negative changes.
It’s not clear that someone analyzing the situation will choose to cooperate; plenty of smart people have argued that the rational behavior in the Prisoner’s Dilemma is to defect.
And I would argue that even for smart people, a simple algorithm is more likely to get everyone on board; a complicated but “better” solution counts as better only if everybody follows it, and if no one else does, it is not worth following.