Why would person A being significantly smarter be a bad thing? Just from the danger of being hacked? I’m not thinking of anything else that would weigh against the extra utility from their intelligence.
If you have two agents who can read each other's source code, they could cooperate on a prisoner's dilemma, since each would have assurance that the other won't defect.
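A minimal sketch of that idea (hypothetical, not from the thread), with a plain string label standing in for an agent's actual source code: an agent cooperates only when the opponent's "source" is identical to its own, so two copies of the same agent get assured mutual cooperation, while anything else gets defection.

```python
def make_clique_bot():
    # The agent's "source" is represented by a string label; two agents
    # built by this factory share identical source and so cooperate.
    source = "clique_bot_v1"

    def agent(opponent_source):
        # Cooperate iff the opponent is verifiably a copy of ourselves.
        return "C" if opponent_source == source else "D"

    agent.source = source
    return agent


def make_defect_bot():
    # An unconditional defector, for contrast.
    def agent(opponent_source):
        return "D"

    agent.source = "defect_bot_v1"
    return agent


def play(a, b):
    # Each agent reads the other's source before choosing a move.
    return a(b.source), b(a.source)


print(play(make_clique_bot(), make_clique_bot()))  # ('C', 'C')
print(play(make_clique_bot(), make_defect_bot()))  # ('D', 'D')
```

Two copies cooperate with full assurance, while the same agent safely defects against an agent whose source differs, which is the assurance the comment points at.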
Of course we can’t read each other’s source code, but if our intelligence, or rather our ability to assess each other’s honesty, is roughly matched, the risk of the other side defecting is at its lowest possible point short of that (in the absence of more complex situations where we have to think about signalling to other people), wouldn’t you agree?
When one side is vastly more intelligent/capable, the cost of defection is clearly much smaller for the more capable side.
All else being equal, it seems an A would rather cooperate with a B than another A, because the cost to predict defection is lower. In other words, Bs come with a discount on the cognitive resources needed, despite their inferior maps, and even As get that discount when working with Bs! What I wanted to say with the PS post was that under certain circumstances (say, very expensive cognitive resources) the opportunity costs of a bunch of As cooperating, especially As with group norms that actively exclude Bs, can’t be neglected.
All else being equal, it seems an A would rather cooperate with a B than another A, because the cost to predict defection is lower.
The cost to predict consciously intended defection is lower.
I can and have produced numerous examples of Bs unintentionally defecting in our society, but for a less controversial example, let us take a society now deemed horrid. Consider the fake Nazi Dr. Wernher von Braun. Von Braun was an example of an A; his associates were examples of Bs. He saved their lives by lying to them and to others, arranging for them to be captured by the Americans rather than the Russians. Meanwhile, the Bs around him were busy trying to get him killed, and themselves killed.
The cost to predict consciously intended defection is lower.
I generally find it easier to predict behaviour when people pursue their interests than when they pursue their ideals. If their behaviour matches their interests rather than a set of ideals that they hide, isn’t it easier to predict their behaviour?