Hi Donald, author of opentheory.net here. Really appreciate the thoughtful comment. A few quick notes:
I definitely (and very strongly) do not “predict that agents that believe in open individualism will always cooperate in prisoners dilemmas”. As I said in the OP, “an open individualist who assumes computationalism is true (team bits) will have a hard time coordinating with an open individualist who assumes physicalism is true (team atoms) — they’re essentially running incompatible versions of OI and will compete for resources.” I would say OI implies certain Schelling points, but I don’t think an agent that believes in OI has to always cooperate, largely because of the ambiguity in what a ‘belief’ may be; there’s a lot of wiggle-room here, and it’s best to look at the implementation.
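To make that concrete, here’s a minimal game-theoretic sketch (mine, not anything from the OP): two agents that both endorse OI play a one-shot prisoner’s dilemma, but cooperation is only a Schelling point when their implementations of OI are compatible. The payoff values and the `substrate` field are illustrative assumptions, not anything specified in the post.

```python
from dataclasses import dataclass

# Standard prisoner's dilemma payoffs, indexed by (my_move, their_move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

@dataclass
class OIAgent:
    """An agent that believes in open individualism, parameterized by the
    substrate it takes identity to supervene on (hypothetical field:
    "bits" for computationalism, "atoms" for physicalism)."""
    name: str
    substrate: str

    def move(self, other: "OIAgent") -> str:
        # Cooperation is a Schelling point that only exists when both agents
        # run compatible versions of OI; incompatible versions compete
        # for resources and default to defection.
        return "C" if self.substrate == other.substrate else "D"

def play(a: OIAgent, b: OIAgent) -> tuple[int, int]:
    """Play one round and return the pair of payoffs."""
    return PAYOFFS[(a.move(b), b.move(a))]

alice = OIAgent("alice", substrate="bits")   # team bits
bob = OIAgent("bob", substrate="bits")       # team bits
carol = OIAgent("carol", substrate="atoms")  # team atoms

print(play(alice, bob))    # (3, 3): shared implementation -> mutual cooperation
print(play(alice, carol))  # (1, 1): incompatible versions of OI -> mutual defection
```

The point of the toy model is that “believes in OI” underdetermines behavior; what actually predicts cooperation is whether the two implementations match.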
I think the overall purpose of discussing these definitions of personal identity is, first, dissolving confusion (and perhaps seeing how tangled up the ‘Closed Individualism’ cluster is); and second, trying to decipher Schelling points for each theory of identity. We only get predictions indirectly, via this second factor; mostly this is a definitional exercise.
Doesn’t the efficacy of open individualism depend on whether the number of intelligent agents is high enough for them to “interfere” with one another’s resources? (That is, overpopulation, which is also a basis for some fears about AI risk.)