I think that memetically/genetically evolved heuristics are likely to differ systematically from CDT.
Here’s a brief argument why they would (and why they might diverge specifically in the direction of FDT): the metric evolution optimizes for is inclusive genetic fitness, not merely fitness of the organism. Witness kin selection. The heuristics that evolution would install to exploit this would tend to be: act as if there are other organisms in the environment running a similar algorithm to you (i.e. those that share lots of genes with you), and cooperate with those. This is roughly FDT-reasoning, not CDT-reasoning.
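To make that concrete, here is a minimal toy sketch (my own illustration, with made-up payoffs, not part of the original argument): Hamilton's rule says a disposition to help relatives is favoured by selection when r·b > c, where r is relatedness, b the benefit to the recipient, and c the cost to the actor. The heuristic conditions on how much of your "algorithm" (genome) the other organism shares, which is roughly the FDT-flavoured move, whereas tallying only the actor's own direct payoff is the CDT-like view.

```python
# Toy sketch (illustrative numbers only): Hamilton's rule favours helping
# when r * b > c, i.e. when the relatedness-weighted benefit to the
# recipient exceeds the cost to the actor.  A heuristic that weighs
# benefits to gene-sharers by relatedness endorses costly helping that a
# purely self-regarding (CDT-ish) tally would reject.

def inclusive_fitness_gain(r: float, b: float, c: float) -> float:
    """Net inclusive-fitness change from one act of helping."""
    return r * b - c

# Hypothetical payoffs: helping costs the actor 1 unit and gives the recipient 3.
for r, label in [(0.5, "full sibling"), (0.125, "first cousin"), (0.0, "stranger")]:
    gain = inclusive_fitness_gain(r, b=3.0, c=1.0)
    verdict = "helping favoured" if gain > 0 else "helping not favoured"
    print(f"{label:12s} r = {r:5.3f}   r*b - c = {gain:+.3f}   -> {verdict}")
```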
On reflection, I’m not sure whether I agree with this or not. I’ll edit the post.
However, the point is non-essential. What I’ve said holds true if you replace “CDT” with “weird bundle of heuristics.” The point is that it’s not UDT: a UDT agent needs other agents to be UDT (or something similar) in order to cooperate with them on things like voting. (Or at least that’s what I believe is true, and it’s what matters for this question.) And I certainly think the UDT proportion is small enough to be modeled as 0.
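To put the “small enough to be modeled as 0” point in concrete terms, here is a toy calculation (the electorate size and fractions below are arbitrary, my own illustration): under CDT your decision moves exactly one ballot no matter what anyone else does; under UDT/FDT it effectively moves every voter whose decision procedure is correlated with yours, so as that fraction goes to zero the two analyses coincide.

```python
# Toy sketch, not anyone's actual model: N and the p values are arbitrary.
# A CDT agent controls exactly one ballot; a UDT/FDT agent treats the whole
# correlated bloc (fraction p of the electorate) as moving with its decision.
# With p ~ 0 the bloc is just the agent itself, and the UDT analysis
# collapses back to the CDT one.

N = 1_000_000  # hypothetical electorate size

def ballots_moved_by_my_choice(p: float) -> float:
    """Ballots swung by my decision when a fraction p of voters is decision-correlated with me."""
    return max(1.0, p * N)  # I always control at least my own ballot

for p in [0.0, 1e-6, 0.001, 0.1]:
    print(f"correlated fraction p = {p:<8g} -> ballots moved ~ {ballots_moved_by_my_choice(p):,.0f}")
```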
I think there is a strong similarity between FDT (can’t speak to UDT/TDT) and Kantian lines of thought in ethics. (To bring this out: the Kantian thought is roughly to consider yourself simply as an instance of a rational agent, and ask “can I will that all rational agents in these circumstances do what I’m considering doing?” FDT basically says “consider all agents that implement my algorithm or something sufficiently similar. What action should all those algorithm-instances output in these circumstances?” It’s not identical, but it’s pretty close.) Lots of people have Kantian intuitions, and to the extent that they do, I think they are implementing something quite similar to FDT. Lots of people probably vote because they think something like “well, if everyone didn’t vote, that would be bad, so I’d better vote.” (Insert hedging and caveats here about how there’s a ton of debate over whether Kantianism is/should be consequentialist or not.) So they may be countable as at least partially FDT agents for purposes of FDT reasoning.
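If it helps, here is the contrast I have in mind as a sketch (the payoffs are hypothetical and chosen only so the two procedures come apart): a CDT-style voter holds everyone else’s behaviour fixed and weighs only the marginal effect of one ballot, while an FDT/Kantian-style voter asks what every instance of the same decision procedure should output and evaluates the universalized outcome.

```python
# Minimal sketch of the CDT vs. FDT/Kantian contrast described above.
# All numbers are hypothetical, chosen so the two procedures disagree.

COST_OF_VOTING = 1.0                  # personal hassle of casting a ballot
VALUE_IF_EVERYONE_VOTES = 100.0       # per-agent value of the whole bloc turning out
MARGINAL_VALUE_OF_ONE_BALLOT = 0.001  # expected value of one extra ballot, others held fixed

def cdt_vote_decision() -> str:
    # Treat everyone else's behaviour as fixed; only my single ballot matters.
    return "vote" if MARGINAL_VALUE_OF_ONE_BALLOT > COST_OF_VOTING else "abstain"

def fdt_vote_decision() -> str:
    # "What should every agent running this algorithm output?"
    # Compare the per-agent payoff if all correlated instances vote vs. all abstain.
    payoff_if_all_vote = VALUE_IF_EVERYONE_VOTES - COST_OF_VOTING
    payoff_if_all_abstain = 0.0
    return "vote" if payoff_if_all_vote > payoff_if_all_abstain else "abstain"

print("CDT-style voter:        ", cdt_vote_decision())   # -> abstain
print("FDT/Kantian-style voter:", fdt_vote_decision())   # -> vote
```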
I’ve never thought about this, but your comment is persuasive. I’ve un-endorsed my answer and moved it to the comments.