How about comparing the theories by setting them all loose in a simulated world, as in the tournaments run by Axelrod and others? A world in which they are continually encountering Omega, police who want to pin one on them, potential rescuers of hitchhikers, and so on.
In your experiment, the only difference between the FDTers and the updateless CDTers is how they view the world; specifically, how they think of themselves in relation to their environment. And yes, sure, perhaps the FDTer ends up with a larger pot of money in the end, but that is just because the algorithmic ontology is arguably more “accurate”: it tells the agent, for example, that it will make the same choice as its twin in the Twin PD (modulo brittleness issues). But this, I argue, is the level at which we should debate ontology (that is, in terms of accurate predictions and the like), not at the level of decision-theoretic performance.
See who wins.
How do you define “winning”? As mentioned in another comment, there is no “objective” sense in which one theory outperforms another, even if we are operating in the same ontology.
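To make this concrete, here is a minimal sketch of a toy Twin PD simulation, assuming standard PD payoffs and two stylized agents (the function names, agent labels, and numbers are illustrative choices, not anything from the discussion above). The payoff gap it produces arises entirely because the simulation grants the “algorithmic” assumption that the twin’s move mirrors the agent’s own.

```python
# Illustrative sketch only: a toy Twin Prisoner's Dilemma in which both copies
# of an agent run the same decision procedure, so their choices coincide by
# construction. Agent names and payoff numbers are assumptions for this sketch.

# Standard PD payoffs: (my payoff, twin's payoff) indexed by (my move, twin's move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def algorithmic_agent():
    # Reasons: "my twin runs the same procedure, so whatever I output, it outputs too."
    # Comparing the diagonal outcomes, (C, C) = 3 beats (D, D) = 1, so it cooperates.
    return "C"

def causalist_agent():
    # Reasons: "my choice cannot causally affect my twin's already-fixed choice."
    # Defection dominates against either move by the twin, so it defects.
    return "D"

def twin_pd(agent):
    # The simulation itself encodes the 'algorithmic' assumption: the twin is
    # literally the same procedure, so both players make the same move.
    move = agent()
    my_payoff, _ = PAYOFFS[(move, move)]
    return my_payoff

if __name__ == "__main__":
    print("algorithmic agent earns:", twin_pd(algorithmic_agent))  # 3
    print("causalist agent earns:  ", twin_pd(causalist_agent))    # 1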
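On this toy setup the “algorithmic” agent earns 3 to the causalist’s 1, but a causalist can reasonably object that the comparison just measures how closely each agent’s ontology matches the one hard-coded into twin_pd; that is the sense in which the tournament delivers no theory-neutral verdict on “winning.”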