But we might be able to compare ontologies themselves, and if we prefer one, or think one is more ‘correct’, then we should situate the decision theories within that map before comparing them.
How about comparing the theories by setting them all loose in a simulated world, as in the tournaments run by Axelrod and others? A world in which they are continually encountering Omega, police trying to pin a crime on them, potential rescuers of hitchhikers, and so on. See who wins.
The difficulty is in how to weight the frequency/importance of the situations they face. Unless one theory dominates (in the strict sense: it does better in at least one case and no worse in any other), which one is “best” is determined by the environment.
Of course, if you can algorithmically determine what kind of situation you’re facing, you can use a meta-decision-theory that picks the winning theory for each decision. This does dominate any simpler theory, but it reveals the flaw in this kind of comparison: if you know in advance what will happen, there’s no actual decision left to make. Real decisions involve enough unknowns that it’s impossible to understand the causality that fully.
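To make the weighting problem concrete, here is a minimal Python sketch of such a tournament. The per-encounter payoffs and the frequency weights below are invented for illustration (they are not canonical formulations of these problems); the point is only that the ranking flips with the weights:

```python
# Stylized per-encounter expected payoffs for each theory in each
# scenario type. All numbers are invented for illustration.
PAYOFFS = {
    "counterfactual_mugging": {"FDT": 4950, "CDT": 0},
    "procreation":            {"FDT": -100, "CDT": 0},
    "twin_pd":                {"FDT": 300,  "CDT": 100},
}

def tournament_score(theory, weights):
    """Expected per-round payoff of `theory` given scenario frequencies."""
    return sum(w * PAYOFFS[scenario][theory]
               for scenario, w in weights.items())

# Two environments that differ only in how often each scenario comes up.
mugging_heavy     = {"counterfactual_mugging": 0.8, "procreation": 0.1, "twin_pd": 0.1}
procreation_heavy = {"counterfactual_mugging": 0.0, "procreation": 0.9, "twin_pd": 0.1}

for name, weights in [("mugging-heavy", mugging_heavy),
                      ("procreation-heavy", procreation_heavy)]:
    # The winner flips between environments: absent strict dominance,
    # "best" depends on the scenario weights.
    print(name, {t: tournament_score(t, weights) for t in ("FDT", "CDT")})
```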
The difficulty is in how to weight the frequency/importance of the situations they face.
I agree with this. On the one hand, you could have a bunch of Procreation problems, which would leave the FDTer with a smaller pot of money; on the other, you could of course have a lot of Counterfactual Muggings, in which case the FDTer would come out on top, at least in the limit.
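For concreteness, under the stakes commonly used for Counterfactual Mugging (pay $100 on tails; receive $10,000 on heads iff Omega predicts you would have paid on tails), the payer’s per-encounter expected value is 0.5·(−$100) + 0.5·$10,000 = $4,950, versus $0 for the refuser. A quick simulation sketch, with these assumed stakes and a perfect predictor:

```python
import random

def average_payout(pays_up, n=100_000, seed=0):
    """Average per-encounter payout over n repeated Counterfactual Muggings."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        if rng.random() < 0.5:   # tails: Omega asks for the $100
            total -= 100 if pays_up else 0
        else:                    # heads: Omega pays out iff it (perfectly)
            total += 10_000 if pays_up else 0  # predicts a payer
    return total / n

print(average_payout(True))   # payer (FDT-style): converges to ~4950
print(average_payout(False))  # refuser: 0
```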
How about comparing the theories by setting them all loose in a simulated world, as in the tournaments run by Axelrod and others? A world in which they are continually encountering Omega, police trying to pin a crime on them, potential rescuers of hitchhikers, and so on.
In your experiment, the only difference between the FDTers and the updateless CDTers is how they view the world; specifically, how they think of themselves in relation to their environment. And yes, sure, perhaps the FDTer will end up with a larger pot of money in the end, but this is just because the algorithmic ontology is arguably more “accurate”, in that it e.g. tells the agent that it will make the same choice as its twin in the Twin PD (modulo brittleness issues). But this, I argue, is the level at which we should have the debate about ontology (that is, about accurate predictions and the like), not the level of decision-theoretic performance.
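As a sketch of what that “same choice as its twin” claim buys the agent, here is the Twin PD under the assumption that identical algorithms output identical choices; the payoff numbers are the standard illustrative ones, not anything from the post:

```python
# Usual Prisoner's Dilemma payoffs (T > R > P > S); numbers illustrative.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Under the algorithmic reading, my twin runs my algorithm, so the only
# outcomes I can bring about are the diagonal ones.
reachable = {action: PD[(action, action)][0] for action in ("C", "D")}
print(reachable)  # {'C': 3, 'D': 1}: cooperating wins on the diagonal
```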
See who wins.
How do you define “winning”? As mentioned in another comment, there is no “objective” sense in which one theory outperforms another, even if we are operating in the same ontology.