But I don’t think this makes sense at all: we can easily make up an (arbitrarily absurd) ontology that yields great decision-theoretic results from the perspective of that ontology, e.g. the genetic one with respect to the Twin Prisoner’s Dilemma.
I don’t understand this part. If you follow the “genetic agent ontology”, and you grew up with a twin, and they (predictably) sometimes make a different decision than you do, then you messed up. There is an objective sense in which the “genetic agent ontology” is incorrect: de facto you did not make the same decision, so treating the two of you as a single agent is the wrong thing to do.
I kind of buy that a lot of the decision-theory discourse boils down to “what things can you consider as the same agent, and to what degree?”, but that feels like a thing that can be empirically verified and on which performance can be measured. CDT in the classical Newcomb’s problem is delusional in thinking it definitely has no copies of itself, and the genetic agent ontology is delusional because it pretends its twin is a copy when it isn’t. Both of these seem like valid arguments that allow for comparison.
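As a minimal sketch of what that empirical check could look like (the choice data and everything else below are made up for illustration):

```python
# A made-up record of past choices by me and my twin in matched situations.
my_choices   = ["C", "C", "D", "C", "D", "C", "C", "D"]
twin_choices = ["C", "C", "D", "D", "D", "C", "C", "C"]

# Fraction of situations in which we de facto decided identically.
agreement = sum(a == b for a, b in zip(my_choices, twin_choices)) / len(my_choices)
print(f"decision agreement: {agreement:.0%}")

# The genetic ontology treats us as one agent, i.e. it predicts 100%
# agreement; a rate below that is the objective sense in which the
# ontology is incorrect.
if agreement < 1.0:
    print("treating the two of us as a single agent is empirically wrong")
```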
If you follow the “genetic agent ontology”, and you grew up with a twin, and they (predictably) sometimes make a different decision than you do, then you messed up. There is an objective sense in which the “genetic agent ontology” is incorrect: de facto you did not make the same decision, so treating the two of you as a single agent is the wrong thing to do.
Yes, I agree. This is precisely my point; it’s a bad ontology. In the paragraph you quoted, I am not arguing against the algorithmic ontology (and obviously not for the “genetic ontology”), but against the claim that decision-theoretic performance is a reason to prefer one ontology over another. (The genetic-ontology analogy is supposed to be a reductio of that claim.) And I think the authors of the FDT papers are implicitly making this claim by, e.g., comparing FDT to CDT in the Twin PD. Perhaps I should have made this clearer.
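To make the reductio concrete, here is a toy calculation (standard PD payoffs; the mirroring probabilities are illustrative assumptions, not from the papers). By its own lights, i.e. under its own assumption that the twin mirrors you perfectly, the genetic ontology recommends cooperating and “wins”; plugging the actual mirroring rate into the same formula can reverse the verdict:

```python
# Standard PD payoffs for (my action, twin's action); higher is better for me.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_value(my_action: str, p_mirror: float) -> float:
    """My EV if the twin mirrors my action with probability p_mirror."""
    other = "D" if my_action == "C" else "C"
    return (p_mirror * PAYOFF[(my_action, my_action)]
            + (1 - p_mirror) * PAYOFF[(my_action, other)])

# Genetic ontology: the twin just IS me, so p_mirror = 1 and cooperating "wins".
print(expected_value("C", 1.0), expected_value("D", 1.0))  # 3.0 1.0

# If the twin in fact mirrors me only 60% of the time, the same formula
# reverses the verdict; the ontology's great performance was an artifact
# of its own (false) assumption.
print(expected_value("C", 0.6), expected_value("D", 0.6))  # 1.8 2.6
```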
on which performance can be measured
Yes, I think you can measure performance, but since every decision theory merely corresponds to a stipulation of what (expected) value is, there is no “objective” way of doing so. See “The lack of performance metrics for CDT versus EDT, etc.” by Caspar Oesterheld for more on this.
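To illustrate what I take Oesterheld’s point to be, here is a toy Newcomb calculation (standard payoffs; the 0.99 predictor accuracy is an illustrative assumption). Each theory’s own notion of expected value rates its own recommendation best, and there is no theory-neutral column in the table:

```python
# Standard Newcomb payoffs: $1M in the opaque box, $1k in the transparent one.
M, K = 1_000_000, 1_000
ACC = 0.99  # assumed predictor accuracy, purely illustrative

def edt_ev(action: str) -> float:
    """Evidential EV: my action is evidence about what the predictor put in the box."""
    if action == "one-box":
        return ACC * M            # opaque box full iff predicted one-boxing
    return ACC * K + (1 - ACC) * (M + K)

def cdt_ev(action: str, p_full: float) -> float:
    """Causal EV: the box contents are already fixed, whatever I do now."""
    base = p_full * M
    return base if action == "one-box" else base + K

# EDT's own metric favors one-boxing...
print("EDT:", edt_ev("one-box"), ">", edt_ev("two-box"))
# ...while CDT's own metric favors two-boxing for ANY fixed belief p_full.
for p in (0.0, 0.5, 1.0):
    print(f"CDT (p_full={p}):", cdt_ev("one-box", p), "<", cdt_ev("two-box", p))
```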
CDT in the classical Newcomb’s problem is delusional in thinking it definitely has no copies of itself
(The CDTer could recognize that they have a literal copy inside Omega’s brain, but might just not care about that, since the copy is causally isolated from them. So I would not say they are “delusional”.)