I don’t understand. It seems that when people appeal to the algorithmic ontology to motivate interesting decision-theoretic claims — like, say, “you should choose to one-box in Transparent Newcomb” — they’re not just taking a more general perspective. They’re making a substantive claim that it’s sensible to regard yourself as an algorithm, over and above your particular instantiation in concrete reality.
Another way to say it: the substantive part of the decision-theory claim that favors the algorithmic ontology is that your identity is closer to an isomorphism class/equivalence class of programs than to any concrete instantiation of a program, akin to generic programming (see the sketch after these links):
https://en.wikipedia.org/wiki/Generic_programming
https://en.wikipedia.org/wiki/Parametric_polymorphism
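To make the analogy concrete, here's a minimal Python sketch (the function names and the test-based equality check are purely illustrative assumptions of mine): two structurally different implementations of the same algorithm, treated as "the same program" because they sit in the same extensional equivalence class, much as a generic function is one function across all its type instantiations.

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")  # parametric polymorphism: the algorithm is specified
                  # independently of any concrete element type

def reverse_recursive(xs: list[T]) -> list[T]:
    """One concrete instantiation of 'reverse'."""
    return xs if len(xs) <= 1 else reverse_recursive(xs[1:]) + xs[:1]

def reverse_iterative(xs: list[T]) -> list[T]:
    """A physically different instantiation of the same algorithm-as-spec."""
    out: list[T] = []
    for x in xs:
        out.insert(0, x)
    return out

def extensionally_equal(f: Callable[[list[T]], list[T]],
                        g: Callable[[list[T]], list[T]],
                        tests: Iterable[list[T]]) -> bool:
    """Crude stand-in for 'same member of the equivalence class':
    agreement on every sampled input."""
    return all(f(list(t)) == g(list(t)) for t in tests)

# Under the algorithmic ontology, these two count as one program:
assert extensionally_equal(reverse_recursive, reverse_iterative,
                           [[], [1], [1, 2, 3], ["a", "b"]])
```

The claim about identity is the analogous move one level up: "you" are the equivalence class, not any one of its concrete members.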
An argument for that: suppose the universe is infinite and our patch of the affectable universe isn't special at the large scale in how it clumps atoms into bigger structures (which is very probable). Then if you had arbitrarily good faster-than-light travel, you'd eventually meet copies of yourself. Conditional on you existing, there's only a finite number of possibilities for your identity, due to finite memory; that number is combinatorially large but not infinite, so in an infinite universe some states have to repeat. Your identity therefore isn't unique under the instance-focused worldview (there are infinitely many instances, not one), but from the algorithmic perspective they're all one you, and thus you can trade with yourself.
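The pigeonhole step can be made concrete with a toy model (a sketch with made-up parameters, not a physical estimate): if an identity is a configuration drawn from a finite space, any collection of instances larger than that space is forced to contain exact repeats.

```python
import random

# Toy model: an 'identity' is a configuration of n two-state sites.
# The real combinatorics are astronomically larger, but still finite.
n_sites = 8
n_configs = 2 ** n_sites  # 256 possible identities, finite

# Draw more instances than there are possible configurations:
instances = [random.randrange(n_configs) for _ in range(n_configs + 1)]

# Pigeonhole: at least two instances share a configuration exactly,
# no matter how the draws come out.
assert len(set(instances)) < len(instances)
```

With infinitely many instances, the same counting forces some configuration to recur infinitely often.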
The more practical argument is that a lot of the algorithmic ontology work was focused on future people who are closer to today's AIs (but better): they can merge/copy to the extent that they have compute, and, importantly, they're far more parallel, in the sense that they can meaningfully have multiple experiences at once, as current AI probably does today. A lot of the difficulty in translating this to the human case is that humans are far more serial than AI.
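As a toy illustration of the merge/copy picture (a purely hypothetical model of mine, not how any real system works): an agent whose state can be duplicated, run through divergent experiences in parallel, and pooled back together.

```python
from copy import deepcopy

def fork(agent: dict, n: int) -> list[dict]:
    """Copying is cheap: limited only by available compute/storage."""
    return [deepcopy(agent) for _ in range(n)]

def merge(copies: list[dict]) -> dict:
    """Merge by pooling memories; real merging would be far subtler."""
    return {"memories": [m for c in copies for m in c["memories"]]}

agent = {"memories": ["base experience"]}
copies = fork(agent, 3)
for i, c in enumerate(copies):
    c["memories"].append(f"parallel experience {i}")  # divergent branches
agent = merge(copies)  # one agent again, carrying all three histories
```

A serial human has no analogue of this fork/merge loop, which is why the translation to the human case is awkward.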
But a copy of me =/= me. I don’t see how you establish this equivalence without assuming the algorithmic ontology in the first place.
Okay, my point here was that the copies would have not only the same algorithm but also the same physical structure, arbitrarily finely. And I don't need to assume the algorithmic ontology; I only need to remember that there's only a finite number of configurations of atoms that end up as human bodies, meaning the number of distinct identities is upper-bounded by a finite number. The search space is combinatorially large, but not infinitely large, which ensures that in an infinite universe some states must be repeated exactly.
That's why you'd eventually meet yourself: not because of the algorithmic ontology, but because there isn't an infinite number of possibilities for your identity.
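Stated as a bare counting argument, with $N$ standing in for whatever finite bound the physics actually provides:

$$
\#\{\text{distinct identities}\} \le N < \infty,\qquad
\#\{\text{instances}\} = \infty
\;\Longrightarrow\;
\text{some identity is instantiated infinitely often.}
$$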
I understand; I'm just rejecting the premise that "same physical structure" implies identity to me. (Perhaps confusingly, despite defending the "physicalist ontology" in this thread, in contrast to the algorithmic ontology, I reject physicalism in the metaphysical sense.)
This also seems tangential, though, because the substantive appeals to the algorithmic ontology that get made in the decision theory context aren’t about physically instantiated copies. They’re about non-physically-instantiated copies of your algorithm. I unfortunately don’t know of a reference for this off the top of my head, but it has come up in some personal communications FWIW.