The post mentioned some problems/issues with this approach that remain to be resolved. Here are some additional ones.
My brain has preferences between probability distributions built into it.
Your brain is built to intuitively grapple with a distribution over future experiences, like your example “I have a 50% chance of remaining me, and a 50% chance of becoming my copy.” Unfortunately, UDASSA doesn’t give you that. It only gives you a distribution over observer-moments in an absolute sense (hence the “A” in ASSA), and there is no good way to convert such a distribution into a distribution over future experiences. (Suppose you’re copied at time 0, and then the “copy” is copied again at time 1. Under UDASSA this is entirely unproblematic, but it doesn’t tell you whether you should anticipate being the “original” at time 2 with probability 1/2 or 1/3.) The “pure” UDASSA position would be that there is no such thing as “remaining me” or “becoming my copy”, and you just have to make your choices using the distribution over observer-moments, without “linking” the observer-moments together in any way.
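To make the 1/2-versus-1/3 ambiguity concrete, here is a minimal sketch (purely illustrative, not part of UDASSA) contrasting two candidate rules for turning an absolute measure over the three time-2 observer-moments into anticipated probabilities; the names and numbers are assumptions for the example only:

```python
# Purely illustrative: two candidate rules for converting an absolute measure
# over time-2 observer-moments into "anticipation" probabilities.
# Copy tree: at t=0 "me" splits into original O and copy C;
# at t=1 the copy C is copied again, yielding C1 and C2.

# Rule A: split anticipation evenly at each copying event (branch-counting).
p_branch = {"O": 1/2, "C1": 1/4, "C2": 1/4}

# Rule B: weight each time-2 observer-moment equally (moment-counting),
# as a uniform absolute measure over the three continuations would suggest.
p_moment = {"O": 1/3, "C1": 1/3, "C2": 1/3}

# UDASSA itself endorses neither rule -- that is the point of the example.
print("anticipate being the original:", p_branch["O"], "vs", p_moment["O"])
```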
What I want is a probability distribution over all possible experiences (or “observer-moments”), so that I can use my existing preferences to make intelligent decisions in a universe with more than one observer I care about.
Do you consider this probability distribution an objective measure of how much each observer-moment exists? Or is it just a (possibly approximate) measure of how much you care about each observer-moment? I’m still going back and forth on these two positions myself. See What Are Probabilities, Anyway? where I go into this distinction a bit more. (The former is what I usually mean when I say UDASSA. Perhaps we could call the latter UDT-UMC for Updateless Decision Theory w/ Universal Measure of Care, unless someone has a better name for it. :)
UDASSA implies that simulations on the 2 atom thick computer count for twice as much as simulations on the 1 atom thick computer, because they are easier to specify.
Does this not seem counterintuitive to you? Suppose you find out you are living in a simulation on a 2 atom thick computer, and the simulation-keeper gives you a choice of (a) moving to a 1 atom thick computer, or (b) flipping a coin and then either shutting down the simulation or not, depending on the result. Would you really be indifferent between the two? Under UDT-UMC, we can say that how much we care about an observer-moment is related to its “probability” under UD, but not necessarily exactly equal to it, and possibly influenced by other factors. If we accept the complexity of value thesis, then there is no reason why the measure of care has to be maximally simple, right? (This post is also related.)
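For reference, the “probability under UD” at issue is the universal distribution, under which an observer-moment’s weight falls off exponentially in the length of the shortest program that outputs (locates) it. The mapping from “one atom thinner” to “one extra bit, hence half the weight” is only an illustrative assumption here, not something the formula itself fixes:

$$m(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|} \qquad\Longrightarrow\qquad \frac{m(x_{\text{2-atom}})}{m(x_{\text{1-atom}})} \;\approx\; 2^{\,\ell_{1}-\ell_{2}},$$

where $U$ is the reference universal machine, $\ell_i$ is (roughly, when the shortest locating programs dominate) the length of the shortest program locating the simulation on the $i$-atom-thick computer, and the quoted factor of two corresponds to $\ell_1 - \ell_2 = 1$ bit.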