I skimmed the sequence looking for “How come I can move without bumping into a copy of myself?”.
Rephrasing: When I close my eyes and set aside which observer I am, the SIA prior places me in a densely populated environment, and then I open my eyes and find that my timeline doesn’t look outcome-pumped for population density.
In fact, we find ourselves on the spawning planet of our civilization. Since the Fermi paradox is dissolved, a randomly selected observer should have been much later; instead we find ourselves near the fulcrum of the plot. Therefore we should update towards UDT: The anthropic assumption that we are more likely to find ourselves in the positions that matter. Oh look, I accidentally solved the simulation hypothesis and Boltzmann brains and the dreaming butterfly. Curious, this even predicted that we find ourselves with above-average intelligence and working on AI math.
One may also confuse the density of observers with the number of observers. You are more likely to be in the region with the highest number of observers, but not necessarily in the one with the highest density. A region can end up with the biggest number of observers not because it has a higher density, but because it is larger.
For example, Tokyo is one of the densest cities, but far more people live in rural India than in Tokyo.
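To make that concrete with made-up round numbers (purely illustrative, not real statistics):

$$6{,}000\,\text{people/km}^2 \times 2{,}200\,\text{km}^2 \approx 1.3\times 10^{7} \;\ll\; 350\,\text{people/km}^2 \times 2.4\times 10^{6}\,\text{km}^2 \approx 8.4\times 10^{8},$$

so the lower-density region wins on sheer area.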
SIA can be made to deal in densities, as one must when infinities are involved.
Though absolute SIA also favors the hypothesis that I find myself in a post-singularity megacivilization, to the point that our observations rule SIA out.
You are most likely in the post-singularity civilization, but inside a simulation which it created. So there is no SIA refutation here.
I didn’t get what you mean here.
I see—SIA can be finagled to produce the “we find ourselves at history’s pivot” we observe, by rationalizing that something somewhere is apparently so desperate to accurately predict what the people there would do that most of the anthropic mass ends up there. I admit this has a simplicity and a ring of history to it.
Re densities: if two universes are infinite, both have infinite observers, and they only differ in whether every observer sees an observable universe with 10^10 other observers or with 10^20 other observers, then we could, if we wanted, call one more likely to find oneself in than the other.
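One hedged way to cash that out (my notation, nothing standard): write ρ for the number of observers per observable-universe volume, and weight hypotheses by ρ rather than by total observer count. Then, for the two universes above,

$$\frac{P(U_{B}\mid \text{I exist})}{P(U_{A}\mid \text{I exist})} \;=\; \frac{P(U_{B})}{P(U_{A})}\cdot\frac{\rho_{B}}{\rho_{A}} \;=\; \frac{P(U_{B})}{P(U_{A})}\cdot\frac{10^{20}}{10^{10}},$$

where U_A is the 10^10-observer universe and U_B the 10^20 one, so with comparable priors you are overwhelmingly likely to find yourself in the denser universe even though both contain infinitely many observers.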
Yes, indeed, “measure monsters” could fight to get the biggest share of measure over desirable observers, thus effectively controlling them. Here I assume that “share of measure” is equal to the probability of finding oneself in that share under SIA. An example of such a “measure monster” may be a Friendly AI which wants to prevent most people from ending up in the hands of an Evil AI, so it creates as many copies of people as it can.
Alternatively, a very strong and universal Great Filter / Doomsday argument is true, and Earth is the biggest possible concentration of observers in the universe and will go extinct soon; larger civilizations are extremely rare.
But I think that what you want to say is that the SIA prediction that we are already inside a “measure monster” is false, as we should then observe many more observers, maybe a whole galaxy densely packed with them.
Your last paragraph is what I meant by “find myself in a post-singularity megacivilization”.
Your first paragraph misunderstands my “SIA can be finagled”. The ring of history comes not from “AIs deliberately place a lot of measure on people to compel them”, but from “AIs incidentally place a lot of measure on people in the process of predicting them”. Predicting what we would do is very important in order to correctly estimate the probabilities that any particular AI wins the future, which is a natural Schelling point to set the bargaining power of each acausal trader.
Agreed that AI will run a lot of past simulations to predict possible variants of world history, and even to try to solve the Fermi paradox and/or predict the behaviour of alien AIs. But this could be outweighed by an FAI which tries to get most of the measure into its own hands, for example to cure past suffering via indexical uncertainty for any possible mind.
I am pretty sure that UDT doesn’t say we should expect to be important, but maybe you should elaborate just in case :P
Infinite copies of you may each spend 1 point. If #1 does, everyone gains 2 points. Equipped with the Self-Importance Assumption that we are >50% likely to be #1, CDT acts like UDT.
Suppose we modify the game so that you can now spend 1 point on the “if you’re #1, everyone gets 2 points” gamble, and also can choose to spend 1 point on the “if you’re #1, you get 2 points” gamble. UDT still is fine, self-important CDT loses all its gains. I feel like the moral of the story is “just don’t be CDT.”
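For concreteness, here is a minimal sketch of how I’m scoring the two gambles; the finite N standing in for “infinitely many copies” and the specific credence p = 0.6 are my own stand-ins, not anything canonical:

```python
# Toy model: N copies of "you"; copy #1 is the one whose spending triggers payouts.
# Two gambles, each costing 1 point to enter:
#   gamble A: if #1 enters, EVERY copy gains 2 points
#   gamble B: if #1 enters, only #1 gains 2 points
# All copies run the same decision procedure, so they all pick the same policy.

N = 10**6  # large finite stand-in for "infinitely many copies"

def avg_payoff_per_copy(take_a: bool, take_b: bool) -> float:
    """Average points per copy when every copy follows the same (take_a, take_b) policy."""
    total = 0.0
    if take_a:
        total += -1 * N   # every copy pays the entry cost
        total += 2 * N    # #1 entered, so every copy gains 2
    if take_b:
        total += -1 * N   # every copy pays the entry cost
        total += 2        # only #1 gains 2
    return total / N

def cdt_enters(gain_to_me_if_first: float, p_first: float) -> bool:
    """CDT: my own spending only pays off, causally, in the worlds where I am #1."""
    return -1 + p_first * gain_to_me_if_first > 0

# UDT: pick the policy with the best average payoff per copy.
udt_policy = max(
    [(a, b) for a in (False, True) for b in (False, True)],
    key=lambda policy: avg_payoff_per_copy(*policy),
)

# "Self-important" CDT: credence > 50% of being #1 (0.6 here, arbitrarily).
p = 0.6
cdt_policy = (cdt_enters(2, p),   # gamble A looks worth it: -1 + 0.6 * 2 > 0
              cdt_enters(2, p))   # ...but so does gamble B, by the same arithmetic

print("UDT policy:", udt_policy, "avg/copy:", avg_payoff_per_copy(*udt_policy))  # (True, False), 1.0
print("CDT policy:", cdt_policy, "avg/copy:", avg_payoff_per_copy(*cdt_policy))  # (True, True), ~0.0
```

The point is just that the same “-1 + p·2 > 0” arithmetic that made self-important CDT match UDT on the first gamble also commits it to the second, where almost every copy is simply out a point.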
You’re right! My own search for counterarguments only came as far as “as the amount of points everyone gains decreases to 1, the probability of #1 required to match UDT rises to 1”—I didn’t manage to leave the parameter space I’d constructed, to prove myself wrong.
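Spelling that parameter space out explicitly (my own toy parameterization: spending costs 1 point, everyone gains G points if #1 spends, and p is the credence of being #1):

$$\text{CDT spends} \iff -1 + pG > 0 \iff p > \tfrac{1}{G}, \qquad \text{UDT spends} \iff G - 1 > 0,$$

so the credence CDT needs in order to agree with UDT is 1/G: 1/2 at the original G = 2, rising to 1 as G decreases toward 1.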
And yet. Yes, for any probabilities, CDT will take the one action iff it takes the other, while UDT has some probability distribution (used to weight each copy’s utility) such that it takes one action but not the other. Does every game have a probability distribution where CDT and UDT agree? Can we naturally construct a sane such anthropic assumption? The utility function isn’t up for grabs, but this still seems like a hint.