SIA can be made to deal in densities, as one must when infinities are involved.
Though absolute SIA also favors the hypothesis that I find myself in a post-singularity megacivilization, to the point that our observations rule SIA out.
You are most likely in the post-singularity civilization, but inside a simulation which it created. So there is no SIA refutation here.
I didn’t get what you mean here.
I see—SIA can be finagled to produce the “we find ourselves at history’s pivot” we observe, by rationalizing that something somewhere is apparently desperate enough to accurately predict what the people there would do that most of the anthropic mass ends up there. I admit this has a simplicity and a ring of history to it.
re densities: If two universes are infinite, and both have infinite observers, and they differ only in whether every observer sees an observable universe with 10^10 other observers or one with 10^20 other observers, then we could, if we wanted, call one more likely to find oneself in than the other.
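To make the density version concrete, here is the arithmetic under a density-weighted SIA, assuming equal priors on the two universes (the priors are my illustrative assumption; the densities are from the example above):

```latex
% Density-weighted SIA: posterior odds scale with the ratio of
% observer densities, here 10^20 vs 10^10 (equal priors assumed).
\frac{P(U_{20} \mid \text{I exist})}{P(U_{10} \mid \text{I exist})}
  = \frac{P(U_{20})}{P(U_{10})} \cdot \frac{10^{20}}{10^{10}}
  = 10^{10}
```

So with equal priors, the denser universe is favored at 10^10 : 1, even though both contain infinitely many observers in total.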
Yes, indeed, “measure monsters” could fight to get the biggest share of measure over desirable observers, thus effectively controlling them. Here I assume that “share of measure” equals the probability of finding oneself in that share under SIA. An example of such a “measure monster” may be a Friendly AI which wants to prevent most people from ending up in the hands of an Evil AI, so it creates as many copies of people as it can.
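A minimal way to write down that assumption (the notation m_i is mine, not from the thread): if agent i controls measure m_i over copies of a given observer, then

```latex
% Share of measure read as SIA probability (assumption stated above):
P(\text{I am in agent } i\text{'s copies}) \;=\; \frac{m_i}{\sum_j m_j}
```

so the Friendly AI's strategy amounts to driving its own m_i up until this ratio approaches 1.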
Alternatively, a very strong and universal Great Filter Doomsday argument is true, Earth is the biggest possible concentration of observers in the universe and will go extinct soon, and larger civilizations are extremely rare.

But I think that you want to say that the SIA prediction that we are already in a “measure monster” is false, as we should observe many more observers, maybe a whole Galaxy densely packed with them.
Your last paragraph is what I meant by “find myself in a post-singularity megacivilization”.
Your first paragraph misunderstands my “SIA can be finagled”. The ring of history comes not from “AIs deliberately place a lot of measure on people to compel them”, but from “AIs incidentally place a lot of measure on people in the process of predicting them”. Predicting what we would do is very important for correctly estimating the probability that any particular AI wins the future, which is a natural Schelling point for setting the bargaining power of each acausal trader.
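A sketch of that Schelling point (w_i is my illustrative notation): each acausal trader i gets bargaining weight equal to its estimated probability of winning,

```latex
% Bargaining weights at the Schelling point (illustrative notation):
w_i \;=\; P(\mathrm{AI}_i \text{ wins the future}), \qquad \sum_i w_i = 1
```

and estimating those probabilities accurately requires simulating, and hence placing measure on, the people whose actions decide who wins.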
Agreed that AI will run a lot of past simulations to predict possible variants of world history, and even to try to solve the Fermi paradox and/or predict the behaviour of alien AIs. But this could be outweighed by an FAI which tries to get most of the measure into its own hands, for example to cure past sufferings via indexical uncertainty for any possible mind.
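The “cure past sufferings via indexical uncertainty” move has a simple quantitative core, assuming (my simplification) that the original and its copies get equal measure: if the FAI runs N rescued continuations of a suffering observer-moment alongside the one original, then

```latex
% One unrescued original plus N rescued copies, equal measure each:
P(\text{I am the unrescued original}) \;=\; \frac{1}{N+1}
\;\longrightarrow\; 0 \quad \text{as } N \to \infty
```

so with enough copies, almost all of the mind's measure is in the cured continuations.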