1- In this post I don’t really discuss “non-me-simulations”. I try to compare the probability of a simulation containing only one full-time conscious being (a me-simulation) with what Bostrom calls ancestor-simulations, i.e. full-scale simulations in which one could replay “the entire mental history of humankind”.
For any simulation consisting of N individuals (e.g. N = 7 billion), there could in principle exist simulations where 0, 1, 2, … or N of those individuals are conscious.
When the number k of conscious individuals satisfies k << N, I call the simulation selective.
I think your comment points out the following apparent conjunction fallacy: I am trying to estimate the probability of the event “simulation with only one conscious individual” instead of the more probable event “simulation with a limited number k << N of conscious individuals” (first problem).
The point I was trying to make is the following: 1) ancestor-simulations (i.e. full-scale, computationally intensive simulations run to understand one’s ancestors’ history) would be motivated by growing evidence of a Great Filter behind the posthuman civilization. 2) the need for me-simulations (which would be the most probable type of selective simulation, because they only need one player, e.g. a guy in his spaceship) does not appear to rely on the existence of a Great Filter behind the posthuman civilization. They could be cost-efficient single-consciousness simulations played for fun, or ones that prisoners are condemned to.
I guess the second problem with my argument for the probability of me-simulations is that I don’t give any probability of being in a me-simulation, whereas in the original simulation argument the strength of Bostrom’s reasoning is that whenever an ancestor-simulation is generated, 100 billion conscious lives are created, which greatly increases the probability of being in such a simulation. Here, I could only estimate the cost-effectiveness of me-simulations in comparison with ancestor-simulations.
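To illustrate that observer-counting point, here is a rough back-of-the-envelope sketch; the simulation counts are made-up placeholders of mine, not estimates from the post:

```python
# Back-of-the-envelope sketch of the observer-counting point above.
# The simulation counts are made-up placeholders, not estimates.

ancestor_sims = 10                 # hypothetical number of ancestor-simulations
lives_per_ancestor_sim = 100e9     # conscious lives created per ancestor-simulation
me_sims = 1_000_000                # hypothetical number of me-simulations
lives_per_me_sim = 1               # one conscious observer per me-simulation

ancestor_observers = ancestor_sims * lives_per_ancestor_sim
me_observers = me_sims * lives_per_me_sim

# Conditional on being a simulated conscious observer, the chance of being in
# a me-simulation stays tiny unless me-simulations vastly outnumber
# ancestor-simulations.
p_me = me_observers / (ancestor_observers + me_observers)
print(f"P(me-simulation | simulated conscious observer) ~ {p_me:.2e}")
```

With these placeholder numbers the probability comes out around 10^-6, which is exactly why a count (or rate) of me-simulations would be needed to turn my cost-effectiveness comparison into an actual probability.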
2- I think you are assuming I believe in Utilitarianism. Yes, I agree that if I am a Utilitarian I may want to act altruistically, even given only a very small non-zero probability of being in a non-me-simulation or in reality.
I already answered this question yesterday in the effective egoist post (cf. my comment to Ikaxas), and I now realize that my answer was wrong because I didn’t assume that other people could be full-time conscious.
My argument (supposing, for the sake of argument, that I am a Utilitarian) was essentially that if I had $10 in my pocket and wanted to buy myself an ice cream (utility of 10 for me, let’s say), I would need to provide a utility of 10 × 1,000 to someone who is full-time conscious before considering giving them the ice cream (their utility would rise to 10,000, for instance). In the absence of some utility monster, I believe this case to be extremely unlikely, and I would end up eating the ice creams all by myself.
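To make that arithmetic explicit, here is a minimal sketch; reading the 1,000× factor as a flat discount applied to the other person’s utility is my own gloss, and the numbers are just the ones from the example:

```python
# Minimal sketch of the ice-cream arithmetic above.
# Treating the 1,000x factor as a flat discount on the other person's
# utility is a gloss; the numbers are the ones from the example.

my_utility = 10            # utility I get from eating the ice cream myself
discount_factor = 1_000    # factor by which the other's utility is discounted

# Utility the other (full-time conscious) person would need to gain
# before giving beats keeping:
required_other_utility = my_utility * discount_factor
print(required_other_utility)  # 10000 -> absent a utility monster, I keep it
```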
[copied from yesterday’s answer to Ikaxas] In practice, I don’t deeply share the Utilitarian view. To describe it shortly, I believe I am a Solipsist who values the perception of complexity. So I value my own survival, because I may not have any proof of any kind of the complexity of the Universe if I cease to exist, and I also value the survival of Humanity (because I believe humans are amazingly complex creatures), but I don’t value the positive subjective perceptions of other conscious human beings. I value my own positive subjective perceptions because they serve my utility function, which is to maximize my perception of complexity.
Anyway, I don’t want to enter the debate over the highly controversial Effective Egoism inside what I wanted to be a more scientific probability-estimation post about a particular kind of simulation.
Thank you for your comment. I hope I answered you well. Feel free to ask for any other clarification or to point out other fallacies in my reasoning.
Thank you for reading.