I had a very similar idea; I've even coined a name for it: the “vestibules of paradise” hypothesis. I messaged the author and he advised me to publish my thoughts here, so I am copying them below. Please excuse my English and my unsophisticated language (in part because of my limited vocabulary and in part because of the informal nature of the conversation).
“How do you think ‘paradise’ would look? Wouldn't it be computationally profitable to ultimately fuse the simulated observer moments into one?” We agreed that some individuality would be desirable, yet if saving more beings matters far more to the SI, we can reach somewhat different conclusions:
“It would be great, I think, yet I am tempted to give some credence to a hypothesis where it would be necessary to minimize the number of observer moments in paradise in order to save more minds from suffering, for example if evil SIs were more common, or if there were very many observer moments in need of redemption. In such a case it would be preferable to eventually fuse all saved beings into one state, possibly the state of the smallest possible suffering/highest possible well-being that uses as little computing power as possible. That state would have to be simulated in a great number of copies, so if there were only one such state it would be simpler, I guess. One can imagine that state as ‘pure consciousness’, like nirvana, or maybe rather something similar to deep sleep, with a minimal amount of consciousness. Do you think the complexity of an experience affects the probability of being the one who has it? For example, if one observer moment has two possible successors of equal objective probability, yet one of them is more complex, more conscious, what happens to the subjective probabilities?”
(That last question is not strictly bound to the topic, although I will use it to propose another form of “saving” from suffering; very speculative, I think, but worth considering.)
More thoughts: “I think that if there existed a set of universal best possible experiences, it would be easier to maintain continuity when generations of universes with benevolent SIs begin to die. If it were standardized and predictable to other SIs which observer-pattern should be universal, they could all seek to simulate that exact pattern or set of patterns; additionally, the narrower the set, the more efficiently a greater number of others could be saved. Wouldn't it be the case that in older universes, when star formation ends and only black holes remain, less and less energy would be available to perform computations, resulting in a stricter energy economy?” (Here one can have some interesting thoughts about the Landauer limit, but I think it is a valid thought anyway.)
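To put a rough number on that energy economy: the Landauer limit gives the minimum energy needed to erase one bit, E = k_B·T·ln 2, so colder late-universe environments make each irreversible computation cheaper. A minimal back-of-the-envelope sketch (the temperatures are illustrative assumptions, not predictions):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_energy_per_bit(temperature_kelvin: float) -> float:
    """Minimum energy (J) to erase one bit at the given temperature."""
    return K_B * temperature_kelvin * math.log(2)

# Illustrative temperatures (assumptions for the sketch):
# today's CMB (~2.7 K) and a hypothetical far-future, colder background.
for label, temp in [("CMB today (2.7 K)", 2.7), ("far future (1e-3 K)", 1e-3)]:
    print(f"{label}: {landauer_energy_per_bit(temp):.3e} J per bit erased")
```

The colder the background, the more bit erasures a fixed energy budget buys, which is one way the “strict energy economy” intuition can be made concrete.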
“I am wondering what it would be like if the price of salvation were the erasure of your ‘identity’. At first glance it does not look appealing, I think, although it appears logical to me regardless of my own preferences.”
If we assume that a benevolent SI cares only about maximizing the chance that every suffering observer-moment is saved, and not about identity (for example, fusing all observer moments into one, I think a rather conscious and contented one, maybe a form of enlightened mind), we can think of salvation in the form described below.
(Much of this depends on interpretations of multiverse immortality, yet I think it can be important, and it makes it easier to understand what I postulate in this version of salvation by a benevolent SI. I find it highly speculative and tend to view the author's position as more probable; nevertheless I think it is an interesting scenario to consider.)
“When I was a child I liked to play a game I had created myself. To experience it you need only a bit of imagination. So, imagine you have the power to rule over all space and all time. When you wish, you can pause your time, your thoughts, your life, and become pure spirit, not necessarily the kind religious people tend to praise, but something less metaphysical, or maybe more; that doesn't matter. You can live in such a state for seconds, hours, or millions of years, any finite amount of subjective time you wish. You can be everyone you can imagine; you are the ruler and the creator of reality. All your wildest dreams may come true, all your loves and hopes, for as long as you decide they should last.
The only catch is that when you want to return to “your” body and unpause time, all memories from that spiritual life have to be completely erased.
From your mind’s perspective it felt like a mere blink of an eye.
I don't know whether this is deeply connected to what I want to underline; I don't think so, it is just “cool” in some sense of that word. I am thinking about what I intuitively believed: that you have a bigger chance of being those future observer moments that are more similar to your present one, or that you have a bigger chance of finding yourself in the more/most complex mind. These are rather loose conclusions, yet they show the kind of thinking one could use.
First, my biggest objection to the super-strong self-sampling assumption (SSSSA): why are we such intelligent and complex minds when there are so many more animals with much less complex experiences (at least as far as abstract thought and self-awareness go)? SSSSA states that we should reason as if there were a greater probability of finding oneself in the most complex conscious state, and uses this to conclude that superintelligence must be rare, since we should be statistical observers.

What if we try to think of it another way? We use anthropic reasoning to build a model of the world in which the apparent fine-tuning of our universe seems more probable if there are plenty of other, lifeless universes; then there is nothing strange about our being in a life-containing universe, because in principle we can only (mainly) observe such universes. What if we cannot think of ourselves as more probable because of a more complex mind-state, but rather in the following way: one can experience being “yourself” only in minds capable of producing self-awareness, while a mind in which self-awareness does not exist could be treated simply as “non-existence”, or at least the non-existence of any obvious self. That is why we, knowing we already have something like self-awareness, cannot place ourselves in a reference class containing every consciousness and deduce that we are improbable given the sheer number of animal minds. We know for sure that our reference class must exclude all states that are not self-aware, so SSSSA becomes controversial, and we cannot assume that being more conscious makes us more probable. Rather, reaching a certain level of consciousness is what makes “us”, self-awareness, possible at all. We cannot take it as obvious that we become more probable as our conscious state becomes more complex, and conclude that superintelligence is therefore improbable in the future (we can still think it is rare, and we should still think we are probably the most common type of self-awareness, but not because of an application of SSSSA).
We can even conclude that the most common type of self-awareness should be a rather simple one, which is what we see today (I would humorously say the proof is that we are still pondering such “easy” questions as what we are and why we are). If we think about all self-conscious beings that have lived in history, Homo sapiens may have had more individuals than any other hominid, or than dolphins. Even if all H. sapiens constituted just one half of all self-aware beings (on Earth), it would not be strange that we are among them (a toy calculation below makes this concrete).
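A minimal sketch of that restricted-reference-class sampling, with made-up population counts (all numbers below are illustrative assumptions, not demographic claims):

```python
# Toy self-sampling calculation: probability of being a Homo sapiens,
# given a reference class restricted to self-aware observers.
# All counts are illustrative assumptions.
populations = {
    "Homo sapiens": 110e9,      # rough order of all humans ever born
    "other hominids": 60e9,     # assumption
    "dolphins and others": 50e9, # assumption
}

total = sum(populations.values())
for species, count in populations.items():
    print(f"P(being a {species}) = {count / total:.2f}")
```

Under these assumptions, finding oneself as a human is roughly a coin flip within the self-aware reference class, hence nothing surprising to explain.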
(There remains the question of where the lowest boundary of self-awareness lies: are rats self-aware? In the end, I suppose it seems more probable to be in the “bigger” self-awareness, yet I don't know if we are allowed to reason that way knowing that our own level of self-awareness is high… Should we exclude from our reference class every observer who is not able to understand basic mathematics? Wouldn't that mean we could take our current mind-state as the reference class, and thus conclude “I have to be me, there is nothing strange about it”?)
We could still think it is more probable to be more self-aware, but I do not know whether self-awareness could be much higher; we can imagine a superintelligence not being much more self-aware (not many orders of magnitude more) yet having amazing computational abilities, memory, and awareness of the external world.
Next, if we assume that it is not the complexity of the neural-like web that determines the probability of being a given mind, but merely the intensity of the feeling of “selfness”, we can think that our next observer moment depends not on its complexity but only on its measure (the toy sampler below illustrates the difference).
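A minimal sketch of the contrast, assuming two hypothetical weighting rules (both invented purely for illustration): sampling the next observer moment in proportion to its complexity versus in proportion to its measure (copy count).

```python
import random

# Hypothetical candidate successor moments: (name, complexity, measure).
# All numbers are illustrative assumptions, not claims about real minds.
candidates = [
    ("simple dream state", 1.0, 1000),   # low complexity, many copies
    ("waking human mind",  50.0, 10),    # high complexity, few copies
]

def sample(weight_index: int) -> str:
    """Pick a successor moment weighted by the chosen column."""
    weights = [c[weight_index] for c in candidates]
    return random.choices([c[0] for c in candidates], weights=weights)[0]

# Under complexity weighting (SSSSA-style) the complex mind dominates;
# under measure weighting the many-copied simple state dominates.
n = 100_000
for rule, idx in [("complexity-weighted", 1), ("measure-weighted", 2)]:
    hits = sum(sample(idx) == "simple dream state" for _ in range(n))
    print(f"{rule}: P(simple dream state) ≈ {hits / n:.3f}")
```

The point of the sketch is only that the two rules give nearly opposite predictions about where “you” should expect to find yourself next.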
I am thinking about the “impossibility of sleep” objection; I “discovered” that objection to quantum immortality when I was 18 and found it on the internet some days later. As you know, it points out that if we in fact cannot subjectively die, we shouldn't be able to lose our consciousness in any other way either, including sleep. What was clear to me was that when we fall asleep, the next “thought” is rather the one just after waking up, usually with some shadows of our dreams. One could thus assume that the probability of finding yourself in the most complex, most conscious state is greater. But if we look from the perspective of self-awareness alone, then the complexity of a conscious state in principle does not mean a higher probability of being in it/being it.
If that were true, then we should not necessarily expect to find ourselves in a more conscious, or more complex, state after the death of our brain. It may be worth considering that if we are in fact subjectively “skipping” those of our future observer moments that have low complexity and/or low self-awareness, and subjectively feel ourselves only in states with self-awareness, then, following my experimental reasoning, “you” should not expect to survive death as “you”: your self-identity and memories have a really low chance of surviving (it may be different if we exist in simulations more often). Instead, you should rather expect to find yourself in, let's say, a randomly chosen mind-state with basic self-awareness. So subjective immortality in practice most probably would not imply survival of the “person” (or some subjectively connected continuum of observer moments / model of your personal identity); rather, immortality would mean that it is consciousness, feeling, that is sure to survive everything, because each observer moment has some other observer moment in its future.
(I don't know what it really looks like; it was just a thought I had today: what if classic multiverse immortality were only one of several ways of preserving observer-moment continuity?)
The question is whether every observer moment has a previous one, and if so, what the observer moments before our self-aware existence were.
If we imagine some scenario of experience immortality, we could reason as follows: death is by no means a binary process, but rather a fading of consciousness, like sleep but deeper. When a brain dies, it has lower and lower self-awareness and consciousness in general, so the next observer-moments are like those of less and less conscious beings. At the end we reach level zero of consciousness, but before we do so, there are probably astonishingly many systems with low-level consciousness, like animals. We would not expect the next observer-moment to be in some existing animal, which had many observer moments before, certainly very different from those of a dying human brain. Yet there are plenty of low-consciousness states of mind, without memories, personality, or self-awareness: minds that are just emerging, new brains during their formation. With no assumptions more exotic than those of multiverse immortality, actually using the same reasoning, we can think of the experiences of a nearly dead brain and of an emerging brain as the same state of mind, fused together. The next observer moment would then be the next observer moment of some emerging common animal (or alien animal), then its life, death, and so on again. Nevertheless, you should expect to find “yourself”, that is, self-awareness, at some point in that “reincarnation”: there will be many emerging minds that grow to a level of self-awareness that allows “you” to exist. (A toy model of this fusion is sketched after this paragraph.)
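A toy model of that fusion idea, with everything invented purely for illustration: observer moments are (memory content, consciousness level) pairs; a dying chain fades toward a contentless minimal state, and any emerging chain that starts from an identical minimal state counts as the same moment, so the walk can continue through it.

```python
import random

# Toy observer moments: (content, level). A dying brain fades to the
# contentless minimal state ("", 0); emerging brains start from it.
# Everything here is an illustrative assumption, not a theory of mind.

def dying_chain(name: str):
    """Observer moments of a dying mind: content fades as level drops."""
    return [(name, lvl) for lvl in range(3, 0, -1)] + [("", 0)]

def emerging_chain(name: str):
    """Observer moments of an emerging mind: starts contentless."""
    return [("", 0)] + [(name, lvl) for lvl in range(1, 4)]

# One dying human and several candidate emerging minds.
dying = dying_chain("human")
emerging = [emerging_chain(n) for n in ("cat", "crow", "alien")]

# The dying chain ends in ("", 0). Since the emerging chains begin in an
# identical state, the model treats them as the same moment, and the
# continuation is a uniform pick among the matching emerging chains.
final_state = dying[-1]
matches = [c for c in emerging if c[0] == final_state]
next_life = random.choice(matches)
print("fused at state:", final_state, "-> continues as:", next_life[1:])
```

The only work the model does is to show that “reincarnation” here needs no extra mechanism, just the identity of indistinguishable minimal states.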
That view is similar to reincarnation and I don't like it, yet that cannot be a reason not to consider it as a possibility. (It also looks like some form of open individualism, which I don't like either; maybe empty individualism would fit better, but I don't think it is practical.)
It would not be strange that we cannot feel (or remember feeling) being animals or alien animals, because they are to us like other universes where life (self-aware life) cannot exist, metaphorically like cyclic universes, where life can exist only in some of them. Subjectively we would feel exactly what we seem to feel: being a mind with self-awareness, similar to many others around us, with a history (memory) reaching back to childhood and then void. In that view there would be infinitely many observer moments before us.
I think things could be different if most observer-states exist in simulations.
Lastly, if we assume that view, we can reach some conclusions about salvation by a benevolent SI. In that case, the SI could simulate a gigantic number of copies of an emerging mind that reaches the highest possible (or computationally preferable) self-awareness (not necessarily complexity) with a pleasing experience, maybe the pure experience of self-awareness, so that every dying being has the highest chance of “reincarnating” into that mind. It would make it easier to save a greater number of actually immortal minds (with immortal self-identity) from, for example, simulations run by an evil SI. (A toy version of this copy-count argument is sketched below.)
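A minimal sketch of the copy-count logic, assuming (purely for illustration) that the chance of “reincarnating” into a given emerging mind is proportional to its number of copies:

```python
# Toy indexical-uncertainty calculation. All counts are illustrative
# assumptions: the benevolent SI floods the measure with copies of one
# pleasant emerging mind, competing with naturally occurring minds and
# minds simulated by an evil SI.
copies = {
    "benevolent SI's pleasant mind": 1e12,
    "natural emerging minds": 1e9,
    "evil SI's simulated minds": 1e9,
}

total = sum(copies.values())
for source, n in copies.items():
    print(f"P(reincarnating into {source}) = {n / total:.4f}")
```

Under these made-up counts the benevolent SI captures over 99% of the measure, which is the sense in which flooding the measure with one standardized pleasant mind would be an efficient form of rescue.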
If we don't assume we are more likely to find ourselves in more complex observer states, we can reach such conclusions; I think it may be interesting to consider.”
I hope there are some interesting thoughts in what I shared; please forgive the chaotic appearance of this comment. I think what the author postulates is a really valid theory, and I encourage everyone to read his article on the topic:
(PDF) Back to the Future: Curing Past Sufferings and S-Risks via Indexical Uncertainty (Turchin)
I absolutely agree with you.
My only objection is that an SI may value the minimization of suffering more than the preservation of personal identities from death (I think the same: reincarnation in the above interpretation and the fusing of minds are both the death of the “person”). Such an SI would be, in some (maybe even strong) sense, promortalist. For now I don't want to decide which vision seems more probable to me. I don't think mine is impossible, though it is certainly not more preferable.
I also hope it would be possible to fuse minds without destroying their personal identity. Maybe the SI would choose to simulate fewer copies of more diverse minds after fusion rather than a greater number of just one.