As a whole, your intuition of a good future is similar to mine, but once it is examined more closely I think there are a few holes worth considering. I’ll start by listing the details I strongly agree with, then the ones I am unsure of, and then the ones I strongly disagree with.
Strongly Agree
It makes sense for humans to modify their memories and potentially even their cognitive abilities depending on the circumstance. The example provided of a worldbuilder sealing off their memories to properly enjoy their world from an inhabitant’s perspective seems plausible.
It also seems plausible that the majority of human experience would be dominated by virtual/simulated worlds.
Unsure
It seems inefficient for this person to be disconnected from the rest of humanity, and especially from “god”. In fact, the AI seems like too small an influence on the viewpoint character’s life.
The worlds with maximized pleasure settings sound a little dangerous and potentially wirehead-y. A properly aligned AGI would probably frown on wireheading.
Strongly Disagree
If you create a simulated world where the simulated beings are real and have rights, that simulation becomes either less ethical or less optimized for your utility. Simulated beings should either be props without qualia or be granted just as much power as the “real” beings if the universe is to be truly fair.
Inefficiencies like building a physical planet when a simulation would do the same thing better seem like an untenable waste of resources that could be spent on more simulations.
When simulated worlds are an option to this degree, it seems ridiculous to believe that abstaining from simulations altogether would ever be the optimal action. Couldn’t you go to a simulation optimized for reading, a simulation optimized for hot chocolate, etc.? Partaking of such things in the real world also seems like a waste of resources.
I might update this comment if anything else comes to mind.
By the way, if you haven’t already, I would recommend you read the Fun Theory sequence by Eliezer Yudkowsky. One of the ways you can access it is through this post:
https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence
“Seduced by Imagination” might be particularly relevant, if this sort of thing has been on your mind for a while.
I think the simplest way to answer this is to introduce a new scenario. Let’s call it Scenario 0. Scenario 0 is similar to Scenario 1, but in this case your body is not disintegrated. The result seems pretty clear: you are unaffected and continue living your life on Earth. Other yous may be living their own lives in space, but it isn’t as if there is some kind of metaphysical consciousness link that connects you to them.
And so, in Scenarios 1 and 2, where the Earth-you is disintegrated, well, you’re dead. But not to worry! The normal downsides of death (pain, inability to experience new things, sadness of those left behind) do not apply! As far as the physical universe is concerned (i.e. as far as reality and logic are concerned), there are now two living beings that both perceive themselves as having once been you. Their connection to the original you is no less significant than the connection between the you that goes to sleep and the you that wakes up in the morning.
EDIT: I realize this does not actually answer Questions 3 and 4. I don’t have time to respond to those right now but I will in a future edit.
EDIT 2: The approach I’d take with Q3 and Q4 is to maximize the total wealth of all the clones that don’t get disintegrated.
Let X be the amount of money in my wallet and L be the ticket price. In Scenario 1, the total wealth is 2X without the lottery or 2(X − L) + 100 with it. We buy the lottery ticket if 2(X − L) + 100 > 2X, which simplifies to −2L + 100 > 0, i.e. L < 50. The ticket is only worth buying if it costs less than $50.
For Q4 the formula is the same but with three clones at the end rather than two: 3(X − L) + 100 > 3X gives L < 100/3, so I would only buy the ticket if it cost less than about $33.33.
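For concreteness, here’s a minimal sketch of that break-even calculation generalized to n surviving clones (the function name and the $100 default prize are just my own illustrative choices, not anything from the post):

```python
def max_ticket_price(n_clones: int, prize: float = 100.0) -> float:
    """Break-even ticket price when maximizing total wealth of surviving clones.

    Buying is worthwhile when n*(X - L) + prize > n*X, which reduces to
    L < prize / n; the starting wallet balance X cancels out entirely.
    """
    return prize / n_clones

print(max_ticket_price(2))  # Q3: 50.0 -> buy only if the ticket is under $50
print(max_ticket_price(3))  # Q4: ~33.33 -> buy only if it's under about $33.33
```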