Simulating the people you interact with in each simulation to a strong enough approximation of reality means you’re creating tons of suffering people for each one who has an awesome life, even if a copy of each of those people is living a happy life in their own sim. I don’t think I would want a bunch of copies of me being unhappy even if I know one copy of me is in heaven.
That was my first thought as well.
However, in the least convenient world, all the other people are being run by an AI who, through reading your mind, can ensure you don’t notice the difference. The AI, if it matters, enjoys roleplaying. There are no people other than you in your shard.
Also: this seems like a pretty great stopgap if it’s more easily achievable than actual full-on friendly universe optimization, and it doesn’t prevent the AI from working on that in the meantime and implementing it in the future. I would not be unhappy to wake up in a world where the AI tells me, “I was simulating you, but now I’m powerful enough to actually create utopia; time for you to help!”
If the AI were not meaningfully committed to telling you the truth, how could you trust it when it said it was about to actually create utopia?
Why would I care? I’m a simulation fatalist. At some point in the universe, every “meaningful” thing will have been either done or discovered, and all that’s left will, functionally, be having fun in simulations. If I trust the AI to simulate well enough to keep me happy, I trust it to tell me the appropriate amount of truth to make me happy.
I’d definitely take that deal if offered, since out of all the possibilities in foom-space it seems way, way above average, but it’s not the best possible.
Personally, I would consider averting foom.
Is there really a way of simulating people with whom you interact extensively such that they wouldn’t exist in much the same way that you do? In other words, are p-zombies possible, or more to the point, are they a practical means of simulating a human in sufficient detail to fool a human-level intellect?
You don’t need to simulate them perfectly, just to the level that you don’t notice a difference. When the simulator has access to your mind, that might be a lot easier than you’d think.
There’s also no need to create p-zombies, if you can instead have a (non-zombie) AI roleplaying as the people. The AI may be perfectly conscious, without the people it’s roleplaying as existing.
So, your version was my first thought. However, this creates a contradiction with the stipulation that people “find love that lasts for centuries”. For that matter, “finding love” contradicts giving “every single living human being their own *separate* simulation.” (emphasis added)
Depends on your definition of “love”, really.
GAAAAAAAAAHHHHH!
I don’t think you need an actual human mind to convincingly simulate a mind to stupid humans (i.e., to pass the Turing test).
A mind doesn’t need to be human for me not to want billions of copies to suffer on my account.
Gah. Ok. Going to use words properly now.
I do not believe it is necessary for an artificial intelligence to be able to suffer in order for it to perform a convincing imitation of a specific human being, especially if it can read your mind.