Not really. Think of Nozick’s experience machine. If you were to use the machine to simulate yourself in a situation extremely close to the center of the singularity, would you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
a) Wouldn’t this make the experience feel so ‘unreal’ that your simulated self would have trouble believing it’s not a simulation, and therefore wouldn’t enjoy it at all? In constructing the simulation, you need to decide how many positive attributes you can give your simulated self before it realizes its situation is so improbable that it must be a simulation. I’d err on the side of caution and not make my simulated self too ‘lucky.’
b) More importantly, you may believe that a) doesn’t apply and that your simulated self would take the blue pill, willingly choosing to go on living in the simulation. Even then, great looks and great wealth would probably distract you from creating the singularity. All I’d care about is the singularity, so I’d design the simulation to give myself a comfortable, not-too-distracting life that allows me to focus maximally on the singularity and nothing else.
I agree these are possibilities. However, it seems to me that if you’re going to use improbable good fortune in some areas as evidence for being in a holodeck, it only makes sense to use misfortune (or at least below-averageness, or a lack of optimization) in other areas as evidence against it. It doesn’t sit well with me to write off every shortcoming as an intentional contrivance to make the simulation feel more “real” to you, or to give you additional challenges. Of course, we’re only talking about a priori probability here; if, say, Eliezer directly catalyzed the Singularity and found himself historically renowned, the odds would have to go way up.