And not only Obama. The closer you are to the center of human history, the more likely you are to be on a holodeck. People simulating others should be more likely to simulate people in historically interesting times, and people simulating themselves for fun and blocking their memory should be more likely to simulate themselves as close to interesting events as possible.
And...if Singularity theory is true, the Singularity will be the most interesting and important event in all human history. Now, all of us are suspiciously close to the Singularity, with a suspiciously large ability to influence its course. Even I, a not-too-involved person who’s just donated a few hundred dollars to SIAI and gets to sit here talking to the SIAI leadership each night, am probably within the top millionth of humans who have ever lived in terms of Singularity “proximity”.
And Michael Vassar and Eliezer are so close to the theorized center of human history that they should assume they’re holodecking with probability ~1.
After all, which is more likely from their perspective—that they’re one of the dozen or so people most responsible for creating the Singularity and ensuring Friendly AI, or that they’re some posthuman history buff who wanted to know what being the guy who led the Singularity Institute was like?
(The alternate explanation, of course, is that we’re all on the completely wrong track and are simply in the larger percentage of humans who think they’re extremely important.)
Still, I think that in most expected-utility calculations, the weight of “holy crap this is improbable, how am I actually this important?” on the one side, and of “well, if I am this dude, I’d really better not @#$% this up” on the other, should more or less scale together. I don’t think I’m stepping into Pascalian territory here.
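To make the “scale together” point concrete, here is a toy expected-utility calculation in Python. Every number in it is invented purely for illustration; nothing is a real estimate of anyone’s importance or of the value at stake.

    # Toy illustration of "the improbability and the stakes scale together".
    # All numbers are made up for illustration only.
    p_really_that_important = 1e-9   # assumed prior that you really are one of the key people
    value_if_important = 1e12        # assumed utility of acting well if you are
    value_if_not = 1.0               # assumed utility of acting well if you are not

    expected_value = (p_really_that_important * value_if_important
                      + (1 - p_really_that_important) * value_if_not)

    print(expected_value)  # ~1001: the tiny probability is offset by the huge
                           # conditional stakes, so the product stays non-negligible.

Halving the prior while doubling the conditional stakes leaves that product unchanged, which is why the two “weights” track each other.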
The “with probability ~1” part is wrong, AFAICT. I’m confused about how to think about anthropics, and everybody I’ve talked to is also confused as far as I’ve noticed. Given this confusion, we can perhaps obtain simulation-probabilities by estimating the odds that our best-guess method of calculating anthropic probabilities is reliable, and then estimating the probability that we’re in a holodeck conditional on that method being correct. But it would be foolish to assign more than, say, a 90% estimate to “our best-guess method of calculating anthropic probabilities is basically correct”, unless someone has a better analysis of such methods than I’d expect.
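For instance, taking the 90% cap from the paragraph above and plugging in some assumed conditional probabilities (the 0.99 and 0.5 below are placeholders, not anyone’s actual estimates):

    # Decomposition sketched above: condition on whether our anthropic
    # methods are basically correct. Only the 0.9 cap comes from the comment;
    # the conditional probabilities are assumptions for illustration.
    p_methods_correct = 0.9
    p_holodeck_if_correct = 0.99   # assumed: near-certain if the methods hold
    p_holodeck_if_incorrect = 0.5  # assumed: pure ignorance otherwise

    p_holodeck = (p_methods_correct * p_holodeck_if_correct
                  + (1 - p_methods_correct) * p_holodeck_if_incorrect)

    print(p_holodeck)  # 0.941 -- well short of "probability ~1", which is the point.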
Shouldn’t the fact that they can probably imagine better versions of themselves reduce this probability? If you’re in a holodeck, in addition to putting yourself at the center of the Singularity, why wouldn’t you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
We are actually in a ‘chip-punk’ version of the past in which silicon-based computers became available all the way back in the late 20th century. The original Eliezer made Friendly AI with vacuum tubes.
The more powerful computers are when you turn 15, the higher the difficulty level.
Not if they’re in a historical simulation. The real architects of the Singularity weren’t billionaires.
Not if they’re in some kind of holo-game, for the same reason that people playing computer games don’t hack them to make their character level infinity and impervious to bullets. Where would the fun be in that?
Not really. Think of Nozick’s experience machine. If you were to use the machine to simulate yourself in a situation extremely close to the center of the Singularity, would you also give yourself the looks of Brad Pitt and the wealth of Bill Gates?
a) Would this not make the experience feel so ‘unreal’ that your simulated self would have trouble believing it’s not a simulation, and therefore not enjoy the simulation at all? In constructing the simulation, you need to define how many positive attributes you can give your simulated self before it realizes that its situation is so improbable that it must be a simulation. I’d use caution and not make my simulated self too ‘lucky.’
b) More importantly, you may believe that a) doesn’t apply and that your simulated self would take the blue pill, willingly choosing to continue living in the simulation. Even then, having great looks and great wealth would probably distract you from creating the Singularity. All I’d care about is the Singularity, so I’d design the simulation to give myself a comfortable, not-too-distracting life that would allow me to focus maximally on the Singularity, and nothing else.
I agree these are possibilities. However, it seems to me that if you’re going to use improbable good fortune in some areas as evidence for being in a holodeck, it only makes sense to use misfortune (or at least lack of optimization, or below-averageness) in other areas as evidence against it. It doesn’t sit well with me to write off every shortcoming as an intentional contrivance to make the simulation more “real” for you, or to give you additional challenges. Of course, we’re only talking a priori probability here; if, say, Eliezer directly catalyzed the Singularity and found himself historically renowned, the odds would have to go way up.
The alternate explanation is of course far more likely a priori.
How likely is it that, say, at least 10 people think they’re Barack Obama, only one of whom is correct?
Being mistaken about your importance is different from, and much more common than, being mistaken about who/where you are.
Unless most conscious observers are ancestor simulations of people in positions of historical importance, in which case most people are correct about the importance of the position and incorrect about who/where they are.
(Vide the Doomsday Argument, the Simulation Argument, and the “surprise” of finding yourself on Ancient Earth rather than much later in a civilization’s development. Of course these are all long-standing controversies in anthropics; I’m just noting their existence.)
Among people who believe themselves to be Barack Obama, most are mistaken about their position rather than the importance of the position.
Agreed.
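To put a toy number on the base-rate point, using the hypothetical “10 believers, 1 real Obama” figure from the question above (not real data):

    # Toy base-rate arithmetic for the Obama example; the counts are the
    # hypothetical figures from the comment above, not data.
    actual_obamas = 1
    believers = 10

    p_correct_given_belief = actual_obamas / believers
    print(p_correct_given_belief)  # 0.1 -- most believers are wrong about who they
                                   # are, even though the position itself is important.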
Not all that unlikely. There have certainly been a lot of people who have believed themselves to be Napoleon or Jesus. I’d say 10 Obamas seems a little much right now, but I wouldn’t be at all surprised by, say, three.
The idea of eternal inflation might cut against this. Under eternal inflation, new universes are always being created at an exponentially increasing rate, so there are always far more young universes than old ones. So under this theory, if you are uncertain whether you are at a relatively early (pre-Singularity) or relatively late (post-Singularity) point in the universe, you are almost certainly at the relatively early point, because there are so many more universes in that state.
Note: Eliezer and Robin object to this idea for reasons I don’t understand.
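A toy version of the counting argument in the comment above (the doubling rate and the 10-epoch cutoff are arbitrary assumptions; the only point is that exponential creation makes young universes dominate):

    # Count universes by age under exponentially growing creation.
    # Assumes (arbitrarily) that the number of universes created doubles each
    # epoch and that "old" means created more than 10 epochs ago.
    epochs = 50
    cohorts = [2 ** t for t in range(epochs)]  # universes created at each epoch

    young = sum(cohorts[-10:])  # created within the last 10 epochs
    old = sum(cohorts[:-10])    # created earlier

    print(young / (young + old))  # ~0.999: almost every universe is young.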
James, I don’t think inflation implies there are more early than late universes, nor do I object to inflation. I just don’t think inflation solves time-asymmetry.
Note that the alternate explanation is MUCH more probable.