What you—or everyone—believes does not change the reality.
It can give evidence, though. Consider Hypothesis A: “Societies like ours will generally not decide, as their technological capabilities grow, to engage in massive simulation of their forebears” and Hypothesis B, which omits the word “not”. Then:
The decisions made by, and ideas widely held in, our society can be evidence favouring A or B.
We are more likely simulations if B is right than if A is right.
Similarly if the hypotheses are “… to engage in massive simulation of their forebears, including blissful afterlives”, in which case we are more likely to have blissful simulated afterlives if B is right than if A is right. (Not necessarily more likely to have blissful afterlives simpliciter, though—perhaps, e.g., the truth of B would somehow make it less likely that we get blissful afterlives provided by gods.)
My opinion, for what it’s worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.
I think that the problem with this sort of argument is that it’s like cooperating in the prisoner’s dilemma hoping that superrationality will make the other player cooperate: it doesn’t work.
It seems that lots of people here conflate Newcomb’s problem, which is a very unusual single-player decision problem, with the prisoner’s dilemma, which is the prototypical competitive game from game theory.
Also, I don’t see why I should consider an accurate simulation of me, from my birth to my death, run after my real death as a form of afterlife. How would it be functionally different from screening a movie of my life?
My understanding is that the proposal here isn’t that an accurate simulation of your life should be counted as an afterlife; it’s that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they’d be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful—or for that matter torturous—afterlife might be provided for you).
As for Newcomb versus prisoners’ dilemma, see my comments elsewhere in the thread: I am not proposing that our decision whether to engage in large-scale ancestor simulation has any power to affect our past, only that it may provide some evidence bearing on what’s likely to have been in our past.
I just want to clarify this in case you mean my proposal, as opposed to the proposal by jacobcannell. This is my reading of what jacobcannell said as well, but it is not at all part of my argument. In fact, while I would be interested in reading jacobcannell’s thoughts on identity and the self, I share the skeptical intuitions of other posters in this thread about it. I am open to being wrong, but on first impression I have an extremely difficult time imagining that it will be at all possible to simulate a person after they have died. I suspect that it would be a poor replica, and certainly would not contain the same internal life as the person. Again, I am open to being convinced, but nothing about that makes sense to me at the moment.
I think that I did a poor job of making this clear in my first post, and have added a short note at the end to clarify this. You might consider reading it as it should make my argument clearer.
My proposal is far less interesting, original, or involved than this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife just as we shift people in the sims over into an afterlife after they die. This also circumvents teletransportation concerns (which would still exist if we were uploading ourselves into a simulation of our own!) since everything we are now would just be brought over to the afterlife part of the simulation fully intact.
My understanding is that the proposal here isn’t that an accurate simulation of your life should be counted as an afterlife; it’s that a somewhat-accurate simulation of lots of bits of your life might be a necessary preliminary to providing you with an afterlife (because they’d be needed to figure out what your brain, or at least your mind, was like in order to recreate it in whatever blissful—or for that matter torturous—afterlife might be provided for you).
Or they are just interested in the password needed to access the cute cat pictures on my phone. Seriously, we are in the realm of wild speculation; we can’t say that the evidence points any particular way.
I hope I am not intercepting a series of questions when you were only interested in gjm’s response but I enjoyed your comment and wanted to add my thoughts.
I think that the problem with this sort of argument is that it’s like cooperating in the prisoner’s dilemma hoping that superrationality will make the other player cooperate: it doesn’t work.
I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that, so long as some people play the game this way, as an empirical, descriptive reality of how they actually play, we are more likely to see certain outcomes in situations that look like Newcomb. This looks like Newcomb.
There is also a second argument further down that suggests that, under some circumstances with really high reward and relatively little cost, it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game-theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more.
It seems that lots of people here conflate Newcomb’s problem, which is a very unusual single-player decision problem, with the prisoner’s dilemma, which is the prototypical competitive game from game theory.
I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.
Also, I don’t see why I should consider an accurate simulation of me, from my birth to my death, run after my real death as a form of afterlife. How would it be functionally different from screening a movie of my life?
So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations of non-specific, unique people. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we will have more reason to think that it is likely one-boxer/altruist/acausal trade types “above” us have similarly created many simulations, of which we are one. Us doing it here should increase our sense that people “like us” have done it “above” us.
My opinion, for what it’s worth, is that either version of A is very much more likely than either version of B for multiple reasons, and that widespread interest in ideas like the one in this post would give only very weak evidence for A over B. So enthusiastic takeup of the ideas in this post would justify at most a tiny increase in our credence in an afterlife.
I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.
I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have:
1) Issues with the theory of identity
2) Issues with theory of mind
3) Issues with theory of moral value (creating lots of high-quality lives not seen as valuable, antinatalism, problem of evil)
4) Self-interest (more resources for existing individuals to upload into and utilize)
5) The existence of a convincing two-boxer “proof” of some sort
I also would like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly. I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?
Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.
I’m not sure I have much to say that you won’t have thought of already. But: First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren’t really available—that there’s only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren’t smart enough to find our way to them.
Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we’re all living in some kind of virtual universe; wouldn’t it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world? Someone else (entirelyuseless?) observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors’ minds to simulate them anywhere else, so it’s just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What’s so special about them, compared with all the other possible minds?
Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it’s possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar’s last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there’s just no reconstructing our ancestors at this point.
Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?
First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren’t really available—that there’s only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren’t smart enough to find our way to them.
Absolutely. I think this is where this thing most likely fails. Somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it leans me heavily in that direction.
Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we’re all living in some kind of virtual universe; wouldn’t it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world?
So I am not saying that they WOULD do it, but I actually can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included in the post originally). Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence of an afterlife in which those already dead can join in. This is clearer I think in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt.
Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulation’s afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual afterlife” of the simulation computer we are in.
As it is likely that the technology for simulations will come about at about the same time as the technology for a “glorious virtual universe”, we could even treat it as our last big hurrah before we upload ourselves. This makes sense as the people who exist when this technology becomes available will know a large number of loved ones who just missed it. They will also potentially be in especially imminent fear of the teletransportation paradox. I do not think there is any inherent conflict between doing both of these things.
Someone else (entirelyuseless?) observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors’ minds to simulate them anywhere else, so it’s just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What’s so special about them, compared with all the other possible minds?
Just to be clear, I am not talking about our actual individual ancestors. I actually avoided using the term intentionally as I think it is a bit confusing. I am pretty sure this is how Bostrom meant it as well in the original paper, with the word “ancestor” being used in the looser sense, like how we say “Homo erectus were our ancestors.” That might be my misinterpretation, but I do not think so. While I could be convinced, I am personally, currently, very skeptical that it would be possible to do any meaningful sort of replication of a person after they die. I think the only way that someone who has already died has any chance of an afterlife is if we are already in a simulation. This is also why my personal, atheistic mind could be susceptible to donating to such a cause when in grief. I wrote an update to my original post at the bottom where I clarify this. The point of the simulation is to change our credence regarding our self-location. If the vast majority of “people like us” (which can be REALLY broadly construed) exist in simulations with afterlives, and do not know it, we have reason to think we might also exist in such a simulation. If this is still not clear after the update, please let me know, as I am trying to pin down something difficult and am not sure if I am continuing to privilege brevity to the detriment of clarity.
Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it’s possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar’s last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there’s just no reconstructing our ancestors at this point.
I agree with your point so strongly that I am a little surprised to have been interpreted as meaning this. I think that it seems theoretically feasible to simulate a world full of individual people as they advance their way up from simple stone tools onward, each with their own unique life and identity, each existing in a unique world with its own history. Trying to somehow make this the EXACT SAME as ours does not seem at all possible. I also do not see what the advantage of it would be, as it is not more informative or helpful for our purposes to know whether or not we are the same as the people above us, so why would we try to “send that down” below us. We do not care about that as a feature of our world, and so would have no reason to try to instill it in the worlds below us. There is sort of a “golden rule” aspect to this in that you do to the simulation below you the best feasible, reality-conforming version of what you want done to you.
Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?
Maybe? I think that one of the interesting parts about this is where we would choose to draw policy lines around it. Do dogs go to the afterlife? How about fetuses? How about AI? What is heaven like? Who gets to decide this? These are all live questions. It could be that they take a consequential hedonistic approach that is mostly neutral between “who” gets the heaven. It could be that they feel obligated to go back further in gratitude of all those (“types”) who worked for advancement as a species and made their lives possible. It could be that we are actually not too far from superintelligent AI, and that this is going to become a live question in the next century or so, in which case “we” are that class of people they want to simulate in order to increase their credence of others similar to us (their relatives, friends who missed the revolution) being simulated.
As far as how far back you bother to simulate people, it might actually be easier to start off with some very small bands of people in a very primitive setting than to try to go through and make a complex world for people to “start” in without the benefit of cultural knowledge or tradition. It might even be that the “first people” are based on some survivalist-hobby, back-to-basics types who volunteered to be emulated, copied, and placed in different combinations in primitive earth environments in order to live simple hunter-gatherer lives and have their children go on to populate an earth (possible date of start? https://en.wikipedia.org/wiki/Population_bottleneck). That said, this is deep into the weeds of extremely low-probability speculation. Fun to do, but increasingly meaningless.
We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely).
Yes, but that isn’t enough to defeat simulations. One successful future can create a huge number of sims. Observational selection effects thus make survival far more likely than otherwise expected.
It might turn out that computational superpowers just aren’t really available—that there’s only so much processing power we have any realistic way of harnessing.
Even without quantum computing or reversible computing, even just using sustainable resources on earth (solar), there are plenty of resources to create large numbers of sims.
In this sort of scenario, maybe we’re all living in some kind of virtual universe; wouldn’t it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world
The cost is about the same either way. So the question is one of economic preferences. When people can use their wealth to create either new children or bring back the dead, what will they do? You are thus assuming there will be very low demand for resurrecting the dead vs creating new children. This is rather obviously unlikely.
This technology probably isn’t that far away—it is a 21st century tech, not 25th. It almost automatically follows AGI, as AGI is actually just the tech to create minds—nothing less. Many people alive today will still be alive when these sims are built. They will bring back their loved ones, who then will want to bring back theirs, and so on.
I see no good reason to think it’s possible.
Most people won’t understand or believe it until it happens. But likewise very few people actually understand how modern advanced rendering engines work—which would seem like magic to someone from just 50 years ago.
It’s an approximate inference problem. The sim never needs anything even remotely close to atomic information. In terms of world detail levels it only requires a little more than current games. The main new tech required is just the large-scale inference supercomputing infrastructure that AGI requires anyway.
It’s easier to understand if you just think of a human brain sim growing up in something like the Matrix, where events are curiously staged and controlled behind the scenes by AIs.
The opinion-to-reasons ratio is quite high in both your comment and mine to which it’s replying, which is probably a sign that there’s only limited value in exploring our disagreements, but I’ll make a few comments.
One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as “why would it create any?”.)
The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it’s easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it’s possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where’s the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that’s one more reason to think that resurrecting them is terribly expensive.)
(If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they’ll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.)
You say with apparent confidence that “this technology probably isn’t that far away”. Of course that could be correct, but my guess is that you’re wronger than a very wrong thing made of wrong. We can’t even simulate C. elegans yet, even though that only has about 300 neurons and they’re always wired up the same way (which we know).
Yes, it’s an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual information to base the inferences on. I’m unconvinced that “the sim never needs anything even remotely close to atomic information” given that the (simulated or not) world we’re in appears to contain particle accelerators and the like, but let’s suppose you’re right and that nothing finer-grained than simple neuron simulations is needed; you’re still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person. But it’s worse, because there are lots of people and they all interact with one another and those interactions are probably where our best hope of getting the information we need for the approximate inference problems comes from—so now we have to do careful joint simulations of lots of people and optimize all their parameters together. And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it’s all just a colossal challenge and I really don’t think waving your hands and saying “just think of a human brain sim growing up in something like the Matrix” is on the same planet as the right ballpark for justifying a claim that it’s anywhere near within reach.
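To make the scale concrete, here is a crude back-of-envelope sketch; every number in it is an illustrative assumption, not a measurement:

```python
# Back-of-envelope for the joint inference problem described above.
# All figures are rough assumptions chosen only to illustrate the scale.

synapses_per_brain = 1e15   # assumed: roughly one parameter per synapse
interacting_people = 1e9    # assumed: people whose interactions must be fitted jointly

total_parameters = synapses_per_brain * interacting_people   # ~1e24
storage_bytes = total_parameters * 4                         # assume 4 bytes per parameter

print(f"Joint parameters to optimize: {total_parameters:.1e}")
print(f"Storage at 4 bytes each:      {storage_bytes / 1e21:.0f} zettabytes")
```

Even granting generous compression, that is the rough order of the problem I have in mind.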
One future civilization could perhaps create huge numbers of simulations. But why would it want to?
I’ve already answered this—because living people have a high interest in past dead people, and would like them to live again. It’s that simple.
The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations.
True, but most of the additional cost boils down to a constant factor once you amortize at large scale. Recreating a single individual—very expensive. Recreating billions? That reduces to something closer to the scaling cost of simulating that many minds.
You have to figure out exactly what the dead were like
No, you don’t. For example, the amount of information remaining about my grandfather, who died in the 1950s, is pretty small. We could recover his DNA, and we have a few photos. We have some poetry he wrote, and letters. The total amount of information contained in the memories of living relatives is small, and will be even less by the time the tech is available.
So from my perspective the target is very wide. Personal identity is subjectively relative.
But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before?
You wouldn’t. I think you misunderstand. You need the historical sims to recreate the dead in the first place. But once that is running, you can copy out their minds at any point. However you always need one copy to remain in the historical sim for consistency (until they die in the hist-sim).
We can’t even simulate C. elegans yet, even though that only has about 300 neurons and they’re always wired up the same way (which we know).
You could also say we can’t simulate bacteria, but neither is relevant. I’m not familiar enough with C. elegans sims to evaluate your claim that the current sims are complete failures, but even if this is true it doesn’t tell us much, because only a tiny amount of resources has been spent on that.
Just to be clear—the historical resurrection sims under discussion will be created by large-scale AGI (superintelligence). When I say this tech isn’t that far away, it’s because AGI isn’t that far away, and this follows shortly thereafter.
you’re still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person
Hardly. You are assuming naive encoding without compression. Neural nets, especially large biological brains, are enormously redundant and highly compressible.
Look—it’s really hard to accurately estimate the resources for things like this, unless you actually know how to build it. 10^15 is a reasonable upper bound, but the lower bound is much lower.
For the lower bound, consider compressing the inner monologue—which naturally includes everything a person has ever read, heard, and said (even to themselves).
So that gives a lower bound of roughly 10^10 bits for a 100-year-old. This doesn’t include visual information, but the visual cortex is also highly compressible due to translational invariance.
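A quick sanity check on that figure, with every number an assumed round figure rather than a measurement:

```python
# Rough sanity check on the inner-monologue lower bound.
# Every number below is an illustrative assumption.

words_per_minute = 150        # assumed average rate of inner speech / reading / listening
waking_hours_per_day = 16
years = 100
bits_per_word = 10            # assumed: compressed English is ~1 bit/character, ~10 bits/word

lifetime_words = words_per_minute * 60 * waking_hours_per_day * 365 * years
lifetime_bits = lifetime_words * bits_per_word

print(f"Lifetime inner-monologue words: {lifetime_words:.1e}")    # ~5e9
print(f"Compressed information:         {lifetime_bits:.1e} bits")  # a few times 10^10
```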
And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it’s all just a colossal challenge and I really don’t think waving your hands and saying “just think of a human brain sim growing up in something like the Matrix”
No—again naysayers will always be able to claim “these aren’t really the same people”. But their opinions are worthless. The only opinions that matter are those of people who actually knew the relevant people, and the Turing test for resurrection is entirely subjective, relative to their limited knowledge of the resurrectee.
But the answer you go on to repeat is one I already explained wasn’t relevant, in the sentence after the one you quoted.
most of the additional cost boils down to a constant factor once you amortize at large scale.
I’m not sure what you’re arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead. (The factor might well be much more than 1000.) I don’t understand how this is any sort of counterargument to my suggestion that it’s a reason to simulate new minds rather than old.
the amount of information remaining about my grandfather who died in the 1950′s is pretty small.
You say that like it’s a good thing, but what it actually means is that almost certainly we can’t bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that’s all. Why would you prefer that over making new minds so much as to justify the large extra expense of getting the best approximation we can?
you always need one copy to remain in the historical sim for consistency
I’m not sure what that means. I’d expect that you use the historical simulation in the objective function for the (enormous) optimization problem of determining all the parameters that govern their brain, and then you throw it away and plug the resulting mind into your not-historical simulation. It will always have been the case that at one point you did the historical simulation, but the other simulation won’t start going wrong just because you shut down the historical one.
Anyway: as I said before, if you expect lots of historical simulation just to figure out what to put into the non-historical simulation, then that’s another reason to think that ancestor simulation is very expensive (because you have to do all that historical simulation). On the other hand, if you expect that a small amount of historical simulation will suffice then (1) I don’t believe you (if you’re estimating the parameters this way, you’ll need to do a lot of it; any optimization procedure needs to evaluate the objective function many times) and (2) in that case surely there are anthropic reasons to find this scenario unlikely, because then we should be very surprised to find ourselves in the historical sim rather than the non-historical one that’s the real purpose.
When I say this tech isn’t that far away, it’s because AGI isn’t that far away, and this follows shortly thereafter.
Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you’re outrageously overconfident about what’s going to happen on what timescales. We don’t know whether, or when, AGI will be achieved. We don’t know whether when it is it will rapidly turn into way-superhuman intelligence, or whether that will happen much slower (e.g., depending on hardware technology development which may not be sped up much by slightly-superhuman AGI), or even whether actually the technological wins that would lead to very-superhuman AGI simply aren’t possible for some kind of fundamental physical reason we haven’t grasped. We don’t know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don’t value at all.
You are assuming naive encoding without compression
No, I am assuming that smarter encoding doesn’t buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.
that gives a lower bound of roughly 10^10 bits for a 100-year-old
Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.
naysayers will always be able to claim “these aren’t really the same people”. But their opinions are worthless. The only opinions that matter are those of people who actually knew the relevant people
What makes you think those are different people’s opinions? If you present me with a simulated person who purports to be my dead grandfather, and I learn that he’s reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather. Perhaps I will have no way of telling the difference (since my own reactions on interacting with this simulated person can be available to the optimization process—if I don’t mind hundreds of years of simulated-me being used for that purpose) but there’s a big difference between “I can’t prove it’s not him” and “I have good reason to think it’s him”.
I don’t really have a great deal of time to explain this so I’ll be brief. Basically this is something I’ve thought a great deal about and I have a rather detailed technical vision of how to achieve it (at least to the extent that anyone can today; I’m an expert in the relevant fields—computer simulation/graphics and machine learning—and this is my long-term life goal). Fully explaining a rough roadmap would require a small book or long paper, so just keep that in mind.
most of the additional cost boils down to a constant factor once you amortize at large scale.
I’m not sure what you’re arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead.
Sorry—I meant a large constant, not a constant multiplier. Simulating a mind costs the same—it doesn’t matter whether it’s in a historical sim world or a modern-day sim or a futuristic sim or a fantasy sim; the cost of simulating the world to (our very crude) sensory perception limits is always about the same.
The extra cost for an h-sim vs others is in the initial historical research/setup (a constant) and consistency guidance. The consistency enforcement can be achieved by replacing standard forward inference with a goal-directed hierarchical bidirectional inference. The cost ends up asymptotically about the same.
Instead of just a physical sim, it’s more like a very deep hierarchy where, at the highest levels of abstraction, historical events are compressed down to text-like form in some enormous evolving database written and rewritten by an army of historian AIs. Lower, more detailed levels in the graph eventually resolve down into 3D objects and physical simulation, sparsely, as needed.
You say that like it’s a good thing, but what it actually means is that almost certainly we can’t bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that’s all.
As I said earlier—you do not determine who is or is not my grandfather. Your beliefs have zero weight on that matter. This is such an enormously different perspective that it isn’t worth discussing more until you actually understand what I mean when I say personal identity is relative and subjective. Do you grok it?
Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you’re outrageously overconfident about what’s going to happen on what timescales. We don’t know whether, or when, AGI will be achieved.
Perhaps, but I’m not a random sample—not part of your ‘we’. I’ve spent a great deal of time researching the road to AGI. I’ve written a little about related issues in the past.
AGI will be achieved shortly after we have brain-scale machine learning models (such as ANNs) running on affordable (< 10K) machines. This is at most only about 5 years away. Today we can simulate a few tens of billions of synapses in real time on a single GPU, and another 1000x performance improvement is on the table in the near future—from some mix of software and hardware advances. In fact, it could very well happen in just a year. (I happen to be working on this directly; I know more about it than just about anyone.)
AGI can mean many different things, so consider before arguing with the above.
We don’t know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don’t value at all.
Sure, but this whole conversation started with the assumption that we avoid such existential risks.
No, I am assuming that smarter encoding doesn’t buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.
The number of parameters in the compressed model needs to be far less than the number of synapses—otherwise the model will overfit. Compression does not hurt performance, it improves it—enormously. More than that, it’s actually required at a fundamental level due to the connection between compression and prediction.
Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.
Obviously a model fitting a dataset of size 10^10 would need to compress that down even further to learn anything, so that’s an upper bound for the parameter bitsize.
If you present me with a simulated person who purports to be my dead grandfather, and I learn that he’s reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather.
Say you die tomorrow from some accident. You wake up in ‘heaven’, which you find out is really a sim in the year 2046. You discover that you are a sim (an AI really) recreated in a historical sim from the biological original. You have all the same memories, and your friends and family (or sims of them? admittedly confusing) still call you by the same name and consider you the same. Who are you?
Do you really think that in this situation you would say—“I’m not the same person! I’m just an AI simulacrum. I don’t deserve to inherit any of my original’s wealth, status, or relationships! Just turn me off!”
Not yet. :) I meant expert only in “read up on the field”, not recognized academic expert. Besides, much industrial work is not published in academic journals for various reasons (time isn’t justified, secrecy, etc).
Historical versus other sims: I agree that if the simulation runs forever then the relevant difference is an additive rather than a multiplicative constant. But in practice it won’t.
Yes, of course I understand your point that I don’t get to decide what counts as your grandfather; neither do you get to decide what counts as mine. You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors. I do not expect that. Not because I think I get to decide what counts as your grandfather, but because I don’t expect our successors to think in the way that you apparently expect them to think.
Yes, you’ll have terrible overfitting problems if you have too many parameters. But the relevant comparison isn’t between the number of parameters in the model and the number of synapses; it’s between the number of parameters in the model and the amount of information we have to nail the model down. If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately. I appreciate that you think something far cruder will suffice. I hope you appreciate that I disagree. (I also hope you don’t think I disagree because I’m an idiot.) Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y), and if Y<<X then the correct conclusion isn’t “excellent, our number of parameters[1] will be relatively small to avoid overfitting, so we don’t need to worry that the fitting process will take for ever”, it’s “damn, it turns out we can’t reconstruct this person”.
[1] It would be better to say something like “number of independent parameters”, of course; the right thing might be lots of parameters + regularization rather than few parameters.
I would expect a sim whose opinions resemble mine to say, on waking up in heaven, something like “well, gosh, this is nice, and I certainly don’t want it turned off, but do you really have good reason to think that I’m an accurate model of the person whose memories I think I have?”. Perhaps not out loud, since no doubt that sim would prefer not to be turned off. But the relevant point here isn’t about what the sim would want (and particularly not about whether the sim would want to be turned off, which I bet would generally not be the case even if they were convinced they weren’t an accurate model) but about whether for the people responsible for creating the sim a crude approximation was close enough to their ancestor for it to be worth a lot of extra trouble to create that sim rather than a completely new one.
(I could not, in the situation you describe, actually know that I had “all the same memories”. That’s a large part of the point.)
You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors.
AGI will change our world in many ways, one of which concerns our views on personal identity. After AGI people will become accustomed to many different versions or branches of the same mind, mind forking, merging, etc.
Copy implies a version that is somehow lesser, which is not the case. Indeed in a successful sim scenario, almost everyone is technically a copy.
But the relevant comparison isn’t between the number of parameters in the model and the number of synapses; it’s between the number of parameters in the model and the amount of information we have to nail the model down.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately.
Right—again we know that it can’t be much more than 10^14 (number of synapses in human adult, it’s not 10^15 BTW), and it could be as low as 10^10. The average synapse stores only a bit or two at most (you can look it up, it’s been measured—the typical median synapse is tiny and has an extremely low SNR corresponding to a small number of bits.) We can argue about numbers in between, but it doesn’t really matter because either way it isn’t that much.
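To put that range in concrete storage terms (this is only a unit conversion of the figures above):

```python
# Convert the disputed information range into familiar storage units.
# The bounds themselves are the estimates argued over in this thread.

low_bits = 1e10    # lower bound: the inner-monologue estimate above
high_bits = 1e14   # upper bound: ~1 bit per synapse, ~10^14 synapses

print(f"Low estimate:  {low_bits / 8 / 1e9:.2f} GB")    # ~1.25 GB
print(f"High estimate: {high_bits / 8 / 1e12:.1f} TB")  # ~12.5 TB
```

Either way it is a modest amount of data by the standards of the hardware being discussed.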
Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y),
No—it just doesn’t work that way, because identity is not binary. It is infinite shades of grey. Different levels of success require only getting close enough in mindspace, and “close enough” is highly relative to one’s subjective knowledge of the person.
What matters most is consistency. It’s not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.
There will be multiple versions of past people—just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn’t nearly as important as you seem to think—and it is far less important than historical consistency with the rest of the world.
(I could not, in the situation you describe, actually know that I had “all the same memories”. That’s a large part of the point.)
We are in the same situation today. For all I know all of my past life is a fantasy created on the fly. What actually matters is consistency—that my memories match the memories of others and recorded history. And in fact due to the malleability of memory, consistency is often imperfect in human memories.
We really don’t remember that much at all—not accurately.
AGI will change our world in many ways, one of which concerns our views on personal identity.
I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.
Copy implies a version that is somehow lesser
That’s not how I was intending to use the word.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
identity is not binary
I promise, I do understand this, and I don’t see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what’s required is the exact same neurons, or anything like that.)
[...] What matters most [...] this isn’t nearly as important [...] far less important [...] What actually matters [...]
These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you’ve offered no grounds for thinking that they will feel the same way about this as you do, you’ve just stated your own position as if it’s a matter of objective fact (albeit about matters of not-objective-fact).
We are in the same situation today
Only if you don’t distinguish between what’s possible and what’s likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along.
I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation people change over time in ways that make identity a complicated and fuzzy affair. But for me—again, this involves value judgements and yours may differ from mine, and the real question is what our successors will think—the truer this is, the less attractive ancestor-simulation becomes for me. If you tell me you can simulate my great-great-great-great-great-aunt Olga about whom I know nothing at all, then I have absolutely no way of telling how closely the simulation resembles Olga-as-she-was, but that means that the simulation has little extra value for me compared with simulating some random person not claimed to be my great^5-aunt. As for whether I should be glad of it for Olga’s sake—well, if you mean new-Olga’s then an ancestor-sim is no better in this respect than a non-ancestor-sim; and if you mean old-Olga’s sake then the best I can do is to think how much it would please me to learn that 200 years from now someone will make a simulation that calls itself by my name and has a slightly similar personality and set of memories, but no more than that; the answer is that I couldn’t care less whether anyone does.
(It feels like I’m repeating myself, for which I apologize. But I’m doing so largely because it seems like you’re completely ignoring the main points I’m making. Perhaps you feel similarly, in which case I’m sorry; for what it’s worth, I’m not aware that I’m ignoring any strong or important point you’re making.)
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
That was misworded—I meant the amount of information actually encoded in the synapses, after advanced compression. As I said before, synapses in NNs are enormously redundant, such that trivial compression dramatically reduces the storage requirements. For the amount of memory/storage to represent a human-mind-level sim, we get the estimated range of 10^10 to 10^14 bits, as discussed earlier. However a great deal of this will be redundant across minds, so the amount required to specify the differences of one individual will be even less.
But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI).
Right. Well I have these values, and I am not alone. Most people’s values will also change in the era of AGI, as most people haven’t thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above average influence and wealth.
Your side discussion about your distant relatives suggests you don’t foresee how this is likely to come about in practice (which really is my fault as I haven’t explained it in this thread, although I have discussed bits of it previously).
It isn’t about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds—some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large scale inference. This demand just continues to grow, and it ties into the pervasive virtual world heaven tech that uploads want for other reasons.
In short order everyone in the world has proof that virtual heaven is real, and that uploading works. The world changes, and uploading becomes the norm. We become an em society.
Someone creates a real Harry Potter sim, and when Harry enters the ‘real’ world above he then wants to bring back his fictional parents. So it goes.
Then the next step is insurance for the living. Accidents can destroy or damage your brain—why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridiculous pervasive sensor monitoring of the future.
Eventually everyone realizes that they are already sims created by the AI.
It sucks to be an original—because there is no heaven if you die. It is awesome to be a sim, because we get a guaranteed afterlife.
“Follow” is probably an exaggeration since this is pretty handwavy, but:
First of all, a clarification: I should really have written something like “We are more likely accurate ancestor-simulations …” rather than “We are more likely simulations”. I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.
Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.
So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.
So, if B then some instances of us are probably ancestor-sims, and if A then probably not.
So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).
Extreme case: if we somehow know not A but the much stronger A’: “A society just like ours will never lead to any sort of ancestor-sims” then we can be confident of not being accurate ancestor-sims.
(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn’t tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears’) descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
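If it helps, here is the same structure with entirely made-up numbers; the only point is the direction of the update, not its size:

```python
# Toy Bayesian version of the A-versus-B argument above.
# Every probability here is invented purely for illustration.

p_A, p_B = 0.9, 0.1            # assumed priors on hypotheses A and B

p_sim_given_A = 0.001          # assumed chance we are accurate ancestor-sims if A holds
p_sim_given_B = 0.5            # assumed chance if B holds

p_sim_prior = p_A * p_sim_given_A + p_B * p_sim_given_B

# Observation: our society shows enthusiasm for ancestor-simulation ideas.
# Treat it as weak evidence for B over A (again, invented likelihoods).
lik_A, lik_B = 0.2, 0.3

norm = p_A * lik_A + p_B * lik_B
post_A, post_B = p_A * lik_A / norm, p_B * lik_B / norm

p_sim_posterior = post_A * p_sim_given_A + post_B * p_sim_given_B

print(f"P(accurate ancestor-sim) before: {p_sim_prior:.3f}")     # ~0.051
print(f"P(accurate ancestor-sim) after:  {p_sim_posterior:.3f}")  # ~0.072
```

With a weak likelihood ratio the update is correspondingly small, which is why I said this would justify at most a tiny change in credence.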
That might be highly relevant[1] if I’d made any argument of the form “If we do X, we make it more likely that we are simulated”. But I didn’t make any such argument. I said “If societies like ours tend to do X, then it is more likely that we are simulated”. That differs in two important ways.
[1] Leaving aside arguments based on exotic decision theories (which don’t necessarily deserve to be left aside but are less obvious than the fact that you’ve completely misrepresented what I said).
the fact that you’ve completely misrepresented what I said
You might want to think about downsizing that chip on your shoulder. My comment asks you to consider my argument. It says nothing—literally, not a single word—about what you have said.
But so as not to waste your righteous indignation, let me ask you a couple of questions that will surely completely misrepresent what you said. Those “societies like ours” that you mentioned, can you tell me a bit more about them? How many did you observe, on the basis of which features did you decide they are “like ours”, what did the ones that are not “like ours” look like?
Oh, and your comment seems to be truncated, did you lose the second part somewhere?
No chip so far as I can see. If you think your comment says nothing at all about what I said, go and look up conversational implicatures.
You can define “societies like ours” in lots of ways. Any reasonable way is likely to have the properties (1) that observing what our society does gives us (probabilistic) information about what societies like ours tend to do and (2) that information about what societies like ours tend to do gives (probabilistic) information about our future.
(Not very much information, so any argument of this sort is weak. But I already said that.)
did you lose the second part somewhere?
Nope. Why do you think I might have? Because I didn’t say what the “two important ways” are? I thought that would be obvious, but I’ll make it explicit. (1) “If we do …” versus “If societies like ours tend to do …” (hence, since some of those societies may be in the past, no need for reverse causation etc.) (2) “we make it more likely that …” versus “it is more likely that …” (hence, since not a claim about what “we” do, no question about what we have power to do).
If our world is not simulated, there’s nothing we do can make it simulated. We can work towards other simulations, but that’s not us.
If our world is simulated, we are already simulated and there’s nothing we can do to increase our chance of being simulated because it’s already so.
I am guessing you two-box in the Newcomb paradox as well, right? If you don’t then you might take a second to realize you are being inconsistent.
If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right, it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a type of cheating or free-riding, they might just be superstitious, or they might be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that “there’s nothing we can do to increase our chance of being simulated” just as they ignore the second box.
If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” that builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but it will have evidential value still.
I am guessing you two-box in the Newcomb paradox as well, right?
Yes, of course.
a lot of people do not
I don’t think this is true. The correct version is your following sentence:
A lot of people on LW do not
People on LW, of course, are not terribly representative of people in general.
What matters, as an empirical matter, is that they exist.
I agree that such people exist.
If we want to belong to the type of species
Hold on, hold on. What is this “type of species” thing? What types are there, what are our options?
And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.
Nope, sorry, I don’t find this reasoning valid.
it will have evidential value still.
Still nope. If you think that people wishing to be in a simulation has “evidential value” for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have “evidential value”? Are you going to cherry-pick “right” beliefs and “wrong” beliefs?
I don’t think this is true. The correct version is your following sentence:
A lot of people on LW do not
People on LW, of course, are not terribly representative of people in general.
LW is not really my personal sample for this. I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2⁄3 of people two box. Nozick, who popularized this, said he thought it was about 50⁄50. While again not representative, among the thousand or so people who answered the question in this survey the split was about even (http://philpapers.org/surveys/results.pl). For people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there were a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that there is a substantial minority of people who would one-box.
What matters, as an empirical matter, is that they exist.
I agree that such people exist.
Okay. Can you see how they might take the approach I have suggested? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?
If we want to belong to the type of species
Hold on, hold on. What is this “type of species” thing? What types are there, what are our options?
As a turn of phrase, I was referring to two types: one that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars: they are expressing a desire to be “that type of species.” Not sure what confused you here….
And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.
Nope, sorry, I don’t find this reasoning valid.
If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?
I am not asking you to think that the actual odds have changed in real time, I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.
If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?
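Since the Monty Hall question is exactly this kind of update, here is a quick simulation sketch of the standard problem (nothing here is specific to the simulation argument; it just shows credence moving while the underlying facts stay put):

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither your pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(n)) / n
swap = sum(monty_hall_trial(switch=True) for _ in range(n)) / n
print(f"win rate if you stay:   {stay:.3f}  (~1/3)")
print(f"win rate if you switch: {swap:.3f}  (~2/3)")
# The car never moves; only your information about where it is changes.
```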
it will have evidential value still.
Still nope. If you think that people wishing to be in a simulation has “evidential value” for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have “evidential value”? Are you going to cherry-pick “right” beliefs and “wrong” beliefs?
Beliefs can cause people to do things, whether that be go to war or build expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?
One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) makes it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.” Similarly, if you were currently living in Western Iraq, you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.
I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2⁄3 of people two box. Nozick, who popularized this, said he thought it was about 50⁄50.
Interesting. Not what I expected, but I can always be convinced by data. I wonder to what degree religiosity plays a part—Omega is basically God, so do you try to contest His knowledge?
can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster—so what?
As a turn of phrase, I was referring to two types.
My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about “types”—one can certainly imagine them, but that has nothing to do with reality.
I am asking you to adjust your credence based on new information.
Which new information?
Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?
Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq?
You are conflating here two very important concepts, that is, “present” and “future”.
People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.
As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated.
Correct.
However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.”
My belief is that it IS possible that we live in a simulation but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it’s not affecting any decisions I’m making. I still don’t see why the number of one-boxers around should cause me to update this probability to anything more significant.
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster—so what?
By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage”? For me, some are: 1) we seem terrible at coordination problems at a policy level, 2) we are not terribly cautious in developing new, potentially dangerous, technology, 3) some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It’s Bayesianism.
For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything; however, their ideas are causally relevant, and my knowledge of this factor increases my credence in that possibility.
For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is a queen of hearts? 1⁄52. Now let’s say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is a queen of hearts? 1⁄50. Now let’s say I go through the next 25 cards and none of them are the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1 in 25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location. I do want to clarify though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
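The counting in that card example, spelled out as a tiny sketch (nothing here beyond the arithmetic already stated):

```python
# The bottom card never changes; only the set of candidates consistent
# with what you have seen shrinks.
candidates = 52
print(f"before any cards are revealed:    1/{candidates}")

candidates -= 2    # a 5 and a king revealed, neither the queen of hearts
print(f"after two non-queen reveals:      1/{candidates}")

candidates -= 25   # 25 more cards revealed, none the queen of hearts
print(f"after 25 more non-queen reveals:  1/{candidates}")
```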
Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.
One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.
My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about “types”—one can certainly imagine them, but that has nothing to do with reality.
I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.
Which new information?
Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?
Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.
Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggests that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” A LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/
People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.
The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t, it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page.)
As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.
For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero because I am aware that lots of human civilizations have practiced cannibalism. I have some relevant evidence about human nature and decision making that allows other knowledge of how people act to put some bounds on my credence about this group. Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no other human society has ever been recorded doing this. I can use this information about other people’s behavior and thought processes to adjust my credence about the Sentinelese, in this case giving me near certainty that they do not.
If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.
The probability is non-zero, but it’s not affecting any decisions I’m making. I still don’t see why the number of one-boxers around should cause me to update this probability to anything more significant.
Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….
Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.
Why do you talk in terms of credence? In Bayesianism your belief of how likely something is is just a probability, so we’re talking about probabilities, right?
I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate.
Sure, OK.
Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation?
Aren’t you doing some rather severe privileging of the hypothesis?
The world has all kinds of people. Some want to destroy the world (and that should increase my credence that the world will get destroyed); some want electronic heavens (and that should increase my credence that there will be simulated heavens); some want to break out of the circle of samsara (and that should increase my credence that any death will be truly final); some want a lot of beer (and that should increase my credence that the future will be full of SuperExtraSpecialBudLight), etc. etc. And as Egan’s Law says, “It all adds up to normality”.
want to create simulations with pleasant afterlives
I think you’re being very Christianity-centric and Christians are only what, about a third of the world’s population? I still don’t know why people would create imprecise simulations of those who lived and died long ago.
If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.
Locate this statement on a timeline. Let’s go back a couple of hundred years: do humans want to make simulations of humans? No, they don’t.
Things change and eternal truths are rare. The future is uncertain, and judgements about what people of the far future might or might not want to do are not reliable.
How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge?
Easily enough. You assume—for no good reason known to me—that a simulation must mimic the real world to the best of its ability. I don’t see why this should be so. A petri dish, in a way, is a controlled simulation of, say, the growth and competition between different strains of bacteria (or yeast, or mold, etc.). Imagine an advanced (post-human or, say, alien) civilization doing historical research through simulations, running A/B tests on XXI-century human history. If we change X, will history go in direction Y? Let’s see. That’s a petri dish—or a video game, take your pick.
When we use what we know about human nature, we have reason to believe that people might make simulations.
That’s not a comforting thought. From what I know about human nature, people will want to make simulations where the simulation-makers are Gods.
that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one
And since I two-box, I still say that they can “act out” anything they want, it’s not going to change their circumstances.
The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations.
Nope, not would ever make, but have ever made. The past and the future are still different. If you think you can reverse the time arrow, well, say so explicitly.
because I am aware that lots of human civilizations
Yes, you have many examples known to you, so you can estimate the probability that one more, unknown to you, has or does not have certain features. But...
more reason to think that a different set of humans have done this already
...you can’t do this here. You know only a single (though diverse) set of humans. There is nothing to derive probabilities from. And if you want to use narrow sub-populations, well, we’re back to privileging the hypothesis again. Lots of humans believe and intend a lot of different things. Why pick this one?
Do you still not think this after reading this post?
Yep, still. If what the large number of people around believe affected me this much, I would be communing with my best friend Jesus instead :-P
why and how this has been frustrating
Hasn’t been frustrating at all. I like intellectual exercises in twisting, untwisting, bending, folding, etc. :-) I don’t find this conversation unpleasant.
Not quite. In the sim case, we along with our world exist as multiple copies—one original along with some number of sims. It’s really important to make this distinction, it totally changes the relevant decision theory.
If our world is not simulated, there’s nothing we do can make it simulated. We can work towards other simulations, but that’s not us.
No—because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.
Whether it’s “not quite” or “yes quite” depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don’t.
Actually the sim argument doesn’t depend on fuzzy smeared out identity. The copy issue is orthogonal and it arises in any type of multiverse.
we exist as a set of copies
Do you state this as a fact?
It is given in the sim scenario. I said this in reply to your statement “there’s nothing we do can make it simulated”.
The statement is incorrect because we are uncertain about our true existential state. And moreover, we have the power to change that state. The first original version of ourselves can create many other copies.
If the identity isn’t smeared then our world—our specific world—is either simulated or not.
Sure. But we don’t know which copy we are, and all copies make the same decisions.
Uncertainty doesn’t grant the power to change the status from not-simulated to simulated.
Each individual copy is either simulated or not, and nothing each individual copy does can change that—true. However, all of the copies output the same decisions, and each copy cannot determine its true existential status.
So the uncertainty is critically important—because the distribution itself can be manipulated by producing more copies. By creating simulations in the future, you alter the distribution by creating more sim copies such that it is thus more likely that one has been a sim the whole time.
Draw out the graph and perhaps it will make more sense.
It doesn’t actually violate physical causality—the acausality is only relative—an (intentional) illusion due to lack of knowledge.
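For what it’s worth, here is a minimal sketch of the counting behind that claim (the copy numbers are purely illustrative assumptions):

```python
# Self-location under copy uncertainty, with illustrative numbers only.
# One original plus n_sims indistinguishable simulated copies, all making
# the same decisions; a copy that cannot tell which it is assigns
# Pr(I am a sim) = n_sims / (n_sims + 1).

def p_i_am_a_sim(n_sims: int) -> float:
    return n_sims / (n_sims + 1)

for n_sims in [0, 1, 10, 1000]:
    print(f"{n_sims:>5} sim copies -> Pr(I am a sim) = {p_i_am_a_sim(n_sims):.4f}")
# No individual copy's status ever changes; what changes is the credence
# each copy should assign, given that it cannot tell the copies apart.
```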
all copies make the same decisions … all of the copies output the same decisions
All copies might make the same decisions, but the originals make different decisions.
Remember how upthread you talked about copies being relative and imperfect images of the originals? This means that the set of copies and the singleton of originals are different.
As individual variants they may have slight differences (less so for more advanced sims constructed later), but that doesn’t matter.
The ‘decision’ we are talking about here is an abstract high level decision or belief concerning whether one will support the construction of historical sims (financially, politically, etc). The numerous versions of a person might occasionally make different decisions here and there for exactly what word to use or what not, but they will (necessarily by design) agree on major life decisions.
Remember how upthread you talked about copies being relative and imperfect images of the originals?
Different levels of success require only getting close enough in mindspace, and success is highly relative to one’s subjective knowledge of the person.
What matters most is consistency. It’s not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.
There will be multiple versions of past people—just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn’t nearly as important as you seem to think—and it is far less important than historical consistency with the rest of the world.
Given all this I can’t see how you insist that copies make the same decisions as originals. In fact, in your quote you even have different copies making different decisions (“multiple versions”).
The different versions arise from multiverse considerations. The obvious basic route to sim capture is recreating very close copies that experience everything we remember having experienced—a recreation of our exact specific historical timeline/branch.
But even recreating other versions corresponding to other nearby branches in the multiverse could work and is potentially more computationally efficient. The net effect is the same: it raises the probability that we exist in a sim created by some other version/branch.
So there are two notions of historical ‘accuracy’. The first being accuracy in terms of exact match with a specific timeline, the other being accuracy in terms of matching only samples from the overall multiverse distribution.
Success only requires a high total probability that we are in a sim. It doesn’t matter much which specific historical timeline creates the sim.
The idea of decision agreement still applies across different versions in the multiverse. It doesn’t require exact agreement with every micro decision, only general agreement on the key decisions involving sim creation.
My proposal is far less interesting, original, or involved than this, and drafts off of Nick Bostrom’s simulation argument in its entirety. What I was discussing was making simulations of new and unique individuals. These individuals would then have an afterlife after dying, in which they would be reunited with the other sims from their world to live out a subjectively long, pleasant existence in their simulation computer. There would not be any attempt to replicate anyone in particular or to “join” the people in their simulation through a brain upload or anything else. The interesting and relevant feature would be that the creation of a large number of simulations like this, especially if these simulations could and did create their own simulations like this too, would increase our credence that we were not actually at the “basement level” and instead were ourselves in a simulation like the ones we made. This would increase our credence that dead loved ones had already been shifted over into the afterlife, just as we shift people in the sims over into an afterlife after they die. This also circumvents teletransportation concerns (which would still exist if we were uploading ourselves into a simulation of our own!) since everything we are now would just be brought over to the afterlife part of the simulation fully intact.
Or they are just interested in the password needed to access the cute cat pictures on my phone. Seriously, we are in the realm of wild speculation, we can’t say that evidence points any particular way.
I hope I am not intercepting a series of questions when you were only interested in gjm’s response but I enjoyed your comment and wanted to add my thoughts.
I am not sure it is settled that it does not work, but I also do not think that most, or maybe any, of my argument relies on an assumption that it does. The first part of it does not even rely on an assumption that one-boxing is reasonable, let alone correct. All it says is that, so long as some people play the game this way, as an empirical, descriptive reality of how they actually play, we are more likely to see certain outcomes in situations that look like Newcomb. This looks like Newcomb.
There is also a second argument further down which suggests that under some circumstances, with really high reward and relatively little cost, it might be worth trying to “cooperate on the prisoner’s dilemma” as a sort of gamble. This is more susceptible to game-theoretic counterpoints, but it is also not put up as an especially strong argument so much as something worth considering more.
I am pretty sure I am not doing that, but if you wanted to expand on that, especially if you can show that I am, that would be fantastic.
So, just to be clear, this is not my point at all. I think I was not nearly clear enough on this in the initial post, and I have updated it with a short-ish edit that you might want to read. I personally find the teletransportation paradox pretty paralyzing, enough so that I would have sincere brain-upload concerns. What I am talking about is simulations of non-specific, unique people. After death, these people would be “moved” fully intact into the afterlife component of the simulation. This circumvents teletransportation. Having the vast majority of people “like us” exist in simulations should increase our credence that we are in a simulation just as they are (especially if they can run simulations of their own, or think they are running simulations of their own). The idea is that we will have more reason to think that it is likely one-boxer/altruist/acausal-trade types “above” us have similarly created many simulations, of which we are one. Us doing it here should increase our sense that people “like us” have done it “above” us.
I wonder if you might expand on your thoughts on this a bit more. I tend to think that the odds of being in a simulation are quite low as well, but for me the issue is more the threat of extinction than a lack of will.
I can think of some reasons why, even if we could build such simulations, we might not, but I feel that this area is a bit fuzzy in my mind. Some ideas I already have: 1) Issues with the theory of identity 2) Issues with theory of mind 3) Issues with theory of moral value (creating lots high quality lives not seen as valuable, antinatalism, problem of evil) 4) Self-interest (more resources for existing individuals to upload into and utilize) 5) The existence of a convincing two-boxer “proof” of some sort
I also would like to know why an “enthusiastic takeup of the ideas in this post” would not increase your credence significantly? I think there is a very large chance of these ideas not being taken up enthusiastically, but if they were, I am not sure what, aside from extinction, would undermine them. If we get to the point where we can do it, and we want to do it, why would we not do it?
Thank you in advance for any insight, I have spent too long chewing on this without much detailed input, and I would really value it.
I’m not sure I have much to say that you won’t have thought of already. But: First of all, there seem to be lots of ways in which we might fail to develop such technology. We might go extinct or our civilization collapse or something of the kind (outright extinction seems really unlikely, but collapse of technological civilization much more likely). It might turn out that computational superpowers just aren’t really available—that there’s only so much processing power we have any realistic way of harnessing. It might turn out that such things are possible but we simply aren’t smart enough to find our way to them.
Second, if we (or more precisely our successors, whoever or whatever they are) develop such computational superpowers, why on earth use them for ancestor simulations? In this sort of scenario, maybe we’re all living in some kind of virtual universe; wouldn’t it be better to make other minds like ours sharing our glorious virtual universe rather than grubbily simulating our ancestors in their grotty early 21st-century world? Someone else (entirelyuseless?) observed earlier in the thread that some such simulation might be necessary in order to figure out enough about our ancestors’ minds to simulate them anywhere else, so it’s just possible that grotty 21st-century ancestor sims might be a necessary precursor to glorious 25th-century ancestor sims; but why ancestors anyway? What’s so special about them, compared with all the other possible minds?
Third, supposing that we have computational superpowers and want to simulate our ancestors, I see no good reason to think it’s possible. The information it would take to simulate my great-great-grandparents is dispersed and tangled up with other information, and figuring out enough about my great-great-grandparents to simulate them will be no easier than locating the exact oxygen atoms that were in Julius Caesar’s last breath. All the relevant systems are chaotic, measurement is imprecise, and surely there’s just no reconstructing our ancestors at this point.
Fourth, it seems quite likely that our superpowered successors, if we have them, will be no more like us than we are like chimpanzees. Perhaps you find it credible that we might want to simulate our ancestors; do you think we would be interested in simulating our ancestors 5 million years ago who were as much like chimps as like us?
Absolutely. I think this is where this thing most likely fails, somewhere in the first disjunct. My gut does not think I am in a simulation, and while that is not at all a valid way to acquire knowledge, it does lean me heavily in that direction.
So I am not saying that they WOULD do it, but I can think of a lot of pretty compelling reasons why they MIGHT. If the people who are around then are at all like us, then I think that a subset of them would likely do it for the one-boxer reasons I mentioned in the first post (which I have since updated with a note at the bottom to clarify some things I should have included originally). Whether or not their intuitions are valid, there is an internal logic, based on these intuitions, which would push for this. Reasons include hedging against the teletransportation paradox (which also applies to self-uploading) and hoping to increase their credence of an afterlife in which those already dead can join in. This is clearer, I think, in my update. The main confusion is that I am not talking about attempting to simulate or recreate specific dead people, which I do not think is possible. The key to my argument is to create self-locating doubt.
Also, in my argument, the people who create the simulation are never joined with the people in the simulation. These people stay in their simulation computer. The idea is that we are “hoping” we are similarly in a simulation computer, and have been the whole time, and that when we die, we will be transferred (whole) into the simulation’s afterlife component along with everyone who died before us in our world. Should we be in a simulation, and yet develop some sort of “glorious virtual universe” that we upload into, there are several options. Two that quickly come to mind: 1) We might stay in it until we die, then go into the afterlife component, 2) We might at some point be “raptured” by the simulation out of our virtual universe into the existent “glorious virtual afterlife” of the simulation computer we are in.
As it is likely that the technology for simulations will come about at about the same time as for a “glorious virtual universe” we could even treat it as our last big hurrah before we upload ourselves. This makes sense as the people who exist when this technology becomes available will know a large number of loved ones who just missed it. They will also potentially be in especially imminent fear of the teletransportation paradox. I do not think there is any inherent conflict between doing both of these things.
Just to be clear, I am not talking about our actual individual ancestors. I actually avoided using the term intentionally, as I think it is a bit confusing. I am pretty sure this is how Bostrom meant it as well in the original paper, with the word “ancestor” being used in the looser sense, like how we say “Homo erectus were our ancestors.” That might be my misinterpretation, but I do not think so. While I could be convinced, I am personally, currently, very skeptical that it would be possible to do any meaningful sort of replication of a person after they die. I think the only way that someone who has already died has any chance of an afterlife is if we are already in a simulation. This is also why my personal, atheistic mind could be susceptible to donating to such a cause when in grief. I wrote an update at the bottom of my original post where I clarify this. The point of the simulation is to change our credence regarding our self-location. If the vast majority of “people like us” (which can be REALLY broadly construed) exist in simulations with afterlives, and do not know it, we have reason to think we might also exist in such a simulation. If this is still not clear after the update, please let me know, as I am trying to pin down something difficult and am not sure if I am continuing to privilege brevity to the detriment of clarity.
I agree with your point so strongly that I am a little surprised to have been interpreted as meaning this. I think that it seems theoretically feasible to simulate a world full of individual people as they advance their way up from simple stone tools onward, each with their own unique life and identity, each existing in a unique world with its own history. Trying to somehow make this the EXACT SAME as ours does not seem at all possible. I also do not see what the advantage of it would be, as it is not more informative or helpful for our purposes to know whether we are the same as the people above us, so why would we try to “send that down” below us? We do not care about that as a feature of our world, and so would have no reason to try to instill it in the worlds below us. There is sort of a “golden rule” aspect to this, in that you do to the simulation below you the best feasible, reality-conforming version of what you want done to you.
Maybe? I think that one of the interesting parts about this is where we would choose to draw policy lines around it. Do dogs go to the afterlife? How about fetuses? How about AI? What is heaven like? Who gets to decide this? These are all live questions. It could be that they take a consequential hedonistic approach that is mostly neutral between “who” gets the heaven. It could be that they feel obligated to go back further in gratitude of all those (“types”) who worked for advancement as a species and made their lives possible. It could be that we are actually not too far from superintelligent AI, and that this is going to become a live question in the next century or so, in which case “we” are that class of people they want to simulate in order to increase their credence of others similar to us (their relatives, friends who missed the revolution) being simulated.
As far as how far back you bother to simulate people, it might actually be easier to start off with some very small bands of people in a very primitive setting than to try to go through and make a complex world for people to “start” in without the benefit of cultural knowledge or tradition. It might even be that the “first people” are based on some survivalist hobby back-to-basics types who volunteered to be emulated, copied, and placed in different combinations in primitive earth environments in order to live simple hunter-gatherer lives and have their children go on to populate an earth (possible date of start? https://en.wikipedia.org/wiki/Population_bottleneck). That said, this is deep into the weeds of extremely low-probability speculation. Fun to do, but increasingly meaningless.
Yes, but that isn’t enough to defeat simulations. One successful future can create a huge number of sims. Observational selection effects thus make survival far more likely than otherwise expected.
Even without quantum computing or reversible computing, even just using sustainable resources on earth (solar) - even with those limitations—there are plenty of resources to create large numbers of sims.
The cost is about the same either way. So the question is one of economic preferences. When people can use their wealth to create either new children or bring back the dead, what will they do? You are thus assuming there will be very low demand for resurrecting the dead vs creating new children. This is rather obviously unlikely.
This technology probably isn’t that far away—it is a 21st century tech, not 25th. It almost automatically follows AGI, as AGI is actually just the tech to create minds—nothing less. Many people alive today will still be alive when these sims are built. They will bring back their loved ones, who then will want to bring back theirs, and so on.
Most people won’t understand or believe it until it happens. But likewise very few people actually understand how modern advanced rendering engines work—which would seem like magic to someone from just 50 years ago.
It’s an approximate inference problem. The sim never needs anything even remotely close to atomic information. In terms of world detail levels it only requires a little more than current games. The main new tech required is just the large scale massive inference supercomputing infrastructure that AGI requires anyway.
It’s easier to understand if you just think of a human brain sim growing up in something like the Matrix, where events are curiously staged and controlled behind the scenes by AIs.
The opinion-to-reasons ratio is quite high in both your comment and mine to which it’s replying, which is probably a sign that there’s only limited value in exploring our disagreements, but I’ll make a few comments.
One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as “why would it create any?”.)
The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it’s easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it’s possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where’s the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that’s one more reason to think that resurrecting them is terribly expensive.)
(If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they’ll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.)
You say with apparent confidence that “this technology probably isn’t that far away”. Of course that could be correct, but my guess is that you’re wronger than a very wrong thing made of wrong. We can’t even simulate C. elegans yet, even though that only has about 300 neurons and they’re always wired up the same way (which we know).
Yes, it’s an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual information to base the inferences on. I’m unconvinced that “the sim never needs anything even remotely close to atomic information” given that the (simulated or not) world we’re in appears to contain particle accelerators and the like, but let’s suppose you’re right and that nothing finer-grained than simple neuron simulations is needed; you’re still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person. But it’s worse, because there are lots of people and they all interact with one another and those interactions are probably where our best hope of getting the information we need for the approximate inference problems comes from—so now we have to do careful joint simulations of lots of people and optimize all their parameters together. And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it’s all just a colossal challenge and I really don’t think waving your hands and saying “just think of a human brain sim growing up in something like the Matrix” is on the same planet as the right ballpark for justifying a claim that it’s anywhere near within reach.
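Back-of-the-envelope version of that parameter count (the neuron and synapse figures are standard ballpark numbers, and the population size is an assumption added purely for illustration):

```python
# Rough arithmetic behind the "something like 10^15 per person" figure.
neurons_per_brain   = 1e11    # ~10^11 neurons, ballpark
synapses_per_neuron = 1e4     # ~10^4 synapses per neuron, ballpark
params_per_person   = neurons_per_brain * synapses_per_neuron
print(f"parameters per person: ~{params_per_person:.0e}")    # ~1e+15

# If interactions matter, the optimization couples many people's parameters.
people_jointly_modeled = 1e6  # assumed size of an interacting population
total = params_per_person * people_jointly_modeled
print(f"jointly optimized parameters: ~{total:.0e}")          # ~1e+21
```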
I’ve already answered this—because living people have a high interest in past dead people, and would like them to live again. It’s that simple.
True, but most of the additional cost boils down to a constant factor once you amortize at large scale. Recreating a single individual—very expensive. Recreating billions? Reduces down to closer to the scaling costs of simulating that many minds.
No, you don’t. For example, the amount of information remaining about my grandfather, who died in the 1950s, is pretty small. We could recover his DNA, and we have a few photos. We have some poetry he wrote, and letters. The total amount of information contained in the memories of living relatives is small, and will be even less by the time the tech is available.
So from my perspective the target is very wide. Personal identity is subjectively relative.
You wouldn’t. I think you misunderstand. You need the historical sims to recreate the dead in the first place. But once that is running, you can copy out their minds at any point. However you always need one copy to remain in the historical sim for consistency (until they die in the hist-sim).
You could also say we can’t simulate bacteria, but neither is relevant. I’m not familiar enough with C. elegans sims to evaluate your claim that the current sims are complete failures, but even if this is true it doesn’t tell us much, because only a tiny amount of resources has been spent on that.
Just to be clear—the historical ress-sims under discussion will be created by large-scale AGI (superintelligence). When I say this tech isn’t that far away, it’s because AGI isn’t that far away, and this follows shortly thereafter.
Hardly. You are assuming naive encoding without compression. Neural nets—especially large biological brains—are enormously redundant and highly compressible.
Look—it’s really hard to accurately estimate the resources for things like this unless you actually know how to build it. 10^15 is a reasonable upper bound, but the lower bound is much lower.
For the lower bound, consider compressing the inner monologue—which naturally includes everything a person has ever read, heard, and said (even to themselves).
200 wpm × ~500k minutes/year × 8 bits/word ≈ 100 MB/year
So that gives a lower bound of roughly 10^10 bytes for a 100-year-old. This doesn’t include visual information, but the visual cortex is also highly compressible due to translational invariance.
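Spelling that out as a sketch (this assumes the inner monologue runs at the full 200 wpm for every minute of the year, which is what makes the ~100 MB/year figure come out; treat it as a loose lower bound only):

```python
# Lower-bound storage estimate for a lifetime of inner monologue.
wpm            = 200
minutes_per_yr = 60 * 24 * 365          # ~5.3e5 minutes in a year
bits_per_word  = 8                      # ~1 byte per word after compression
bytes_per_year = wpm * minutes_per_yr * bits_per_word / 8
print(f"~{bytes_per_year / 1e6:.0f} MB/year")                    # ~100 MB/year
print(f"~{bytes_per_year * 100:.1e} bytes for a 100-year life")  # ~1e10 bytes
```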
No—again, naysayers will always be able to claim “these aren’t really the same people”. But their opinions are worthless. The only opinions that matter are those of people who actually knew the relevant person, and the Turing test for resurrection is entirely subjective, relative to their limited knowledge of the resurrectee.
But the answer you go on to repeat is one I already explained wasn’t relevant, in the sentence after the one you quoted.
I’m not sure what you’re arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead. (The factor might well be much more than 1000.) I don’t understand how this is any sort of counterargument to my suggestion that it’s a reason to simulate new minds rather than old.
You say that like it’s a good thing, but what it actually means is that almost certainly we can’t bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that’s all. Why would you prefer that over making new minds so much as to justify the large extra expense of getting the best approximation we can?
I’m not sure what that means. I’d expect that you use the historical simulation in the objective function for the (enormous) optimization problem of determining all the parameters that govern their brain, and then you throw it away and plug the resulting mind into your not-historical simulation. It will always have been the case that at one point you did the historical simulation, but the other simulation won’t start going wrong just because you shut down the historical one.
Anyway: as I said before, if you expect lots of historical simulation just to figure out what to put into the non-historical simulation, then that’s another reason to think that ancestor simulation is very expensive (because you have to do all that historical simulation). On the other hand, if you expect that a small amount of historical simulation will suffice then (1) I don’t believe you (if you’re estimating the parameters this way, you’ll need to do a lot of it; any optimization procedure needs to evaluate the objective function many times) and (2) in that case surely there are anthropic reasons to find this scenario unlikely, because then we should be very surprised to find ourselves in the historical sim rather than the non-historical one that’s the real purpose.
Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you’re outrageously overconfident about what’s going to happen on what timescales. We don’t know whether, or when, AGI will be achieved. We don’t know whether when it is it will rapidly turn into way-superhuman intelligence, or whether that will happen much slower (e.g., depending on hardware technology development which may not be sped up much by slightly-superhuman AGI), or even whether actually the technological wins that would lead to very-superhuman AGI simply aren’t possible for some kind of fundamental physical reason we haven’t grasped. We don’t know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don’t value at all.
No, I am assuming that smarter encoding doesn’t buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.
Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.
What makes you think those are different people’s opinions? If you present me with a simulated person who purports to be my dead grandfather, and I learn that he’s reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather. Perhaps I will have no way of telling the difference (since my own reactions on interacting with this simulated person can be available to the optimization process—if I don’t mind hundreds of years of simulated-me being used for that purpose) but there’s a big difference between “I can’t prove it’s not him” and “I have good reason to think it’s him”.
I don’t really have a great deal of time to explain this so I’ll be brief. Basically this is something I’ve thought a great deal about, and I have a rather detailed technical vision of how to achieve it (at least to the extent that anyone can today; I’m an expert in the relevant fields—computer simulation/graphics and machine learning, and this is my long-term life goal). Fully explaining a rough roadmap would require a small book or long paper, so just keep that in mind.
Sorry—I meant a large constant, not a constant multiplier. Simulating a mind costs the same—doesn’t matter whether it’s in a historical sim world or a modern day sim or a futuristic sim or a fantasy sim … the cost of simulating the world to (our very crude) sensory perception limits is always about the same.
The extra cost for an h-sim vs others is in the initial historical research/setup (a constant) and consistency guidance. The consistency enforcement can be achieved by replacing standard forward inference with a goal-directed hierarchical bidirectional inference. The cost ends up asymptotically about the same.
It’s not just a physical sim; it’s more like a very deep hierarchy where, at the highest levels of abstraction, historical events are compressed down to text-like form in some enormous evolving database written and rewritten by an army of historian AIs. Lower, more detailed levels in the graph eventually resolve down into 3D objects and physical simulation, sparsely, as needed.
As I said earlier—you do not determine who is or is not my grandfather. Your beliefs have zero weight on that matter. This is such an enormously different perspective that it isn’t worth discussing more until you actually understand what I mean when I say personal identity is relative and subjective. Do you grok it?
Perhaps, but I’m not a random sample—not part of your ‘we’. I’ve spent a great deal of time researching the road to AGI. I’ve written a little about related issues in the past.
AGI will be achieved shortly after we have brain-scale machine learning models (such as ANNs) running on affordable (<$10K) machines. This is at most only about 5 years away. Today we can simulate a few tens of billions of synapses in real time on a single GPU, and another 1000x performance improvement is on the table in the near future—from some mix of software and hardware advances. In fact, it could very well happen in just a year. (I happen to be working on this directly; I know more about it than just about anyone.)
AGI can mean many different things, so consider before arguing with the above.
Sure, but this whole conversation started with the assumption that we avoid such existential risks.
The number of parameters in the compressed model needs to be far less than the number of synapses—otherwise the model will overfit. Compression does not hurt performance, it improves it—enormously. More than that, it’s actually required at a fundamental level due to the connection between compression and prediction.
Obviously a model fitting a dataset of size 10^10 would need to compress that down even further to learn anything, so that’s an upper bound for the parameter bitsize.
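For what it’s worth, here is a toy illustration of the overfitting point (a minimal sketch in Python with numpy; the sine-curve data and polynomial models are made up for illustration and have nothing to do with brains):

```python
# Minimal sketch of the overfitting claim: a model with roughly one parameter
# per data point fits the training data but predicts held-out data badly,
# while a compressed (low-parameter) model predicts better.
import numpy as np

rng = np.random.default_rng(1)
true_fn = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0, 1, 20)
y_train = true_fn(x_train) + 0.1 * rng.normal(size=x_train.size)
x_test = np.linspace(0, 1, 200)

for degree in (3, 19):            # 4 parameters vs. 20 parameters for 20 points
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = np.mean((np.polyval(coeffs, x_test) - true_fn(x_test)) ** 2)
    print(f"degree {degree:2d}: held-out MSE = {test_mse:.4f}")
# The degree-19 fit has far higher held-out error: more free parameters than
# the data can pin down hurts prediction, which is the compression point above.
```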
Say you die tomorrow from some accident. You wake up in ‘heaven’, which you find out is really a sim in the year 2046. You discover that you are a sim (an AI really) recreated in a historical sim from the biological original. You have all the same memories, and your friends and family (or sims of them? admittedly confusing) still call you by the same name and consider you the same. Who are you?
Do you really think that in this situation you would say—“I’m not the same person! I’m just an AI simulacrum. I don’t deserve to inherit any of my original’s wealth, status, or relationships! Just turn me off!”
Can you provide some links to your publications on the topic of machine learning?
Not yet. :) I meant expert only in the “read up on the field” sense, not a recognized academic expert. Besides, much industrial work is not published in academic journals for various reasons (time isn’t justified, secrecy, etc.).
Historical versus other sims: I agree that if the simulation runs for infinitely long then the relevant difference is an additive rather than a multiplicative constant. But in practice it won’t run for infinitely long.
Yes, of course I understand your point that I don’t get to decide what counts as your grandfather; neither do you get to decide what counts as mine. You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors. I do not expect that. Not because I think I get to decide what counts as your grandfather, but because I don’t expect our successors to think in the way that you apparently expect them to think.
Yes, you’ll have terrible overfitting problems if you have too many parameters. But the relevant comparison isn’t between the number of parameters in the model and the number of synapses; it’s between the number of parameters in the model and the amount of information we have to nail the model down. If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately. I appreciate that you think something far cruder will suffice. I hope you appreciate that I disagree. (I also hope you don’t think I disagree because I’m an idiot.) Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y), and if Y<<X then the correct conclusion isn’t “excellent, our number of parameters[1] will be relatively small to avoid overfitting, so we don’t need to worry that the fitting process will take for ever”, it’s “damn, it turns out we can’t reconstruct this person”.
[1] It would be better to say something like “number of independent parameters”, of course; the right thing might be lots of parameters + regularization rather than few parameters.
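To illustrate footnote [1], here is a minimal sketch (assuming Python with numpy; the 50×200 random design matrix and the penalty values are arbitrary): with ridge-style regularization a model can carry many nominal parameters while behaving as if it had far fewer, the usual measure being the effective degrees of freedom.

```python
# Sketch of "lots of parameters + regularization" behaving like few parameters:
# effective degrees of freedom of ridge regression, trace(X (X'X + lam*I)^-1 X').
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_params = 50, 200          # more nominal parameters than data points
X = rng.normal(size=(n_samples, n_params))

def effective_dof(X, lam):
    """Trace of the ridge hat matrix: the effective number of parameters."""
    ridge_inv = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    return float(np.trace(X @ ridge_inv))

for lam in (0.01, 1.0, 100.0):
    print(f"lambda = {lam:6.2f}: {n_params} nominal params, "
          f"~{effective_dof(X, lam):.1f} effective params")
# Stronger penalties shrink the effective parameter count well below the
# nominal 200 (it can never exceed the 50 data points), which is what
# "number of independent parameters" gestures at.
```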
I would expect a sim whose opinions resemble mine to say, on waking up in heaven, something like “well, gosh, this is nice, and I certainly don’t want it turned off, but do you really have good reason to think that I’m an accurate model of the person whose memories I think I have?”. Perhaps not out loud, since no doubt that sim would prefer not to be turned off. But the relevant point here isn’t about what the sim would want (and particularly not about whether the sim would want to be turned off, which I bet would generally not be the case even if they were convinced they weren’t an accurate model) but about whether for the people responsible for creating the sim a crude approximation was close enough to their ancestor for it to be worth a lot of extra trouble to create that sim rather than a completely new one.
(I could not, in the situation you describe, actually know that I had “all the same memories”. That’s a large part of the point.)
AGI will change our world in many ways, one of which concerns our views on personal identity. After AGI people will become accustomed to many different versions or branches of the same mind, mind forking, merging, etc.
Copy implies a version that is somehow lesser, which is not the case. Indeed in a successful sim scenario, almost everyone is technically a copy.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
Right—again we know that it can’t be much more than 10^14 (the number of synapses in a human adult; it’s not 10^15, BTW), and it could be as low as 10^10. The average synapse stores only a bit or two at most (you can look it up, it’s been measured—the median synapse is tiny and has an extremely low SNR corresponding to a small number of bits). We can argue about numbers in between, but it doesn’t really matter because either way it isn’t that much.
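Just to turn those figures into storage sizes (back-of-the-envelope only, in Python; whether the quoted synapse counts and bits-per-synapse numbers are right is exactly what is in dispute here):

```python
# Back-of-the-envelope: the storage range implied by the figures quoted above.
BITS_PER_BYTE = 8

estimates = [
    ("upper bound", 10**14, 2),   # ~10^14 synapses at ~2 bits each
    ("lower bound", 10**10, 1),   # the compressed ~10^10-bit estimate
]
for label, synapses, bits_per_synapse in estimates:
    total_bits = synapses * bits_per_synapse
    terabytes = total_bits / BITS_PER_BYTE / 1e12
    print(f"{label}: {total_bits:.1e} bits ≈ {terabytes:.4f} TB")
# upper bound ≈ 25 TB; lower bound ≈ 1.25 GB
```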
No—it just doesn’t work that way, because identity is not binary. It is infinite shades of grey. Different levels of success require only getting close enough in mindspace, and closeness is highly relative to one’s subjective knowledge of the person.
What matters most is consistency. It’s not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.
There will be multiple versions of past people—just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn’t nearly as important as you seem to think—and it is far less important than historical consistency with the rest of the world.
We are in the same situation today. For all I know all of my past life is a fantasy created on the fly. What actually matters is consistency—that my memories match the memories of others and recorded history. And in fact due to the malleability of memory, consistency is often imperfect in human memories.
We really don’t remember that much at all—not accurately.
I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.
That’s not how I was intending to use the word.
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
I promise, I do understand this, and I don’t see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what’s required is the exact same neurons, or anything like that.)
These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you’ve offered no grounds for thinking that they will feel the same way about this as you do; you’ve just stated your own position as if it’s a matter of objective fact (albeit about matters of not-objective-fact).
Only if you don’t distinguish between what’s possible and what’s likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along.
I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation people change over time in ways that make identity a complicated and fuzzy affair. But for me—again, this involves value judgements and yours may differ from mine, and the real question is what our successors will think—the truer this is, the less attractive ancestor-simulation becomes for me. If you tell me you can simulate my great-great-great-great-great-aunt Olga about whom I know nothing at all, then I have absolutely no way of telling how closely the simulation resembles Olga-as-she-was, but that means that the simulation has little extra value for me compared with simulating some random person not claimed to be my great^5-aunt. As for whether I should be glad of it for Olga’s sake—well, if you mean new-Olga’s then an ancestor-sim is no better in this respect than a non-ancestor-sim; and if you mean old-Olga’s sake then the best I can do is to think how much it would please me to learn that 200 years from now someone will make a simulation that calls itself by my name and has a slightly similar personality and set of memories, but no more than that; the answer is that I couldn’t care less whether anyone does.
(It feels like I’m repeating myself, for which I apologize. But I’m doing so largely because it seems like you’re completely ignoring the main points I’m making. Perhaps you feel similarly, in which case I’m sorry; for what it’s worth, I’m not aware that I’m ignoring any strong or important point you’re making.)
That was misworded—I meant the amount of information actually encoded in the synapses, after advanced compression. As I said before, synapses in NNs are enormously redundant, such that trivial compression dramatically reduces the storage requirements. For the amount of memory/storage to represent a human mind level sim, we get that estimate range between 10^10 to 10^14, as discussed earlier. However a great deal of this will be redundant across minds, so the amount required to specify the differences of one individual will be even less.
Right. Well I have these values, and I am not alone. Most people’s values will also change in the era of AGI, as most people haven’t thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above average influence and wealth.
Your side discussion about your distant relatives suggests you don’t foresee how this is likely to come about in practice (which really is my fault as I haven’t explained it in this thread, although I have discussed bits of it previously).
It isn’t about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds—some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large scale inference. This demand just continues to grow, and it ties into the pervasive virtual world heaven tech that uploads want for other reasons.
In short order everyone in the world has proof that virtual heaven is real, and that uploading works. The world changes, and uploading becomes the norm. We become an em society.
Someone creates a real Harry Potter sim, and when Harry enters the ‘real’ world above he then wants to bring back his fictional parents. So it goes.
Then the next step is insurance for the living. Accidents can destroy or damage your brain—why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridiculous pervasive sensor monitoring of the future.
Eventually everyone realizes that they are already sims created by the AI.
It sucks to be an original—because there is no heaven if you die. It is awesome to be a sim, because we get a guaranteed afterlife.
And how does that follow?
“Follow” is probably an exaggeration since this is pretty handwavy, but:
First of all, a clarification: I should really have written something like “We are more likely accurate ancestor-simulations …” rather than “We are more likely simulations”. I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.
Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.
So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.
So, if B then some instances of us are probably ancestor-sims, and if A then probably not.
So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).
Extreme case: if we somehow know not A but the much stronger A’: “A society just like ours will never lead to any sort of ancestor-sims” then we can be confident of not being accurate ancestor-sims.
(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn’t tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears’) descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
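If it helps, here is the same argument as a toy counting model (a sketch in Python; the probabilities and the 100-sims-per-world figure are entirely made-up illustrative assumptions, and the crude self-locating rule is just “count observers”):

```python
# Toy counting model of the A-versus-B argument. All numbers are made up;
# only the direction of the inequality matters.
def prob_we_are_ancestor_sims(p_world_runs_sims, sims_per_world):
    """Fraction of observers-with-experiences-like-ours who are accurate
    ancestor-sims, under a crude count-the-observers self-locating rule."""
    originals = 1.0
    expected_sims = p_world_runs_sims * sims_per_world
    return expected_sims / (originals + expected_sims)

# A: societies like ours generally do NOT run large-scale ancestor-sims.
p_given_A = prob_we_are_ancestor_sims(p_world_runs_sims=0.001, sims_per_world=100)
# B: societies like ours generally DO run large-scale ancestor-sims.
p_given_B = prob_we_are_ancestor_sims(p_world_runs_sims=0.9, sims_per_world=100)

print(f"Pr(we are accurate ancestor-sims | A) ≈ {p_given_A:.2f}")   # ≈ 0.09
print(f"Pr(we are accurate ancestor-sims | B) ≈ {p_given_B:.2f}")   # ≈ 0.99
# Whatever made-up numbers you plug in, B gives the larger value, which is all
# the argument above claims.
```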
Consider a different argument.
Our world is either simulated or not.
If our world is not simulated, there’s nothing we can do to make it simulated. We can work towards other simulations, but that’s not us.
If our world is simulated, we are already simulated and there’s nothing we can do to increase our chance of being simulated because it’s already so.
That might be highly relevant[1] if I’d made any argument of the form “If we do X, we make it more likely that we are simulated”. But I didn’t make any such argument. I said “If societies like ours tend to do X, then it is more likely that we are simulated”. That differs in two important ways.
[1] Leaving aside arguments based on exotic decision theories (which don’t necessarily deserve to be left aside but are less obvious than the fact that you’ve completely misrepresented what I said).
You might want to think about downsizing that chip on your shoulder. My comment asks you to consider my argument. It says nothing—literally, not a single word—about what you have said.
But so as not to waste your righteous indignation, let me ask you a couple of questions that will surely completely misrepresent what you said. Those “societies like ours” that you mentioned, can you tell me a bit more about them? How many did you observe, on the basis of which features did you decide they are “like ours”, what did the ones that are not “like ours” look like?
Oh, and your comment seems to be truncated, did you lose the second part somewhere?
No chip so far as I can see. If you think your comment says nothing at all about what I said, go and look up conversational implicatures.
You can define “societies like ours” in lots of ways. Any reasonable way is likely to have the properties (1) that observing what our society does gives us (probabilistic) information about what societies like ours tend to do and (2) that information about what societies like ours tend to do gives (probabilistic) information about our future.
(Not very much information, so any argument of this sort is weak. But I already said that.)
Nope. Why do you think I might have? Because I didn’t say what the “two important ways” are? I thought that would be obvious, but I’ll make it explicit. (1) “If we do …” versus “If societies like ours tend to do …” (hence, since some of those societies may be in the past, no need for reverse causation etc.) (2) “we make it more likely that …” versus “it is more likely that …” (hence, since not a claim about what “we” do, no question about what we have power to do).
I am guessing you two-box in the Newcomb paradox as well, right? If you don’t then you might take a second to realize you are being inconsistent.
If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. It does not mean they are right, it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in a type of ethics in which two-boxing would be a type of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve that “there’s nothing we can do to increase our chance of being simulated” like they ignore the second box.
If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but it will have evidential value still.
Yes, of course.
I don’t think this is true. The correct version is your following sentence:
People on LW, of course, are not terribly representative of people in general.
I agree that such people exist.
Hold on, hold on. What is this “type of species” thing? What types are there, what are our options?
Nope, sorry, I don’t find this reasoning valid.
Still nope. If you think that people wishing to be in a simulation has “evidential value” for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have “evidential value”? Are you going to cherry-pick “right” beliefs and “wrong” beliefs?
LW is not really my personal sample for this. I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2⁄3 of people two box. Nozick, who popularized this, said he thought it was about 50⁄50. While it is again not representative, of the thousand people who answered the question in this survey, it was about equal (http://philpapers.org/surveys/results.pl). For people with PhD’s in Philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there was a Pew Survey, I suspect, especially given the success of Calvinism, magical thinking, etc. that there are a substantial minority of people who would one-box.
Okay. Can you see how they might take the approach I have suggested they might? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?
As a turn of phrase, I was referring to two types. One that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars, they are expressing a desire to be “that type of species.” Not sure what confused you here….
If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?
I am not asking you to think that the actual odds have changed in real time, I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.
If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?
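Since the Monty Hall question came up above, here is a quick illustrative simulation (Python, standard library only) of the point that nothing behind the doors changes when the host opens one; only your information, and hence your credence, does:

```python
# Monty Hall, simulated: the prize never moves, but conditioning on the
# host's reveal changes what you should believe.
import random

def monty_hall_trial(switch: bool) -> bool:
    doors = [0, 1, 2]
    prize = random.choice(doors)
    choice = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = random.choice([d for d in doors if d != choice and d != prize])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

random.seed(0)
trials = 100_000
stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"win rate if you stay:   {stay:.3f}  (≈ 1/3)")
print(f"win rate if you switch: {swap:.3f}  (≈ 2/3)")
```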
Beliefs can cause people to do things, whether that be go to war or build expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?
One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) make it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.” Same as if you were currently living in Western Iraq, you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.
Interesting. Not what I expected, but I can always be convinced by data. I wonder to which degree the religiosity plays a part—Omega is basically God, so do you try to contest His knowledge..?
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster—so what?
My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about “types”—one can certainly imagine them, but that has nothing to do with reality.
Which new information?
Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?
You are conflating here two very important concepts, that is, “present” and “future”.
People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.
Correct.
My belief is that it IS possible that we live in a simulation but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it’s not affecting any decisions I’m making. I still don’t see why the number of one-boxers around should cause me to update this probability to anything more significant.
By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage.” For me, some are 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, 3) some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It’s Bayesianism.
For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything, however, their ideas are causally relevant, and my knowledge of this factor increase my credence of that as a possibility.
For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is a queen of hearts? 1⁄52. Now let’s say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is a queen of hearts? 1⁄50. Now let’s say I go through the next 25 cards and none of them are the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1 in 25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location. I do want to clarify though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
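The deck arithmetic above (1/52, then 1/50, then 1 in 25) checks out, and for anyone who prefers to see it done by conditioning rather than by argument, here is a small Monte Carlo sketch in Python (standard library only; card 0 stands in for the queen of hearts):

```python
# Sanity check of the card example: the bottom card never moves, but
# conditioning on 27 revealed cards (none of them the queen of hearts)
# moves the credence from 1/52 to 1/25.
import random

random.seed(0)
DECK = list(range(52))     # card 0 plays the queen of hearts
TRIALS = 200_000

hits = relevant = 0
for _ in range(TRIALS):
    deck = DECK[:]
    random.shuffle(deck)
    bottom, revealed = deck[-1], deck[:27]
    if 0 not in revealed:          # condition on the evidence actually observed
        relevant += 1
        hits += (bottom == 0)

print(f"Pr(bottom is the queen | 27 revealed, none of them her) ≈ {hits / relevant:.3f}")
print(f"expected: 1/25 = {1 / 25:.3f}")
```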
Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.
One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.
I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.
Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.
Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggests that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” A LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/
The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t, it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page.)
As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.
For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero because I am aware that lots of human civilizations have practiced cannibalism. I have some relevant evidence about human nature and decision making that allows other knowledge of how people act to put some bounds on my credence about this group. Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no other human society has ever been recorded doing this. I can use this information about other peoples’ behavior and thought processes to adjust my credence about the Sentinelese. In this case, giving me near certainty that they do not.
If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.
Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….
Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.
Why do you talk in terms of credence? In Bayesianism your belief of how likely something is is just a probability, so we’re talking about probabilities, right?
Sure, OK.
Aren’t you doing some rather severe privileging of the hypothesis?
The world has all kinds of people. Some want to destroy the world (and that should increase my credence that the world will get destroyed); some want electronic heavens (and that should increase my credence that there will be simulated heavens); some want break out of the circle of samsara (and that should increase my credence that any death will be truly final); some want a lot of beer (and that should increase my credence that the future will be full of SuperExtraSpecialBudLight), etc. etc. And as the Egan’s Law says, “It all adds up to normality”.
I think you’re being very Christianity-centric and Christians are only what, about a third of the world’s population? I still don’t know why people would create imprecise simulations of those who lived and died long ago.
Locate this statement on a timeline. Let’s go back a couple of hundred years: do humans want to make simulations of humans? No, they don’t.
Things change and eternal truths are rare. The future is uncertain, and judgements about what people of the far future might want to do or not do are not reliable.
Easily enough. You assume—for no good reason known to me—that a simulation must mimic the real world to the best of its ability. I don’t see why this should be so. A petri dish, in a way, is a controlled simulation of, say, the growth and competition between different strains of bacteria (or yeast, or mold, etc.). Imagine an advanced (post-human or, say, alien) civilization doing historical research through simulations, running A/B tests on the XXI-century human history. If we change X, will the history go in the Y direction? Let’s see. That’s a petri dish—or a video game, take your pick.
That’s not a comforting thought. From what I know about human nature, people will want to make simulations where the simulation-makers are Gods.
And since I two-box, I still say that they can “act out” anything they want, it’s not going to change their circumstances.
Nope, not would ever make, but have ever made. The past and the future are still different. If you think you can reverse the time arrow, well, say so explicitly.
Yes, you have many known to you examples so you can estimate the probability that one more, unknown to you, has or does not have certain features. But...
...you can’t do this here. You know only a single (though diverse) set of humans. There is nothing to derive probabilities from. And if you want to use narrow sub-populations, well, we’re back to privileging the hypothesis again. Lots of humans believe and intend a lot of different things. Why pick this one?
Yep, still. If what the large number of people around believe affected me this much, I would be communing with my best friend Jesus instead :-P
Hasn’t been frustrating at all. I like intellectual exercises in twisting, untwisting, bending, folding, etc. :-) I don’t find this conversation unpleasant.
Nah, it’s not you who is Exhibit A here :-/
Not quite. In the sim case, we along with our world exist as multiple copies—one original along with some number of sims. It’s really important to make this distinction, it totally changes the relevant decision theory.
No—because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.
Whether it’s “not quite” or “yes quite” depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don’t.
Do you state this as a fact?
Actually the sim argument doesn’t depend on fuzzy smeared out identity. The copy issue is orthogonal and it arises in any type of multiverse.
It is given in the sim scenario. I said this in reply to your statement “there’s nothing we can do to make it simulated”.
The statement is incorrect because we are uncertain on our true existential state. And moreover, we have the power to change that state. The first original version of ourselves can create many other copies.
If the identity isn’t smeared then our world—our specific world—is either simulated or not.
Uncertainty doesn’t grant the power to change the status from not-simulated to simulated.
Sure. But we don’t know which copy we are, and all copies make the same decisions.
Each individual copy is either simulated or not, and nothing each individual copy does can change that—true. However, all of the copies output the same decisions, and no copy can determine its true existential status.
So the uncertainty is critically important—because the distribution itself can be manipulated by producing more copies. By creating simulations in the future, you alter the distribution by creating more sim copies such that it is thus more likely that one has been a sim the whole time.
Draw out the graph and perhaps it will make more sense.
It doesn’t actually violate physical causality—the acausality is only relative—an (intentional) illusion due to lack of knowledge.
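For what it’s worth, the “manipulate the distribution by producing more copies” step reduces, in its simplest form, to an indifference calculation; here is a minimal sketch (Python; the copy counts are arbitrary, and the plain indifference assumption is itself doing the philosophical work being argued about):

```python
# Simplest version of the copy argument: if there are N indistinguishable sim
# copies plus one original, indifference says each should assign N/(N+1) to
# being a sim. Creating more copies pushes that number toward 1.
def prob_i_am_a_sim(n_sim_copies: int) -> float:
    return n_sim_copies / (n_sim_copies + 1)

for n in (0, 1, 10, 1000):
    print(f"{n:4d} sim copies -> Pr(I am a sim) = {prob_i_am_a_sim(n):.4f}")
# 0 -> 0.0000, 1 -> 0.5000, 10 -> 0.9091, 1000 -> 0.9990
```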
All copies might make the same decisions, but the originals make different decisions.
Remember how upthread you talked about copies being relative and imperfect images of the originals? This means that the set of copies and the singleton of originals are different.
As individual variants they may have slight differences (less so for more advanced sims constructed later), but that doesn’t matter.
The ‘decision’ we are talking about here is an abstract high-level decision or belief concerning whether one will support the construction of historical sims (financially, politically, etc). The numerous versions of a person might occasionally make different decisions here and there over exactly what word to use or whatnot, but they will (necessarily by design) agree on major life decisions.
I never said “imperfect images”—that’s your beef.
Let me quote you:
Given all this I can’t see how you insist that copies make the same decisions as originals. In fact, in your quote you even have different copies making different decisions (“multiple versions”).
The different versions arise from multiverse considerations. The obvious basic route to sim capture is recreating very close copies that experience everything we remember having experienced—a recreation of our exact specific historical timeline/branch.
But even recreating other versions corresponding to other nearby branches in the multiverse could work and is potentially more computationally efficient. The net effect is the same: it raises the probability that we exist in a sim created by some other version/branch.
So there are two notions of historical ‘accuracy’. The first being accuracy in terms of exact match with a specific timeline, the other being accuracy in terms of matching only samples from the overall multiverse distribution.
Success only requires a high total probability that we are in a sim. It doesn’t matter much which specific historical timeline creates the sim.
The idea of decision agreement still applies across different versions in the multiverse. It doesn’t require exact agreement with every micro decision, only general agreement on the key decisions involving sim creation.
Knowledge of which decisions we actually make is information which we can update our worldviews on.
Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.
What do you mean, “works in practice”?