“Follow” is probably an exaggeration since this is pretty handwavy, but:
First of all, a clarification: I should really have written something like “We are more likely accurate ancestor-simulations …” rather than “We are more likely simulations”. I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.
Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.
So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.
So, if B then some instances of us are probably ancestor-sims, and if A then probably not.
So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).
Extreme case: if we somehow know not A but the much stronger A’: “A society just like ours will never lead to any sort of ancestor-sims” then we can be confident of not being accurate ancestor-sims.
(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn’t tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears’) descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
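The inequality in the last step can be given a toy numerical sketch. Everything here is illustrative: the self-sampling assumption (that "we" are a random draw from all instances, simulated or not) and the 1000-sims-per-original figure are hypothetical, not part of the argument above.

```python
from fractions import Fraction

def pr_sim(sims_per_original):
    """Probability that a randomly chosen instance of 'us' is an accurate
    ancestor-sim, given how many such sims exist alongside each
    non-simulated original world (naive self-sampling assumed)."""
    return Fraction(sims_per_original, sims_per_original + 1)

# Hypothesis A: societies like ours generally do not run ancestor-sims.
pr_given_A = pr_sim(0)       # 0

# Hypothesis B: societies like ours generally run many ancestor-sims
# (1000 per original world is an arbitrary illustrative number).
pr_given_B = pr_sim(1000)    # 1000/1001

assert pr_given_B > pr_given_A   # Pr(we are ancestor-sims | B) > Pr(... | A)
```

The exact numbers do not matter; any positive number of sims per original makes the B-conditional probability strictly larger than the A-conditional one, which is all the argument needs.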
That might be highly relevant[1] if I’d made any argument of the form “If we do X, we make it more likely that we are simulated”. But I didn’t make any such argument. I said “If societies like ours tend to do X, then it is more likely that we are simulated”. That differs in two important ways.
[1] Leaving aside arguments based on exotic decision theories (which don’t necessarily deserve to be left aside but are less obvious than the fact that you’ve completely misrepresented what I said).
the fact that you’ve completely misrepresented what I said
You might want to think about downsizing that chip on your shoulder. My comment asks you to consider my argument. It says nothing—literally, not a single word—about what you have said.
But so as not to waste your righteous indignation, let me ask you a couple of questions that will surely completely misrepresent what you said. Those “societies like ours” that you mentioned, can you tell me a bit more about them? How many did you observe, on the basis of which features did you decide they are “like ours”, what did the ones that are not “like ours” look like?
Oh, and your comment seems to be truncated, did you lose the second part somewhere?
No chip so far as I can see. If you think your comment says nothing at all about what I said, go and look up conversational implicatures.
You can define “societies like ours” in lots of ways. Any reasonable way is likely to have the properties (1) that observing what our society does gives us (probabilistic) information about what societies like ours tend to do and (2) that information about what societies like ours tend to do gives (probabilistic) information about our future.
(Not very much information, so any argument of this sort is weak. But I already said that.)
did you lose the second part somewhere?
Nope. Why do you think I might have? Because I didn’t say what the “two important ways” are? I thought that would be obvious, but I’ll make it explicit. (1) “If we do …” versus “If societies like ours tend to do …” (hence, since some of those societies may be in the past, no need for reverse causation etc.) (2) “we make it more likely that …” versus “it is more likely that …” (hence, since not a claim about what “we” do, no question about what we have power to do).
If our world is not simulated, there’s nothing we can do to make it simulated. We can work towards other simulations, but that’s not us.
If our world is simulated, we are already simulated and there’s nothing we can do to increase our chance of being simulated because it’s already so.
I am guessing you two-box in the Newcomb paradox as well, right? If you don’t then you might take a second to realize you are being inconsistent.
If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. That does not mean they are right; it just means that they do not follow your reasoning. They think that the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may believe in an ethics in which two-boxing would be a kind of cheating or free-riding, they might just be superstitious, or they might just be humbling themselves in the face of uncertainty. For purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means that they will ignore or disbelieve the claim that “there’s nothing we can do to increase our chance of being simulated”, just as they ignore the second box.
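The disagreement can be made concrete in expected-value terms. A brief sketch of how an evidential one-boxer computes the payoffs; the 99% predictor accuracy is an assumed figure, and the $1,000,000 / $1,000 payoffs are the standard ones from the thought experiment:

```python
# Expected dollar payoffs in Newcomb's problem, computed the way an
# evidential one-boxer computes them. Predictor accuracy of 0.99 is
# assumed purely for illustration.
ACCURACY = 0.99
BIG, SMALL = 1_000_000, 1_000

# If you one-box, the predictor (probably) foresaw it and filled the big box.
ev_one_box = ACCURACY * BIG

# If you two-box, the predictor (probably) foresaw that too and left it empty.
ev_two_box = (1 - ACCURACY) * BIG + SMALL

assert ev_one_box > ev_two_box   # roughly 990,000 vs 11,000
```

A causal decision theorist rejects this calculation outright, since the boxes are already filled either way; that rejection is exactly the split described above.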
If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the “type of species” who builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but it will have evidential value still.
I am guessing you two-box in the Newcomb paradox as well, right?
Yes, of course.
a lot of people do not
I don’t think this is true. The correct version is your following sentence:
A lot of people on LW do not
People on LW, of course, are not terribly representative of people in general.
What matters, as an empirical matter, is that they exist.
I agree that such people exist.
If we want to belong to the type of species
Hold on, hold on. What is this “type of species” thing? What types are there, what are our options?
And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.
Nope, sorry, I don’t find this reasoning valid.
it will have evidential value still.
Still nope. If you think that people wishing to be in a simulation has “evidential value” for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have “evidential value”? Are you going to cherry-pick “right” beliefs and “wrong” beliefs?
I don’t think this is true. The correct version is your following sentence:
A lot of people on LW do not
People on LW, of course, are not terribly representative of people in general.
LW is not really my personal sample for this. I have spent about a year working this into conversations, and I feel as though the split in my experience is something like 2⁄3 of people two-box. Nozick, who popularized this, said he thought it was about 50⁄50. While it is again not representative, of the thousand people who answered the question in this survey, it was about equal (http://philpapers.org/surveys/results.pl). For people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there were a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that there is a substantial minority of people who would one-box.
What matters, as an empirical matter, is that they exist.
I agree that such people exist.
Okay. Can you see how they might take the approach I have suggested they might? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?
If we want to belong to the type of species
Hold on, hold on. What is this “type of species” thing? What types are there, what are our options?
As a turn of phrase, I was referring to two types: one that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars; they are expressing a desire to be “that type of species.” Not sure what confused you here….
And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one.
Nope, sorry, I don’t find this reasoning valid.
If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?
I am not asking you to think that the actual odds have changed in real time, I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.
If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?
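The Monty Hall case mentioned above can be checked by direct simulation, which makes the point vividly: the prize never moves, yet the host's action changes what credence you should hold. A quick sketch (door labels and trial count are arbitrary):

```python
import random

def monty_hall_trial(switch, rng):
    """One round of Monty Hall: returns True if the player wins the prize."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = rng.choice([d for d in doors if d not in (pick, prize)])
    if switch:
        # Switch to the one remaining closed door.
        pick = [d for d in doors if d not in (pick, opened)][0]
    return pick == prize

rng = random.Random(0)
n = 100_000
wins_switch = sum(monty_hall_trial(True, rng) for _ in range(n)) / n
wins_stay = sum(monty_hall_trial(False, rng) for _ in range(n)) / n
# wins_switch comes out near 2/3 and wins_stay near 1/3: nothing about
# the doors changed, but the host's (informed) choice carried information.
```

Switching wins about twice as often as staying, even though the prize was fixed before the player did anything, which is precisely the "credence changes, reality doesn't" distinction being argued for here.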
it will have evidential value still.
Still nope. If you think that people wishing to be in a simulation has “evidential value” for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have “evidential value”? Are you going to cherry-pick “right” beliefs and “wrong” beliefs?
Beliefs can cause people to do things, whether that be go to war or build expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?
One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) makes it more likely that we are not at a “basement level” but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a “level” above us. This is just about self-locating belief. As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated. However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.” Likewise, if you were currently living in Western Iraq, you should update your credence from “why should I possibly leave my house, why would it not be safe” to “right, because there are people who are inspired by belief to take actions which make it unsafe.” Your knowledge about others’ beliefs can provide information about certain things that they may have done or may plan to do.
I have spent about a year working this into conversations. I feel as though the split in my experience is something like 2⁄3 of people two-box. Nozick, who popularized this, said he thought it was about 50⁄50.
Interesting. Not what I expected, but I can always be convinced by data. I wonder to what degree religiosity plays a part—Omega is basically God, so do you try to contest His knowledge?
can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster—so what?
As a turn of phrase, I was referring to two types.
My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about “types”—one can certainly imagine them, but that has nothing to do with reality.
I am asking you to adjust your credence based on new information.
Which new information?
Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?
Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq?
You are conflating here two very important concepts, that is, “present” and “future”.
People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.
As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world’s current status as simulated or unsimulated.
Correct.
However, you should also update your own credence from “why would I possibly be in a simulation” to “there is a reason I might be in a simulation.”
My belief is that it IS possible that we live in a simulation but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it’s not affecting any decisions I’m making. I still don’t see why the number of one-boxers around should cause me to update this probability to anything more significant.
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster—so what?
By analogy, what are some things that decrease my credence in thinking that humans will survive to a “post-human stage.” For me, some are 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, 3) some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It’s Bayesianism.
For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything, however, their ideas are causally relevant, and my knowledge of this factor increase my credence of that as a possibility.
For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is a queen of hearts? 1⁄52. Now let’s say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is a queen of hearts? 1⁄50. Now let’s say I go through the next 25 cards and none of them are the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1 in 25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location. I do want to clarify though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
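The card arithmetic above can be written out directly. This is just the elimination logic from the analogy: as long as the revealed cards do not include the queen of hearts, the bottom card is a uniform draw from the unseen remainder.

```python
from fractions import Fraction

def bottom_is_queen_of_hearts(cards_revealed):
    """Credence that the bottom card of a well-shuffled 52-card deck is
    the queen of hearts, after `cards_revealed` cards have been turned
    over from the top and none of them was the queen of hearts."""
    return Fraction(1, 52 - cards_revealed)

assert bottom_is_queen_of_hearts(0) == Fraction(1, 52)    # nothing revealed
assert bottom_is_queen_of_hearts(2) == Fraction(1, 50)    # a 5 and a king
assert bottom_is_queen_of_hearts(27) == Fraction(1, 25)   # 25 more cards, no queen
```

The bottom card never changes; only the count of places the queen could be hiding shrinks, which is the whole point of the analogy.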
Now to bring it back to the point, what are some obstacles to your credence that you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.
One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.
My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about “types”—one can certainly imagine them, but that has nothing to do with reality.
I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.
Which new information?
Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?
Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.
Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggests that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” A LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/
People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.
The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t; it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page).
As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.
For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero because I am aware that many human civilizations have practiced cannibalism. I have some relevant evidence about human nature and decision making that allows other knowledge of how people act to put some bounds on my credence about this group. Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no human society has ever been recorded doing this. I can use this information about other peoples’ behavior and thought processes to adjust my credence about the Sentinelese. In this case, it gives me near certainty that they do not.
If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.
The probability is non-zero, but it’s not affecting any decisions I’m making. I still don’t see why the number of one-boxers around should cause me to update this probability to anything more significant.
Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….
Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.
Why do you talk in terms of credence? In Bayesianism your belief of how likely something is is just a probability, so we’re talking about probabilities, right?
I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate.
Sure, OK.
Now to bring it back to the point, what are some obstacles to your credence that you are in a simulation?
Aren’t you doing some rather severe privileging of the hypothesis?
The world has all kinds of people. Some want to destroy the world (and that should increase my credence that the world will get destroyed); some want electronic heavens (and that should increase my credence that there will be simulated heavens); some want to break out of the circle of samsara (and that should increase my credence that any death will be truly final); some want a lot of beer (and that should increase my credence that the future will be full of SuperExtraSpecialBudLight), etc. etc. And as Egan’s Law says, “It all adds up to normality”.
want to create simulations with pleasant afterlives
I think you’re being very Christianity-centric and Christians are only what, about a third of the world’s population? I still don’t know why people would create imprecise simulations of those who lived and died long ago.
If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.
Locate this statement on a timeline. Let’s go back a couple of hundred years: do humans want to make simulations of humans? No, they don’t.
Things change and eternal truths are rare. The future is uncertain, and judgements of what people of the far future might want to do or not do are not reliable.
How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge?
Easily enough. You assume—for no good reason known to me—that a simulation must mimic the real world to the best of its ability. I don’t see why this should be so. A petri dish, in a way, is a controlled simulation of, say, the growth and competition between different strains of bacteria (or yeast, or mold, etc.). Imagine an advanced (post-human or, say, alien) civilization doing historical research through simulations, running A/B tests on 21st-century human history. If we change X, will history go in direction Y? Let’s see. That’s a petri dish—or a video game, take your pick.
When we use what we know about human nature, we have reason to believe that people might make simulations.
That’s not a comforting thought. From what I know about human nature, people will want to make simulations where the simulation-makers are Gods.
that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one
And since I two-box, I still say that they can “act out” anything they want, it’s not going to change their circumstances.
The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations.
Nope, not would ever make, but have ever made. The past and the future are still different. If you think you can reverse the time arrow, well, say so explicitly.
because I am aware that lots of human civilizations
Yes, you have many examples known to you, so you can estimate the probability that one more, unknown to you, has or does not have certain features. But...
more reason to think that a different set of humans have done this already
...you can’t do this here. You know only a single (though diverse) set of humans. There is nothing to derive probabilities from. And if you want to use narrow sub-populations, well, we’re back to privileging the hypothesis again. Lots of humans believe and intend a lot of different things. Why pick this one?
Do you still not think this after reading this post?
Yep, still. If what the large number of people around believe affected me this much, I would be communing with my best friend Jesus instead :-P
why and how this has been frustrating
Hasn’t been frustrating at all. I like intellectual exercises in twisting, untwisting, bending, folding, etc.. :-) I don’t find this conversation unpleasant.
Not quite. In the sim case, we along with our world exist as multiple copies: one original plus some number of sims. It’s really important to make this distinction; it totally changes the relevant decision theory.
If our world is not simulated, there’s nothing we can do to make it simulated. We can work towards other simulations, but that’s not us.
No—because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.
Whether it’s “not quite” or “yes quite” depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don’t.
Actually the sim argument doesn’t depend on fuzzy smeared out identity. The copy issue is orthogonal and it arises in any type of multiverse.
we exist as a set of copies
Do you state this as a fact?
It is given in the sim scenario. I said this in reply to your statement “there’s nothing we do can make it simulated”.
The statement is incorrect because we are uncertain of our true existential state. Moreover, we have the power to change that state: the first, original version of ourselves can create many other copies.
If the identity isn’t smeared then our world—our specific world—is either simulated or not.
Sure. But we don’t know which copy we are, and all copies make the same decisions.
Uncertainty doesn’t grant the power to change the status from not-simulated to simulated.
Each individual copy is either simulated or not, and nothing each individual copy does can change that—true. However, all of the copies output the same decisions, and each copy cannot determine its true existential status.
So the uncertainty is critically important—because the distribution itself can be manipulated by producing more copies. By creating simulations in the future, you alter the distribution by creating more sim copies such that it is thus more likely that one has been a sim the whole time.
Draw out the graph and perhaps it will make more sense.
It doesn’t actually violate physical causality—the acausality is only relative, an (intentional) illusion due to lack of knowledge.
all copies make the same decisions … all of the copies output the same decisions
All copies might make the same decisions, but the originals make different decisions.
Remember how upthread you talked about copies being relative and imperfect images of the originals? This means that the set of copies and the singleton of originals are different.
As individual variants they may have slight differences (less so for more advanced sims constructed later), but that doesn’t matter.
The ‘decision’ we are talking about here is an abstract, high-level decision or belief concerning whether one will support the construction of historical sims (financially, politically, etc.). The numerous versions of a person might occasionally make different decisions here and there about exactly which word to use, but they will (necessarily, by design) agree on major life decisions.
Remember how upthread you talked about copies being relative and imperfect images of the originals?
Different levels of success require only getting close enough in mindspace, and “close enough” is highly relative to one’s subjective knowledge of the person.
What matters most is consistency. It’s not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.
There will be multiple versions of past people—just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn’t nearly as important as you seem to think—and it is far less important than historical consistency with the rest of the world.
Given all this I can’t see how you insist that copies make the same decisions as originals. In fact, in your quote you even have different copies making different decisions (“multiple versions”).
The different versions arise from multiverse considerations. The obvious basic route to sim capture is recreating very close copies that experience everything we remember having experienced—a recreation of our exact specific historical timeline/branch.
But even recreating other versions corresponding to other nearby branches in the multiverse could work, and is potentially more computationally efficient. The net effect is the same: it raises the probability that we exist in a sim created by some other version/branch.
So there are two notions of historical ‘accuracy’. The first being accuracy in terms of exact match with a specific timeline, the other being accuracy in terms of matching only samples from the overall multiverse distribution.
Success only requires a high total probability that we are in a sim. It doesn’t matter much which specific historical timeline creates the sim.
The idea of decision agreement still applies across different versions in the multiverse. It doesn’t require exact agreement with every micro decision, only general agreement on the key decisions involving sim creation.
And how does that follow?
“Follow” is probably an exaggeration since this is pretty handwavy, but:
First of all, a clarification: I should really have written something like “We are more likely accurate ancestor-simulations …” rather than “We are more likely simulations”. I hope that was understood, given that the actually relevant hypothesis is one involving accurate ancestor-simulations, but I apologize for not being clearer. OK, on with the show.
Let W be the world of our non-simulated ancestors (who may or may not actually be us, depending on whether we are ancestor-sims). W is (at least as regards the experiences of our non-simulated ancestors) like our world, either because it is our world or because our world is an accurate simulation of W. In particular, if A then W is such as generally not to lead to large-scale ancestor sims, and if B then W is such as generally to lead to large-scale ancestor sims.
So, if B then in addition to W there are probably ancestor-sims of much of W; but if A then there are probably not.
So, if B then some instances of us are probably ancestor-sims, and if A then probably not.
So, Pr(we are ancestor-sims | B) > Pr(we are ancestor-sims | A).
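The inequality can be illustrated with a toy calculation. Every number here is an invented assumption, not an estimate: under hypothesis B, a world like W tends to produce many accurate ancestor-sims; under A, almost none. Self-sampling uniformly over indistinguishable copies (one non-simulated original plus the sims) then gives the conditional probabilities.

```python
# Toy illustration only; the sim counts below are made-up assumptions.
def pr_we_are_sims(expected_sims: float) -> float:
    # 1 non-simulated original plus `expected_sims` simulated copies,
    # each copy equally likely to be "us" (self-sampling assumption)
    return expected_sims / (expected_sims + 1)

pr_given_A = pr_we_are_sims(0.01)   # A: large-scale ancestor-sims are rare
pr_given_B = pr_we_are_sims(100.0)  # B: large-scale ancestor-sims are common

print(pr_given_A < pr_given_B)  # True
```

Whatever numbers you plug in, as long as B implies more expected ancestor-sims than A, the inequality comes out the same way.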
Extreme case: if we somehow know not A but the much stronger A’: “A society just like ours will never lead to any sort of ancestor-sims” then we can be confident of not being accurate ancestor-sims.
(I repeat that of course we could still be highly inaccurate ancestor-sims or non-ancestor sims, and A versus B doesn’t tell us much about that, but that the question at issue was specifically about accurate ancestor-sims since those are what might be required for our (non-simulated forebears’) descendants to give us (or our non-simulated forebears) an afterlife, if they were inclined to do so.)
Consider a different argument.
Our world is either simulated or not.
If our world is not simulated, there's nothing we can do to make it simulated. We can work towards other simulations, but that's not us.
If our world is simulated, we are already simulated and there’s nothing we can do to increase our chance of being simulated because it’s already so.
That might be highly relevant[1] if I’d made any argument of the form “If we do X, we make it more likely that we are simulated”. But I didn’t make any such argument. I said “If societies like ours tend to do X, then it is more likely that we are simulated”. That differs in two important ways.
[1] Leaving aside arguments based on exotic decision theories (which don’t necessarily deserve to be left aside but are less obvious than the fact that you’ve completely misrepresented what I said).
You might want to think about downsizing that chip on your shoulder. My comment asks you to consider my argument. It says nothing—literally, not a single word—about what you have said.
But so as not to waste your righteous indignation, let me ask you a couple of questions that will surely completely misrepresent what you said. Those “societies like ours” that you mentioned, can you tell me a bit more about them? How many did you observe, on the basis of which features did you decide they are “like ours”, what did the ones that are not “like ours” look like?
Oh, and your comment seems to be truncated, did you lose the second part somewhere?
No chip so far as I can see. If you think your comment says nothing at all about what I said, go and look up conversational implicatures.
You can define “societies like ours” in lots of ways. Any reasonable way is likely to have the properties (1) that observing what our society does gives us (probabilistic) information about what societies like ours tend to do and (2) that information about what societies like ours tend to do gives (probabilistic) information about our future.
(Not very much information, so any argument of this sort is weak. But I already said that.)
Nope. Why do you think I might have? Because I didn’t say what the “two important ways” are? I thought that would be obvious, but I’ll make it explicit. (1) “If we do …” versus “If societies like ours tend to do …” (hence, since some of those societies may be in the past, no need for reverse causation etc.) (2) “we make it more likely that …” versus “it is more likely that …” (hence, since not a claim about what “we” do, no question about what we have power to do).
I am guessing you two-box in the Newcomb paradox as well, right? If you don’t then you might take a second to realize you are being inconsistent.
If you do two-box, realize that a lot of people do not. A lot of people on LW do not. A lot of philosophers who specialize in decision theory do not. That does not mean they are right; it just means they do not follow your reasoning. They think the right answer is to one-box. They take an action, later in time, which does not seem causally determinative (at least as we normally conceive of causality). They may believe in retrocausality, they may hold an ethics in which two-boxing is a kind of cheating or free-riding, they might just be superstitious, or they might be humbling themselves in the face of uncertainty. For the purposes of this argument, it does not matter. What matters, as an empirical matter, is that they exist. Their existence means they will ignore or disbelieve the claim that "there's nothing we can do to increase our chance of being simulated," just as they ignore the second box.
If we want to belong to the type of species where the vast majority of the species exists in simulations with a long-duration, pleasant afterlife, we need to be the "type of species" that builds large numbers of simulations with long-duration, pleasant afterlives. And if we find ourselves building large numbers of these simulations, it should increase our credence that we are in one. Pending acausal trade considerations (probably for another post), two-boxers, and likely some one-boxers, will not think that their actions are causing anything, but their actions will still have evidential value.
Yes, of course.
I don’t think this is true. The correct version is your following sentence:
People on LW, of course, are not terribly representative of people in general.
I agree that such people exist.
Hold on, hold on. What is this “type of species” thing? What types are there, what are our options?
Nope, sorry, I don’t find this reasoning valid.
Still nope. If you think that people wishing to be in a simulation has “evidential value” for the proposition that we are in a simulation, for what proposition does the belief in, say, Jesus or astrology have “evidential value”? Are you going to cherry-pick “right” beliefs and “wrong” beliefs?
LW is not really my personal sample for this. I have spent about a year working this into conversations, and my impression is that something like 2/3 of people two-box. Nozick, who popularized the problem, said he thought it was about 50/50. While it is again not representative, of the thousand people who answered the question in this survey (http://philpapers.org/surveys/results.pl), the split was about equal; among people with PhDs in philosophy it was 458 two-boxers to 348 one-boxers. While I do not know what the actual number would be if there were a Pew survey, I suspect, especially given the success of Calvinism, magical thinking, etc., that a substantial minority of people would one-box.
Okay. Can you see how they might take the approach I have suggested they might? And if yes, can you concede that it is possible that there are people who might want to build simulations in the hope of being in one, even if you think it is foolish?
As a turn of phrase, I was referring two types. One that makes simulations meeting this description, and one that does not. It is like when people advocate for colonizing Mars, they are expressing a desire to be “that type of species.” Not sure what confused you here….
If you are in the Sleeping Beauty problem (https://wiki.lesswrong.com/wiki/Sleeping_Beauty_problem), and are woken up during the week, what is your credence that the coin has come up tails? How do you decide between the doors in the Monty Hall problem?
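For the Monty Hall half of the question, the standard answer (switch; it wins 2/3 of the time) can be checked with a quick Monte Carlo sketch rather than taken on faith:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round: prize behind a random door; player picks door 0;
    host opens a losing, unpicked door; player optionally switches."""
    prize = random.randrange(3)
    pick = 0
    # Host opens a door that is neither the pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

random.seed(0)
trials = 100_000
wins_switching = sum(monty_hall_trial(switch=True) for _ in range(trials))
print(wins_switching / trials)  # close to 2/3
```

The point of the simulation is the same as the point of the verbal argument: the host's action is new information, and the credence should move with it.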
I am not asking you to think that the actual odds have changed in real time, I am asking you to adjust your credence based on new information. The order of cards has not changed in the deck, but now you know which ones have been discarded.
If it turns out simulations are impossible, I will adjust my credence about being in one. If a program begins plastering trillions of simulations across the cosmological endowment with von Neumann probes, I will adjust my credence upward. I am not saying that your reality changes, I am saying that the amount of information you have about the location of your reality has changed. If you do not find this valid, what do you not find valid? Why should your credence remain unchanged?
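The update being described is ordinary Bayesian conditioning. Here is a toy version, with every probability invented purely for illustration (H = "we are in a sim", E = "we observe our world building many sims"):

```python
# All numbers are made-up assumptions for illustration, not estimates.
prior_H = 0.1           # assumed prior that we are in a sim
p_E_given_H = 0.8       # assumed: simulated worlds tend to build sims
p_E_given_not_H = 0.3   # assumed: unsimulated worlds do so less often

# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
p_E = p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
posterior_H = p_E_given_H * prior_H / p_E
print(round(posterior_H, 3))  # 0.229, up from the 0.1 prior
```

Observing E doesn't change whether the world is simulated; it only moves the credence, exactly as with the discarded cards.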
Beliefs can cause people to do things, whether that be go to war or build expensive computers. Why would the fact that some people believe in Salafi Jihadism and want to form a caliphate under ISIS be evidentially relevant to determining the future stability of Syria and Iraq? How can their “belief” in such a thing have any evidential value?
One-boxers wishing to be in a simulation are more likely to create a large number of simulations. The existence of a large number of simulations (especially if they can nest their own simulations) makes it more likely that we are not at a "basement level" but instead are in a simulation, like the ones we create. Not because we are creating our own, but because it suggests the realistic possibility that our world was created a "level" above us. This is just about self-locating belief.

As a two-boxer, you should have no sense that people in your world creating simulations means any change in your world's current status as simulated or unsimulated. However, you should also update your own credence from "why would I possibly be in a simulation" to "there is a reason I might be in a simulation." Same as if you were currently living in Western Iraq, you should update your credence from "why should I possibly leave my house, why would it not be safe" to "right, because there are people who are inspired by belief to take actions which make it unsafe." Your knowledge about others' beliefs can provide information about certain things that they may have done or may plan to do.
Interesting. Not what I expected, but I can always be convinced by data. I wonder to which degree the religiosity plays a part—Omega is basically God, so do you try to contest His knowledge..?
Sure, but how is that relevant? There are people who want to accelerate the destruction of the world because that would bring in the Messiah faster—so what?
My issue with this phrasing is that these two (and other) types are solely the product of your imagination. We have one (1) known example of intelligent species. That is very much insufficient to start talking about “types”—one can certainly imagine them, but that has nothing to do with reality.
Which new information?
Does the fact that we construct and play video games argue for the claim that we are NPCs in a video game? Does the fact that we do bio lab experiments argue for the claim that we live in a petri dish?
You are conflating here two very important concepts, that is, “present” and “future”.
People believing in Islam are very relevant to the chances of the future caliphate. People believing in Islam are not terribly relevant to the chances that in our present we live under the watchful gaze of Allah.
Correct.
My belief is that it IS possible that we live in a simulation but it has the same status as believing it IS possible that Jesus (or Allah, etc.) is actually God. The probability is non-zero, but it’s not affecting any decisions I’m making. I still don’t see why the number of one-boxers around should cause me to update this probability to anything more significant.
By analogy, what are some things that decrease my credence that humans will survive to a "post-human stage"? For me, some are 1) We seem terrible at coordination problems at a policy level, 2) We are not terribly cautious in developing new, potentially dangerous, technology, 3) some people are actively trying to end the world for religious/ideological reasons. So as I learn more about ISIS and its ideology and how it is becoming increasingly popular, since they are literally trying to end the world, it further decreases my credence that we will make it to a post-human stage. I am not saying that my learning information about them is actually changing the odds, just that it is giving me more information with which to make my knowledge of the already-existing world more accurate. It's Bayesianism.
For another analogy, my credence for the idea that “NYC will be hit by a dirty bomb in the next 20 years” was pretty low until I read about the ideology and methods of radical Islam and the poor containment of nuclear material in the former Soviet Union. My reading about these people’s ideas did not change anything, however, their ideas are causally relevant, and my knowledge of this factor increase my credence of that as a possibility.
For one final analogy, if there is a stack of well-shuffled playing cards in front of me, what is my credence that the bottom card is the queen of hearts? 1/52. Now let's say I flip the top two cards, and they are a 5 and a king. What is my credence now that the bottom card is the queen of hearts? 1/50. Now let's say I go through the next 25 cards and none of them is the queen of hearts. What is my credence now that the bottom card is the queen of hearts? 1 in 25. The card at the bottom has not changed. The reality is in place. All I am doing is gaining information which helps me get a sense of location.

I do want to clarify, though, that I am reasoning with you as a two-boxer. I think one-boxers might view specific instances like this differently. Again, I am agnostic on who is correct for these purposes.
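The card arithmetic is easy to check mechanically; the numbers are just the ones from the example above:

```python
from fractions import Fraction

def credence_bottom_is_qh(cards_flipped: int) -> Fraction:
    """Credence that the bottom card is the queen of hearts after
    flipping `cards_flipped` cards from the top, none of them the QH:
    the bottom card is one of the 52 - cards_flipped unseen cards."""
    return Fraction(1, 52 - cards_flipped)

print(credence_bottom_is_qh(0))   # 1/52 before any card is flipped
print(credence_bottom_is_qh(2))   # 1/50 after the 5 and the king
print(credence_bottom_is_qh(27))  # 1/25 after 25 more non-QH cards
```

Nothing about the bottom card changes between those three lines; only the count of live possibilities does.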
Now to bring it back to the point, what are some obstacles to your credence to thinking you are in a simulation? For me, the easy ones that come to mind are: 1) I do not know if it is physically possible, 2) I am skeptical that we will survive long enough to get the technology, 3) I do not know why people would bother making simulations.
One and two are unchanged by the one-box/Calvinism thing, but when we realize both that there are a lot of one-boxers, and that these one-boxers, when faced with an analogous decision, would almost certainly want to create simulations with pleasant afterlives, then I suddenly have some sense of why #3 might not be an obstacle.
I think you are reading something into what I said that was not meant. That said, I am still not sure what that was. I can say the exact thing in different language if it helps. “If some humans want to make simulations of humans, it is possible we are in a simulation made by humans. If humans do not want to make simulations of humans, there is no chance that we are in a simulation made by humans.” That was the full extent of what I was saying, with nothing else implied about other species or anything else.
Second point first. How could we be in a petri dish? How could we be NPCs in a video game? How would that fit with other observations and existing knowledge? My current credence is near zero, but I am open to new information. Hit me.
Now the first point. The new information is something like: “When we use what we know about human nature, we have reason to believe that people might make simulations. In particular, the existence of one-boxers who are happy to ignore our ‘common sense’ notions of causality, for whatever reason, and the existence of people who want an afterlife, when combined, suggests that there might be a large minority of people who will ‘act out’ creating simulations in the hope that they are in one.” A LW user sent me a message directing me to this post, which might help you understand my point: http://lesswrong.com/r/discussion/lw/l18/simulation_argument_meets_decision_theory/
The weird thing about trying to determine good self-locating beliefs when looking at the question of simulations is that you do not get the benefit of self-locating in time like that. We are talking about simulations of worlds/civilizations as they grow and develop into technological maturity. This is why Bostrom called them “ancestor simulations” in the original article (which you might read if you haven’t, it is only 12 pages, and if Bostrom is Newton, I am like a 7th grader half-assing an essay due tomorrow after reading the Wikipedia page.)
As for people believing in Allah making it more likely that he exists, I fully agree that that is nonsense. The difference here is that part of the belief in “Am I in a simulation made by people” relies CAUSALLY on whether or not people would ever make simulations. If they would not, the chance is zero. If they would, whether or not they should, the chance is something higher.
For an analogy again, imagine I am trying to determine my credence that the (uncontacted) Sentinelese people engage in cannibalism. I do not know anything about them specifically, but my credence is going to be something much higher than zero because I am aware that lots of human civilizations practice cannibalism. I have some relevant evidence about human nature and decision making that allows other knowledge of how people act to put some bounds on my credence about this group.

Now imagine I am trying to determine my credence that the Sentinelese engage in widespread coprophagia. Again, I do not know anything about them. However, I do know that no other human society has ever been recorded doing this. I can use this information about other peoples' behavior and thought processes to adjust my credence about the Sentinelese. In this case, it gives me near certainty that they do not.
If we know that a bunch of people have beliefs that will lead to them trying to create “ancestor” simulations of humans, then we have more reason to think that a different set of humans have done this already, and we are in one of the simulations.
Do you still not think this after reading this post? Please let me know. I either need to work on communicating this a different way or try to pin down where this is wrong and what I am missing….
Also, thank you for all of the time you have put into this. I sincerely appreciate the feedback. I also appreciate why and how this has been frustrating, re: “cult,” and hope I have been able to mitigate the unpleasantness of this at least a bit.
Why do you talk in terms of credence? In Bayesianism your belief of how likely something is is just a probability, so we’re talking about probabilities, right?
Sure, OK.
Aren’t you doing some rather severe privileging of the hypothesis?
The world has all kinds of people. Some want to destroy the world (and that should increase my credence that the world will get destroyed); some want electronic heavens (and that should increase my credence that there will be simulated heavens); some want break out of the circle of samsara (and that should increase my credence that any death will be truly final); some want a lot of beer (and that should increase my credence that the future will be full of SuperExtraSpecialBudLight), etc. etc. And as the Egan’s Law says, “It all adds up to normality”.
I think you’re being very Christianity-centric and Christians are only what, about a third of the world’s population? I still don’t know why people would create imprecise simulations of those who lived and died long ago.
Locate this statement on a timeline. Let’s go back a couple of hundred years: do humans want to make simulations of humans? No, they don’t.
Things change and eternal truths are rare. Future is uncertain and judgements of what people of far future might want to do or not to do are not reliable.
Easily enough. You assume—for no good reason known to me—that a simulation must mimic the real world to the best of its ability. I don't see why this should be so. A petri dish, in a way, is a controlled simulation of, say, the growth and competition between different strains of bacteria (or yeast, or mold, etc.). Imagine an advanced (post-human or, say, alien) civilization doing historical research through simulations, running A/B tests on 21st-century human history. If we change X, will history go in the Y direction? Let's see. That's a petri dish—or a video game, take your pick.
That’s not a comforting thought. From what I know about human nature, people will want to make simulations where the simulation-makers are Gods.
And since I two-box, I still say that they can “act out” anything they want, it’s not going to change their circumstances.
Nope, not would ever make, but have ever made. The past and the future are still different. If you think you can reverse the time arrow, well, say so explicitly.
Yes, you have many examples known to you, so you can estimate the probability that one more, unknown to you, has or does not have certain features. But...
...you can’t do this here. You know only a single (though diverse) set of humans. There is nothing to derive probabilities from. And if you want to use narrow sub-populations, well, we’re back to privileging the hypothesis again. Lots of humans believe and intend a lot of different things. Why pick this one?
Yep, still. If what the large number of people around believe affected me this much, I would be communing with my best friend Jesus instead :-P
Hasn’t been frustrating at all. I like intellectual exercises in twisting, untwisting, bending, folding, etc.. :-) I don’t find this conversation unpleasant.
Nah, it’s not you who is Exhibit A here :-/
Not quite. In the sim case, we along with our world exist as multiple copies—one original along with some number of sims. It’s really important to make this distinction, it totally changes the relevant decision theory.
No—because we exist as a set of copies which always takes the same actions. If we (in the future) create simulations of our past selves, then we are already today (also) those simulations.
Whether it’s not quite or yes quite depends on whether one accepts your idea of identity as relative, fuzzy, and smeared out over a lot of copies. I don’t.
Do you state this as a fact?
Actually the sim argument doesn’t depend on fuzzy smeared out identity. The copy issue is orthogonal and it arises in any type of multiverse.
It is given in the sim scenario. I said this in reply to your statement “there’s nothing we do can make it simulated”.
The statement is incorrect because we are uncertain about our true existential state. And moreover, we have the power to change that state. The first original version of ourselves can create many other copies.
If the identity isn’t smeared then our world—our specific world—is either simulated or not.
Uncertainty doesn’t grant the power to change the status from not-simulated to simulated.
Sure. But we don’t know which copy we are, and all copies make the same decisions.
Each individual copy is either simulated or not, and nothing each individual copy does can change that—true. However, all of the copies output the same decisions, and each copy cannot determine its true existential status.
So the uncertainty is critically important, because the distribution itself can be manipulated by producing more copies. By creating simulations in the future, you alter the distribution, adding sim copies and making it more likely that one has been a sim the whole time.
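The "manipulating the distribution" claim is just self-sampling arithmetic: with one original and n indistinguishable sim copies, a uniformly self-sampling copy is a sim with probability n / (n + 1). The copy counts below are arbitrary illustrations, not claims about what would actually be built.

```python
# Arbitrary illustrative copy counts; the point is the monotone trend.
def p_being_sim(n_sim_copies: int, n_originals: int = 1) -> float:
    # Uniform self-sampling over all indistinguishable copies
    return n_sim_copies / (n_sim_copies + n_originals)

for n in (0, 1, 10, 1_000_000):
    print(n, p_being_sim(n))
# Each additional sim copy pushes the probability closer to 1.
```

No individual copy's status ever changes; what changes is how many of the copies that "you" might be are sims.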
Draw out the graph and perhaps it will make more sense.
It doesn’t actually violate physical causality—the acausality is only relative, an (intentional) illusion due to lack of knowledge.
All copies might make the same decisions, but the originals make different decisions.
Remember how upthread you talked about copies being relative and imperfect images of the originals? This means that the set of copies and the singleton of originals are different.
As individual variants they may have slight differences (less so for more advanced sims constructed later), but that doesn’t matter.
The ‘decision’ we are talking about here is an abstract high level decision or belief concerning whether one will support the construction of historical sims (financially, politically, etc). The numerous versions of a person might occasionally make different decisions here and there for exactly what word to use or what not, but they will (necessarily by design) agree on major life decisions.
I never said “imperfect images”—that’s your beef.
Let me quote you:
Given all this I can’t see how you insist that copies make the same decisions as originals. In fact, in your quote you even have different copies making different decisions (“multiple versions”).
Knowledge of which decisions we actually make is information which we can update our worldviews on.
Acausal reasoning seems weird, but it works in practice and dominates classical causal reasoning.
What do you mean, “works in practice”?