Just to make sure I understand, let me restate your scenario: there’s a world (“Meta-1 Earth”) which contains a simulation (“Sim-Earth”), and I get to choose whether to destroy Sim-Earth or not. If I refuse, there’s a 50% chance of both Sim-Earth and Meta-1 Earth being destroyed. Right?
So, the consequentialist thing to do is compare the value of Sim-Earth (V1) to the value of Meta-1 Earth (V2), and destroy Sim-Earth iff V2/2 > V1.
You haven’t said much about Meta-1 Earth, but just to pick an easily calculated hypothetical, if Omega further informs me that there are ten other copies of World sim version 7.00.1.5 build 11/11/11 running on machines in Meta-1 Earth (not identical to Sim-Earth, because there’s some randomness built into the sim, but roughly equivalent), I would conclude that destroying Sim-Earth is the right thing to do if everything is as Omega has represented it.
I might not actually do that, in the same way that I might not kill myself to save ten other people, or even give up my morning latte to save ten other people, but that’s a different question.
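As a minimal illustration of that comparison (a sketch only: the function name, the values of V1 and V2, and the single doom probability are my assumptions, not details given in the scenario, and the model simply assumes refusal risks both worlds at once), the expected-value bookkeeping looks something like this:

```python
# Illustrative sketch of the consequentialist comparison above. V1, V2, and the
# doom probability are placeholders, not values specified in the scenario; the
# model assumes refusal risks both worlds with a single probability p.

def should_destroy_sim(v1: float, v2: float, p_doom_if_refuse: float) -> bool:
    """Return True if destroying Sim-Earth has higher expected value than refusing."""
    ev_destroy = v2                                  # Sim-Earth (V1) is lost; Meta-1 Earth survives for sure
    ev_refuse = (1 - p_doom_if_refuse) * (v1 + v2)   # both survive only if the doom chance misses
    return ev_destroy > ev_refuse

# The "ten roughly equivalent copies" hypothetical: Meta-1 Earth is worth at
# least ten Sim-Earths, so destroying the sim comes out ahead.
print(should_destroy_sim(v1=1.0, v2=10.0, p_doom_if_refuse=0.5))   # True
print(should_destroy_sim(v1=1.0, v2=0.4, p_doom_if_refuse=0.5))    # False: here the sim outweighs the expected gain
```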
Subtle distinctions. We have no knowledge about Meta-1 Earth. We only have the types of highly persuasive but technically circumstantial evidence provided; Omega exists in this scenario and is known by name, but he is silent on the question of whether the inscription on the massive solid gold tablet is truthful. The doomsday button is known to be real.
What would evidence regarding the existence of M1E look like?
(Also: 4/6 chance of a 3 or higher. I don’t think the exact odds are critical.)
Well, if there are grounds for confidence that the button destroys the world, but no grounds for confidence in anything about the Meta-1 Earth stuff, then a sensible decision theory chooses not to press the button.
(Oh, right. I can do basic mathematics, honest! I just can’t read. :-( )
What would evidence for or against being in a simulation look like?
I’m really puzzled by this question.
You started out by saying:
“Suppose that literally everything I observe is a barely imperfect simulation made by IBM, as evidenced by the observation that a particular particle interaction leaves traces which reliably read “World sim version 7.00.1.5 build 11/11/11 Copyright IBM, special thanks JKR” instead of the expected particle traces. Also, invoking certain words and gestures allows people with a certain genetic expression to break various physical laws.”
I was content to accept that supposition, not so much because I think I would necessarily be convinced of it by experiencing that, as because it seems plausible enough for a thought experiment and I didn’t want to fight the hypothetical.
But now it sounds like you’ve changed the question completely? Or am I deeply confused? In any case, I’ve lost the thread of whatever point you’re making.
Anyway, to answer your question, I’m not sure what would be compelling evidence for or against being in a simulation per se. For example, I can imagine discovering that physical constants encode a complex message under a plausible reading frame, and “I’m in a simulation” is one of the theories which accounts for that, but not the only one. I’m not sure how I would disambiguate “I’m in a simulation” from “there exists an intelligent entity with the power to edit physical constants” from “there exists an intelligent entity with the power to edit the reported results of measurements of physical constants.” Mostly, I would have to accept I was confused and start rethinking everything I used to believe about the universe.
Here’s a better way of looking at the problem: Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?
Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?
How can moral imperatives point towards things which are existence-agnostic?
Is it possible to run a simulation which is both indistinguishable from reality (from within the simulation) and such that something which develops within the simulation will realize that it is in a simulation?
We may need to further define “realize.” Supposing that it is possible to run a simulation which is indistinguishable from reality in the first place, it’s certainly possible for something which develops within the simulation to believe it is in a simulation, just like it’s possible for people in reality to do so.
Is it possible, purely from within a simulation, for a resident to differentiate the simulation from reality, regardless of the quality of the simulation?
Within a simulation that is indistinguishable from reality, it is of course not possible for a resident to distinguish the simulation from reality.
How can moral imperatives point towards things which are existence-agnostic?
I have no idea what this question means. Can you give me some examples of proposed moral imperatives that are existence-agnostic?
A moral imperative which references something which may or may not be exemplified; it doesn’t change if that which it references does not exist.
“Maximize the density of the æther.” is such an imperative.
“Include God when maximizing total utility.” is the version I think you are using (with ‘God’ being the creator of the simulation; I think that the use of the religious referent is appropriate because they have the same properties.)
So, if I’m understanding you: when my father was alive, I endorsed “Don’t kill your father.” When he died I continued to endorse it just as I had before. That makes “Don’t kill your father” a moral imperative which points towards something existence-agnostic, on your account… yes?
I have no idea what you’re on about by bringing God into this.
No, because fathers exist.
“Maximize the amount of gyration and gimbling of slithy toves” would be a better example.
I’m using God as a shorthand for the people running the simulation. I’m not introducing anything from religion but the name for something with that power.
OK; thanks for the clarification.
I don’t think a moral imperative can meaningfully include a meaningless term. I do think a moral imperative can meaningfully include a meaningful term whose referent doesn’t currently exist in the world.
Also, it can be meaningful to make a moral assertion that depends on an epistemically unreachable state. For example, if I believe (for whatever reason) that I’ve been poisoned and that the pill in my hand contains an antidote, but in fact I haven’t been poisoned and the pill is poison, taking the pill is in fact the wrong thing to do, even though I can’t know that.
I prefer to have a knowable morality; I must make decisions without complete information about the world, only with my beliefs.
For example, it is wrong to pull the trigger of a gun aimed at an innocent person without knowing if it is loaded. The expected outcome is what matters, not the actual outcome.
I prefer to have a knowable morality; I must make decisions without complete information about the world, only with my beliefs.
Well, I certainly agree that we make decisions based on our beliefs (I would also say that our beliefs are, or at least can be, based on information about the world, but I understand you here to be saying that we must make decisions without perfect information about the world, which I agree with).
That said, I think you are conflating morality and decision procedures, which elides an important distinction.
For example, if at time T1 the preponderance of the evidence I have indicates the pill is an antidote, and at some later time T2 the preponderance of the evidence indicates that the pill is poison, a sensible decision theory says (at T1) to take the pill and (at T2) not to take the pill.
But to say that taking the pill is morally right at T1 and not-taking the pill is morally right at T2 seems no more justified to me than to say that the pill really is an antidote at T1 and is poison at T2. That just isn’t the case, and a morality or an ontology that says it is the case is simply mistaken. The pill is always poison, and taking the pill is therefore the wrong thing to do, whether I know it or not.
I guess you could say that I prefer my morality, like my ontology, to be consistent rather than knowable.
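The distinction being drawn here can be made concrete with a small sketch (my own framing, with invented probabilities; the function names are illustrative, not anyone’s proposed formalism): the decision-theoretic verdict is a function of the agent’s evidence at a given time, while the moral verdict, on this view, is a function of the actual facts and so does not change between T1 and T2.

```python
# Sketch of the pill example: what a sensible decision theory recommends tracks
# the agent's current evidence, while the moral status of the act (on the view
# argued above) tracks the actual, possibly unknowable, facts.
# The probabilities below are invented for illustration.

def decision_verdict(p_antidote: float) -> str:
    """What expected-value reasoning recommends, given current evidence."""
    return "take the pill" if p_antidote > 0.5 else "refuse the pill"

def moral_verdict(pill_is_antidote: bool, actually_poisoned: bool) -> str:
    """Which act is actually right, given the facts of the world."""
    return "take the pill" if (pill_is_antidote and actually_poisoned) else "refuse the pill"

print(decision_verdict(p_antidote=0.9))   # at T1 the evidence favors the antidote: "take the pill"
print(decision_verdict(p_antidote=0.1))   # at T2 the evidence favors poison: "refuse the pill"
print(moral_verdict(pill_is_antidote=False, actually_poisoned=False))   # unchanging: "refuse the pill"
```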
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Also, thinking about this some more:
Suppose I have two buttons, one red and one green. I know that one of those buttons (call it “G”) creates high positive utility and the other (“B”) creates high negative utility. I don’t know whether G is red and B green, or the other way around.
On your account, if I understand you correctly, to say “pressing G is the right thing to do” is meaningless, because I can’t know which button is G. Pressing G, pressing B, and pressing neither are equally good acts on your account, even though one of them creates high positive utility and the other creates high negative utility. Is that right?
On my account, I would say that the choice between red and green is a question of decision theory, and the choice between G and B is a question of morality. Pressing G is the right thing to do, but I don’t know how to do it.
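A small sketch with invented payoffs (the ±1000 utilities and the uniform prior are assumptions for illustration) shows why, under total ignorance, expected-value reasoning is indifferent among the acts even though their actual consequences differ enormously:

```python
# Two-button sketch (payoffs invented): with zero information about which button
# is G, pressing red, pressing green, and pressing neither all have the same
# expected utility, yet the actual utilities of pressing G and pressing B differ.

UTILITY_G = +1000.0   # assumed payoff of the good button
UTILITY_B = -1000.0   # assumed payoff of the bad button
p_red_is_G = 0.5      # uniform prior: no relevant information

ev_press_red = p_red_is_G * UTILITY_G + (1 - p_red_is_G) * UTILITY_B     # 0.0
ev_press_green = (1 - p_red_is_G) * UTILITY_G + p_red_is_G * UTILITY_B   # 0.0
ev_press_neither = 0.0

# Decision theory sees three equally good acts; the question "is pressing G the
# right thing to do?" is about the actual payoffs, not about these expectations.
print(ev_press_red, ev_press_green, ev_press_neither)
```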
‘Pressing a button’ is one act, and ‘pressing both buttons’ and ‘pressing neither button’ are two others. If you press a button randomly, it isn’t morally relevant which random choice you made.
What does it mean to choose between G and B, when you have zero relevant information?
(shrug) It means that I do something that either causes G to be pressed, or causes B to be pressed. It means that the future I experience goes one way or another as a consequence of my act.
I have trouble believing that this is unclear; I feel at this point that you’re asking rhetorical questions by way of expressing your incredulity rather than genuinely trying to extract new knowledge. Either way, I think we’ve gotten as far as we’re going to get here; we’re just going in circles.
I prefer a moral system in which the moral value of an act relative to a set of values is consistent over time, and I accept that this means it’s possible for there to be a right thing to do even when I don’t happen to have any way of knowing what the right thing to do is… that it’s possible to do something wrong out of ignorance. I understand you reject such a system, and that’s fine; I’m not trying to convince you to adopt it.
I’m not sure there’s anything more for us to say on the subject.
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
Well, it’s not nonsense, but it’s imprecise.
One thing that can mean is that the action had a net positive result globally, but negative results in various local frames. I assume that’s not what you mean here, though; you mean it had a bad outcome overall.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value given their beliefs, but that led to doing the wrong thing because those beliefs about the world were incorrect and caused them to miscalculate the expected value. I assume that’s what you mean here.
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Again, the language is ambiguous:
Moral “should”—yes, I should assist iff my assistance will be successful (assuming that saving the person’s life is a good thing).
Decision-theory “should”—I should assist if the expected value of my assistance is sufficiently high.
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
Assuming that winning the bet is a morally good outcome, betting irresponsibly was the morally right thing to do, though I could not have known that; it was therefore an incorrect decision to make with the data I had.
Is it immoral to refuse an irresponsible bet that would have paid off?
Same reasoning.
All right.
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value given their beliefs, but that led to doing the wrong thing because those beliefs about the world were incorrect and caused them to miscalculate the expected value. I assume that’s what you mean here.
Again, not quite. It’s possible for someone to accurately determine the expected results of a decision, and for the actual results still to vary significantly from the expected ones. Take a typical parimutuel gambling-for-cash scenario; the expected outcome is that the house gets a little richer and all of the gamblers get a little poorer. That outcome literally never happens, according to the rules of the game.
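A worked example of the parimutuel point, with invented numbers (four equal bets, a 15% house take, equally likely horses): every bettor’s expected return is negative, yet the realized outcome always pays one winner well above their stake, so the “expected outcome” never actually occurs.

```python
# Parimutuel sketch with invented numbers: the expected outcome (everyone a
# little poorer, the house a little richer) never actually occurs; the realized
# outcome is always one winner well ahead and the rest out their full stake.

stakes = {"A": 100, "B": 100, "C": 100, "D": 100}   # four equal bets on four different horses
house_take = 0.15
pool = sum(stakes.values())                          # 400
payout = pool * (1 - house_take)                     # 340 goes to the winning bettor

p_win = 1 / len(stakes)                              # assume each horse is equally likely to win
ev_per_bettor = p_win * payout - 100                 # 0.25 * 340 - 100 = -15

print(ev_per_bettor)    # expected result: each bettor down 15
print(payout - 100)     # actual result for whoever wins: up 240; the others are down their full 100
```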
I agree, but this seems entirely tangential to the points either of us were making.