A moral imperative that references something which may or may not be exemplified; the imperative doesn’t change if the thing it references does not exist.
“Maximize the density of the æther” is such an imperative.
“Include God when maximizing total utility” is the version I think you are using (with ‘God’ being the creator of the simulation; I think the use of the religious referent is appropriate because they have the same properties).
So, if I’m understanding you: when my father was alive, I endorsed “Don’t kill your father.” When he died I continued to endorse it just as I had before. That makes “Don’t kill your father” a moral imperative which points towards something existence-agnostic, on your account… yes?
I have no idea what you’re on about by bringing God into this.
No, because fathers exist.
“Maximize the amount of gyration and gimbling of slithy toves” would be a better example.
I’m using God as a shorthand for the people running the simulation. I’m not introducing anything from religion but the name for something with that power.
OK; thanks for the clarification.
I don’t think a moral imperative can meaningfully include a meaningless term. I do think a moral imperative can meaningfully include a meaningful term whose referent doesn’t currently exist in the world.
Also, it can be meaningful to make a moral assertion that depends on an epistemically unreachable state. For example, if I believe (for whatever reason) that I’ve been poisoned and that the pill in my hand contains an antidote, but in fact I haven’t been poisoned and the pill is poison, taking the pill is in fact the wrong thing to do, even though I can’t know that.
I prefer to have a knowable morality: I must make decisions not with information about the world, but only with my beliefs.
For example, it is wrong to pull the trigger of a gun aimed at an innocent person without knowing if it is loaded. The expected outcome is what matters, not the actual outcome.
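As a minimal sketch of the expected-outcome claim above, with a made-up probability and made-up utility numbers (nothing here is taken from the discussion itself):

```python
# Hypothetical numbers: a 50% chance the gun is loaded, a large negative
# utility if an innocent person is shot, zero utility otherwise.
p_loaded = 0.5
u_shot = -1_000_000      # outcome if the gun turns out to be loaded
u_nothing = 0            # outcome if the chamber turns out to be empty

# Expected utility of pulling the trigger, computed from beliefs alone.
eu_pull = p_loaded * u_shot + (1 - p_loaded) * u_nothing
eu_refrain = 0

print(eu_pull, eu_refrain)   # -500000.0 vs 0: pulling loses in expectation,
                             # whatever the actual state of the gun turns out to be
```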
I prefer to have a knowable morality: I must make decisions not with information about the world, but only with my beliefs.
Well, I certainly agree that we make decisions based on our beliefs (I would also say that our beliefs are, or at least can be, based on information about the world, but I understand you here to be saying that we must make decisions without perfect information about the world, which I agree with).
That said, I think you are conflating morality and decision procedures, and that this elides an important distinction.
For example, if at time T1 the preponderance of the evidence I have indicates the pill is an antidote, and at some later time T2 the preponderance of the evidence indicates that the pill is poison, a sensible decision theory says (at T1) to take the pill and (at T2) not to take the pill.
But to say that taking the pill is morally right at T1 and not-taking the pill is morally right at T2 seems no more justified to me than to say that the pill really is an antidote at T1 and is poison at T2. That just isn’t the case, and a morality or an ontology that says it is the case is simply mistaken. The pill is always poison, and taking the pill is therefore the wrong thing to do, whether I know it or not.
I guess you could say that I prefer my morality, like my ontology, to be consistent rather than knowable.
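A small sketch of the distinction being drawn here, under the stipulations of the pill example (the probabilities and the simple preponderance-of-evidence rule are assumptions made for illustration): the decision-theoretic recommendation tracks the evidence and so flips between T1 and T2, while the outcome-based moral verdict tracks the fixed fact that the pill is poison.

```python
# Stipulated fact of the world, unknown to the agent: the pill is poison.
PILL_IS_POISON = True

def decision_verdict(p_antidote: float) -> str:
    """Decision-theory 'should': act on the preponderance of the evidence."""
    return "take the pill" if p_antidote > 0.5 else "don't take the pill"

def moral_verdict_on_taking() -> str:
    """Outcome-based moral verdict: taking the pill is wrong iff it is in fact poison."""
    return "wrong" if PILL_IS_POISON else "right"

# At T1 the evidence favours 'antidote'; at T2 it favours 'poison'.
print("T1:", decision_verdict(0.9), "| taking the pill is", moral_verdict_on_taking())
print("T2:", decision_verdict(0.1), "| taking the pill is", moral_verdict_on_taking())
# The recommended decision flips with the evidence; the moral verdict does not.
```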
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
Also, thinking about this some more:
Suppose I have two buttons, one red and one green. I know that one of those buttons (call it “G”) creates high positive utility and the other (“B”) creates high negative utility. I don’t know whether G is red and B green, or the other way around.
On your account, if I understand you correctly, to say “pressing G is the right thing to do” is meaningless, because I can’t know which button is G. Pressing G, pressing B, and pressing neither are equally good acts on your account, even though one of them creates high positive utility and the other creates high negative utility. Is that right?
On my account, I would say that the choice between red and green is a question of decision theory, and the choice between G and B is a question of morality. Pressing G is the right thing to do, but I don’t know how to do it.
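A minimal sketch of how the red/green choice and the G/B choice come apart on this account, assuming a 50/50 prior over which colour is G and made-up utility values:

```python
# Made-up utilities: G creates high positive utility, B high negative utility.
U_G, U_B = 100, -100
p_red_is_G = 0.5          # the agent has no information about the assignment

# Decision theory: expected utility of each available act, given that ignorance.
eu_press_red   = p_red_is_G * U_G + (1 - p_red_is_G) * U_B    # 0.0
eu_press_green = (1 - p_red_is_G) * U_G + p_red_is_G * U_B    # 0.0
eu_press_none  = 0.0
# All three acts are tied in expectation, so decision theory is indifferent.

# Morality, on the outcome-based account, is evaluated against the actual assignment.
red_is_G = True           # whichever way the world in fact is
value_of_pressing_red   = U_G if red_is_G else U_B
value_of_pressing_green = U_B if red_is_G else U_G
# Exactly one of the presses is the right thing to do, even though the agent
# has no way of knowing which one it is.
print(eu_press_red, eu_press_green, eu_press_none,
      value_of_pressing_red, value_of_pressing_green)
```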
‘Pressing a button’ is one act, and ‘pressing both buttons’ and ‘pressing neither button’ are two others. If you press a button randomly, it isn’t morally relevant which random choice you made.
What does it mean to choose between G and B, when you have zero relevant information?
(shrug) It means that I do something that either causes G to be pressed, or causes B to be pressed. It means that the future I experience goes one way or another as a consequence of my act.
I have trouble believing that this is unclear; I feel at this point that you’re asking rhetorical questions by way of trying to express your incredulity rather than to genuinely extract new knowledge. Either way, I think we’ve gotten as far as we’re going to get here; we’re just going in circles.
I prefer a moral system in which the moral value of an act relative to a set of values is consistent over time, and I accept that this means it’s possible for there to be a right thing to do even when I don’t happen to have any way of knowing what the right thing to do is… that it’s possible to do something wrong out of ignorance. I understand you reject such a system, and that’s fine; I’m not trying to convince you to adopt it.
I’m not sure there’s anything more for us to say on the subject.
So then it is nonsense to claim that someone did the right thing, but had a bad outcome?
Well, it’s not nonsense, but it’s imprecise.
One thing that can mean is that the action had a net positive result globally, but negative results in various local frames. I assume that’s not what you mean here, though; you mean it had a bad outcome overall.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that’s what you mean here.
If you see someone drowning and are in a position where you can safely do nothing or risk becoming another victim by assisting, you should assist iff your assistance will be successful, right?
Again, the language is ambiguous:
Moral “should”—yes, I should assist iff my assistance will be successful (assuming that saving the person’s life is a good thing).
Decision-theory “should”—I should assist if the expected value of my assistance is sufficiently high.
Is it moral to bet irresponsibly if you win? Is it immoral to refuse an irresponsible bet that would have paid off?
Assuming that winning the bet is a good thing, betting irresponsibly was the morally right thing to do, though I could not have known that, and it was therefore an incorrect decision to make given the data I had.
Is it immoral to refuse an irresponsible bet that would have paid off?
Same reasoning.
I can’t see the practical use of a system where the morality of a choice is very often unknowable.
All right.
Another thing that can mean is that someone decided correctly, because they did the thing that had the highest expected value, but that led to doing the wrong thing because their beliefs about the world were incorrect and led them to miscalculate expected value. I assume that’s what you mean here.
Again, not quite. It’s possible for someone to accurately determine the expected results of a decision and for the actual results to vary significantly from what was expected. Take a typical parimutuel gambling-for-cash scenario: the expected outcome is typically that the house gets a little richer and all of the gamblers get a little poorer. That outcome literally never happens, according to the rules of the game.
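A toy illustration of the parimutuel point, with an invented pool size and house take: every bettor’s expected change in wealth is negative, yet no outcome the game actually permits leaves every bettor slightly poorer.

```python
# Invented setup: ten bettors each stake 10 units on different outcomes; the
# house takes 10% of the pool and the single winning bettor gets the rest.
stake, n_bettors, house_take = 10, 10, 0.10
pool = stake * n_bettors
payout = pool * (1 - house_take)          # 90 units to the winner

# Expected change in wealth per bettor, if each is equally likely to win.
expected_change = (payout - stake) / n_bettors + (1 - 1 / n_bettors) * (-stake)

# Actual outcomes the game allows: one bettor ends up +80, nine end up -10.
actual_changes = [payout - stake] + [-stake] * (n_bettors - 1)

print(expected_change)    # -1.0: each bettor expects to end up a little poorer
print(actual_changes)     # but that evenly-spread outcome never actually occurs
```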
I agree, but this seems entirely tangential to the points either of us were making.