We are looking at very long time scales here, so how wide should our scope be? A very wide scope already raises issues, but widening it still further might raise even more. Suppose the extent of reality were unlimited, and the scope of effect of an individual action were unlimited too, so that anything you do affects something, which affects something else, which affects something else, and so on, without limit. This doesn’t necessarily require infinite time: we might imagine various cosmologies in which the scope could be widened in other ways. Where would that leave the ethical value of any action we take?
I will give an analogy, which we can call “Almond’s Puppies”. (That’s a terrible name, really, but it is too late now.)
Suppose we are standing at the end of two lines of boxes. Each line continues without end, and each box contains a puppy, so each line contains an infinity of puppies. You can choose to press one button to blow up the first box or another button to spare it. After you press a button, some mechanism that you can’t predict will decide whether to blow up the second box or spare it, based on your decision, then decide whether to blow up the third box or spare it, based on your decision, and so on. So you press a button, and either the first box is blown up or spared, and then boxes keep getting blown up or spared right along the line, with no end to it.
You have to press a button to start one line off. You choose to press the button to spare the first puppy. Someone else, starting the other line, chooses to press the button to blow up the first puppy. The issue now is: Did the other person do a bad thing? If so, why? Did he kill more puppies than you? Does the fact that you were nicer to the nearby puppy matter? Does it matter that the wave of puppy explosions will take time to progress along the line of boxes, so that at any instant only a finite number of puppies will have been blown up, even though there is no end to it in the future?
If we are looking at distant future scenarios, we might ask if we are sure that reality is limited.
I don’t understand your Puppies question. When you say:
“You can choose to press one button to blow up the first box or another button to spare it. After you press a button, some mechanism that you can’t predict will decide whether to blow up the second box or spare it, based on your decision, then decide whether to blow up the third box or spare it, based on your decision, and so on.”
… what do you mean by “based on your decision”? They decide the same as you did? The opposite? There’s a relationship to your decision, but you don’t know which one?
I am really quite confused, and don’t see what moral dilemma there is supposed to be beyond “should I kill a puppy or not?”, which on the grand scale of things isn’t a very hard moral dilemma :P
“There’s a relationship to your decision, but you don’t know which one.” That’s right: you won’t see all the puppies being spared or all of them being blown up. You will see some of the puppies being spared and some of them being blown up, with no obvious pattern. However, you know that your decision ultimately caused whatever sequence of sparing and blowing up the machine produced.
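To make the mechanism concrete, here is a minimal sketch in Python of one way such a machine could behave. This is purely my own illustration (the thought experiment doesn’t say how the machine decides, and the hashing trick is just a stand-in for “fixed but opaque”): box 1’s fate is exactly your button press, and every later box’s fate is a fixed but unpredictable-looking function of that same press, so the sequence shows no pattern even though your single decision determines all of it.

    # A hypothetical machine for the "Almond's Puppies" setup: deterministic
    # given your decision, but with no pattern you could spot by inspection.
    import hashlib

    def box_fate(your_decision: str, box_index: int) -> str:
        """Fate of box `box_index` in one line, fully determined by your decision."""
        if box_index == 1:
            # The first box is the one you control directly.
            return "spared" if your_decision == "spare" else "blown up"
        # Later boxes: an opaque but fixed function of your decision (illustrative only).
        digest = hashlib.sha256(f"{your_decision}:{box_index}".encode()).digest()
        return "spared" if digest[0] % 2 == 0 else "blown up"

    # You press one button on your line; someone else presses the other on theirs.
    for decision in ("spare", "blow up"):
        print(decision, "->", [box_fate(decision, i) for i in range(1, 11)])

Run it and the two printed sequences differ in no obvious way, yet each is entirely fixed by the one button press that started it, which is the sense in which your decision ultimately causes every later fate.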