I care about possible people. My child, if I ever have one, is one of them, and it seems monstrous not to care about one’s children.
I think you may have found one of the quickest ways to tempt me into downvoting a post without reading further (it wasn’t quite successful; I did read all the way through before downvoting). Poor reasoning and a stereotypical appeal to emotion are probably not the ideal opener.
Beyond that, you never make clear what purpose the arguments that follow are supposed to serve, and you give them really confusing titles.
I’m not sure in what way argument 1 shows the multiverse to be “Sadistic”, or what position I am supposed to have held for it to be relevant to me. I guess if I cared about all hypothetical people, you might have shown that there is some subset of them I can’t affect?
I’m going to assume that by “obtains” you mean “occurs”. With that in mind, I still have trouble seeing how this is relevant to anything. I guess you take “X-Risk” as an example of a generally accepted bad thing, and any bad thing would work? As far as I can tell, this line of reasoning doesn’t actually lead to paralysis: if you can’t affect the non-actual worlds, you can obviously make all your decisions while safely disregarding them. On the other hand, if you think there is some way to “break out” and affect all the other worlds, you may be motivated to attempt it at nearly any cost, but I don’t see that as problematic (assuming you have “solved” Pascal’s Mugging). Also, while I haven’t read Bostrom’s paper, I’m pretty sure infinity-related paralyses hardly ever occur if you just use surreal numbers (for example).
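To illustrate the surreal-number point (a toy example of my own, not anything from Bostrom’s paper): suppose worlds A and B each contain infinitely many happy people, but B contains exactly one more. With cardinal arithmetic both worlds get utility $\infty$, every choice between them looks equally good, and paralysis follows. In the surreals, writing $\omega$ for the simplest infinite number, the utilities stay strictly ordered:

$$\omega \;<\; \omega + 1, \qquad \omega - 1 \;<\; \omega \;<\; 2\omega,$$

so expected-utility comparisons between infinite outcomes still return definite answers.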
I basically don’t understand the third argument. You mention worlds 1, 2, and 3 without explaining what the relationship between them is. I’m also not sure what you mean by “the morally relevant thing” being “qualitative” (and possibly not sure what you mean by “quantifiable”). I’m also not sure how the conclusion differs from argument 1’s (there are things we can’t affect that we would ideally want to affect).
Short version: it’s confusing and unclear, not obviously relevant to my beliefs (which is fine, as long as it’s relevant to the beliefs of someone on here at least), and it has a terrible opener.
Very strongly seconding the first part of this.