I’ve spent enough time looking at the specific arguments for and against many of these propositions that their contents overwhelm my expertise priors in both directions. So I don’t see much value in discussing anything but the arguments themselves, given that my goal (and yours) is to figure out how much merit those arguments have.
You don’t want the outcome to be biased by the availability of the arguments, right? Really, I think you’re not accounting for the fact that the available arguments are merely samples from the space of possible arguments (each making different speculative assumptions, within a very large space of possible speculations). They are picked non-uniformly, too: arguments for one side may be more available, or producing them may maximize the present-day utility of more agents. Individual samples can’t be particularly informative in such a situation.
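A toy sketch of that sampling worry (everything below is my own illustrative model with made-up numbers, not anything from the discussion): even when the underlying argument space is perfectly balanced, counting the arguments that happen to be available recovers the availability bias, not the truth.

```python
import random

random.seed(0)

# Toy model (purely illustrative): the argument space is perfectly
# balanced, but arguments for one side are three times as likely to
# surface, whether because they're easier to find or more rewarding
# to produce.
TRUE_PRO_FRACTION = 0.5    # ground truth: the space itself is balanced
AVAILABILITY_BIAS = 3.0    # pro-side arguments surface 3x as often

def sample_available_argument() -> bool:
    """Draw one argument, weighted by availability rather than by truth."""
    pro_weight = TRUE_PRO_FRACTION * AVAILABILITY_BIAS
    con_weight = 1.0 - TRUE_PRO_FRACTION
    return random.random() < pro_weight / (pro_weight + con_weight)

observed = [sample_available_argument() for _ in range(10_000)]
print(f"true fraction of pro arguments:    {TRUE_PRO_FRACTION:.2f}")
print(f"fraction among available samples:  {sum(observed) / len(observed):.2f}")
# Prints roughly 0.75: tallying the arguments that happen to be available
# recovers the sampling process, not the underlying balance of the space.
```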
It’s at least a little worthwhile to create people with awesome lives, even if they should be weighted less than currently existing people.
The issue is that the number of people you can speculate you’re affecting grows much faster than the prior for the speculation decreases. Constant factors don’t help with that; they just push the problem a little further out.
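To make the shape of the problem concrete, here is a minimal sketch; the symbols N, K(N), and c are my own illustration, not anything claimed in the thread.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% A hypothesis claiming to affect $N$ people can be written down in about
% $K(N)$ bits, so a complexity-based prior assigns it probability
% $\approx 2^{-K(N)}$. With a constant discount $c$ on counterfactual
% people, the expected impact is roughly
\[
  \mathbb{E}[U] \;\approx\; c \cdot 2^{-K(N)} \cdot N .
\]
% For $N = 3\uparrow\uparrow\uparrow 3$, $K(N)$ stays tiny (the up-arrow
% expression is itself a short program) while $N$ is astronomically large,
% so the product explodes. Shrinking $c$ only raises the $N$ at which that
% happens; it doesn't change the growth rates being compared.
\end{document}
```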
A strict rule of ‘Counterfactual People Have Absolutely No Value’ leads to absurd conclusions; e.g., that it’s not worthwhile to create an infinite number of infinitely happy and well-off people if the cost is that your shoulder itches for a few seconds.
I don’t see that as problematic. Ponder the alternative for a moment: you may be OK with a shoulder itch, but are you OK with 10,000 years of the absolutely worst torture imaginable, for the sake of creating 3^^^3 or 3^^^^^3 or however many really happy people? What about your death versus their creation?
edit: you might also have the value of those people to yourself (as potential mates and whatnot) leaking in.