My own experience is that I independently came up with a lot of arguments from the Sequences, but didn’t take them sufficiently seriously, push them hard enough, or examine them in enough detail.
But we are speaking of credentialed people. They’re fairly driven.
Furthermore, general non-acceptance of an idea is evidence that the idea is not good. You can’t seriously list general non-acceptance of your ideas by the relevant experts as the reason why you are superior to those experts, because that same non-acceptance lowers the probability that those ideas are correct, in proportion to how much it raises how exceptional you are for holding those views. (The biggest problem with “Bayesianism” is unbalanced, selective updating.)
In particular, when it comes to the interview that he linked as giving reasons to value the future: first off, if one can support the existential-risk case for non-Pascal’s-wager-type reasons, then the enormous utility of the future should not be relevant. If it actually is a requirement, then I don’t think there’s anything to discuss here.
Secondly, the most common norm of morality (assuming we ignore things like Sharia), as specified in the laws of progressive countries, or as an extrapolation of legal progress in less progressive ones, is to value future people (we disapprove of smoking while pregnant) but not to value the counterfactual creation of future people (we allow abortion, especially when the child would be disadvantaged and not have a fair chance). Rather than inferring the prevailing morality from the law and discussing it, various bad ideas are invented and discussed to make the argument appear stronger than it really is.
It is not that I am not exposed to this worldview. I am. It is that, when choosing between A: hurt someone, but a large number of happy people will be created, and B: do not hurt someone, but a large number of happy people will not be created (with the deliberate choice having the causal impact on the hurting and the creation), A is both illegal and immoral.
general non-acceptance of an idea is evidence that the idea is not good. You can’t seriously list general non-acceptance of your ideas by the relevant experts as the reason why you are superior to those experts, because that same non-acceptance lowers the probability that those ideas are correct, in proportion to how much it raises how exceptional you are for holding those views.
When I hear that Joe has a new argument against a belief of mine, then my confidence in my belief lowers a bit, and my confidence in Joe’s competence also lowers a bit. If I then go on to actually evaluate the argument in detail and discover that it’s an extraordinarily poor one, this should generally increase my confidence to higher than it was before I heard that Joe had an argument, and it should further lower my confidence in Joe’s competence.
I’ve spent enough time looking at the specific arguments for and against many of these propositions to have the contents of those arguments overwhelm my expertise priors in both directions, such that I just don’t see a whole lot of value in discussing anything but the arguments themselves, when my goal (and yours) is to figure out the level of merit of the arguments.
if one can support the existential-risk case for non-Pascal’s-wager-type reasons, then the enormous utility of the future should not be relevant.
It sounds like you’re committing the Pascal’s Wager Fallacy Fallacy. If you aren’t, then I’m not understanding your point. Large future utilities should count more than small future utilities, and multiplying by low probabilities is fine if the probabilities aren’t vanishingly low.
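A minimal sketch of the distinction being drawn here, with made-up probabilities and utilities (none of these numbers come from the discussion): multiplying a large but finite stake by a merely small probability is ordinary expected-value arithmetic, and the worry only bites when the probability is vanishingly small and chosen to offset an astronomical stake.

```python
# Illustrative expected-value arithmetic; every number below is made up.
def expected_utility(probability, utility):
    return probability * utility

# A merely small probability times a large (finite) stake: ordinary
# expected-value reasoning, not a Pascal's wager.
print(expected_utility(0.05, 1e12))    # 5% chance of affecting 10^12 lives -> 5e10

# A vanishingly small probability times an astronomically large stake:
# the conclusion is now driven entirely by how the tiny probability was
# chosen, which is the regime the Pascal's Wager objection targets.
print(expected_utility(1e-30, 1e40))   # 1e10, but it hinges on the 1e-30
```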
When choosing between A: hurt someone, but a large number of happy people get created, and B: not hurt someone, but a large number of happy people do not get created, A is both illegal and immoral.
I think there’s a quantitative tradeoff between the happiness of currently existent people and the happiness of possibly-created people. A strict rule ‘Counterfactual People Have Absolutely No Value’ leads to absurd conclusions, e.g., it’s not worthwhile to create an infinite number of infinitely happy and well-off people if the cost is that your shoulder itches for a few seconds. It’s at least a little worthwhile to create people with awesome lives, even if they should get weighted less than currently existent people.
I’ve spent enough time looking at the specific arguments for and against many of these propositions to have the contents of those arguments overwhelm my expertise priors in both directions, such that I just don’t see a whole lot of value in discussing anything but the arguments themselves, when my goal (and yours) is to figure out the level of merit of the arguments.
You don’t want the outcome to be biased by the availability of the arguments, right? Really, I think you do not account for the fact that the available arguments are merely samples from the space of possible arguments (which make different speculative assumptions, in a very large space of possible speculations). They are picked non-uniformly, too, as arguments for one side may be more available, or their creation may maximize the personal present-day utility of more agents. Individual samples can’t be particularly informative in such a situation.
It’s at least a little worthwhile to create people with awesome lives, even if they should get weighted less than currently existent people.
The issue is that the number of people you can speculate you affect grows much faster than the prior for the speculation decreases. Constant factors do not help with that; they just push the problem a little further out.
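A minimal sketch of this structural point, under assumed functional forms (the 2**-k prior and the 10**(10**k) utility are illustrative stand-ins, not anyone’s actual model), worked in log space so the numbers stay printable:

```python
import math

# Assume the prior over a speculative scenario indexed by k falls off like
# weight * 2**-k (a crude complexity-style penalty), while the utility the
# scenario claims grows like 10**(10**k) people affected. Both forms are
# illustrative assumptions.

def log10_prior(k, weight=1.0):
    # 'weight' stands in for any constant discount, e.g. down-weighting
    # counterfactual people or distrusting speculative arguments.
    return math.log10(weight) - k * math.log10(2)

def log10_claimed_utility(k):
    return 10.0 ** k

for k in range(1, 6):
    log_ev = log10_prior(k, weight=1e-9) + log10_claimed_utility(k)
    print(f"k={k}: expected value ~ 10^{log_ev:.1f}")
# Even with the 1e-9 discount the exponent is dominated by 10**k within a
# couple of steps: a constant factor only shifts where the blow-up starts,
# it cannot change the asymptotics.
```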
A strict rule ‘Counterfactual People Have Absolutely No Value’ leads to absurd conclusions, e.g., it’s not worthwhile to create an infinite number of infinitely happy and well-off people if the cost is that your shoulder itches for a few seconds.
I don’t see that as problematic. Ponder the alternative for a moment: you may be OK with a shoulder itch, but are you OK with 10,000 years of the absolutely worst torture imaginable, for the sake of the creation of 3^^^3 or 3^^^^^3 or however many really happy people? What about your death vs. their creation?
Edit: also, you might have the value of those people to yourself (as potential mates and whatnot) leaking in.
It sounds like you’re committing the Pascal’s Wager Fallacy Fallacy. If you aren’t, then I’m not understanding your point. Large future utilities should count more than small future utilities, and multiplying by low probabilities is fine if the probabilities aren’t vanishingly low.
If the probabilities aren’t vanishingly low, you reach basically the same conclusions without requiring extremely large utilities. 7 billion people dying is quite a lot, too. If you see extremely large utilities on a list of requirements for caring about the issue, when you already have at least 7 billion lives at stake, then it is a Pascal’s wager.
Actually, I don’t see vanishingly small probabilities as problematic; I see small probabilities where the bulk of the probability mass is unaccounted for as problematic. E.g., the response to a low risk from a specific asteroid is fine, because its alternative positions in space are accounted for (and you have assurance you won’t put it on an even worse trajectory).
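Rough numbers for the “7 billion is quite a lot” point above; the probabilities here are placeholders, not estimates of any actual risk:

```python
# Back-of-the-envelope expected loss from present lives alone.
current_population = 7e9  # round figure for people alive today

for p in (0.1, 0.01, 0.001):
    print(f"p = {p}: expected present-day deaths ~ {p * current_population:.1e}")
# Even at p = 0.1% the expected loss is ~7e6 lives, so any non-vanishing
# probability already carries the case without invoking astronomically
# large future utilities.
```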
Furthermore, general non-acceptance of an idea is evidence that the idea is not good. You can’t seriously list general non-acceptance of your ideas by the relevant experts as the reason why you are superior to those experts, because that same non-acceptance lowers the probability that those ideas are correct, in proportion to how much it raises how exceptional you are for holding those views. (The biggest problem with “Bayesianism” is unbalanced, selective updating.)
Updating on someone else’s decision to accept or reject a position should depend on their reason for that position. Information cascades are relevant.
Yes, of course. But also keep in mind that wrong positions are often rejected by the mechanism that generates positions, rather than by the mechanism that checks the generated positions.
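To make this updating question concrete, here is a minimal numeric Bayes sketch with made-up priors and likelihoods: how much an expert rejection should move you depends on how likely those experts are to reject correct ideas versus wrong ones, i.e. on the mechanism behind the rejection, and the same likelihood ratio that makes a lone dissenter exceptional is the factor by which the idea’s credibility falls.

```python
# Minimal Bayes sketch for updating on "the relevant experts reject this
# idea". All priors and likelihoods are made-up illustrative numbers; the
# likelihoods encode the experts' rejection mechanism.

def posterior_correct(prior, p_reject_if_correct, p_reject_if_wrong):
    """P(idea correct | experts reject it), by Bayes' rule."""
    numerator = p_reject_if_correct * prior
    denominator = numerator + p_reject_if_wrong * (1 - prior)
    return numerator / denominator

prior = 0.5

# Experts whose rejections track truth (they rarely reject correct ideas):
print(posterior_correct(prior, 0.1, 0.9))   # ~0.10 -- a large downward update

# Experts rejecting for reasons unrelated to the idea's content, e.g. an
# information cascade or never having examined the arguments:
print(posterior_correct(prior, 0.8, 0.9))   # ~0.47 -- barely any update
```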