I once read that some people don’t vote because they believe they can’t influence the outcome enough to outweigh the time it takes to vote (deciding who to vote for, etc.). Other reasons include the perceived inability to judge which candidate would be better. That line of reasoning seems even more relevant when it comes to existential risk charities: not only might your impact turn out to be negligible, but it seems even harder to judge which charity is best. Are people who contribute money to existential risk charities also voting in presidential elections?
The obvious difference between voting in an election and giving money to the best charity is that voting is zero-sum. If you vote for Candidate A and it turns out that Candidate B was a better candidate (by your standards, whatever they are), then your vote actually had a negative impact. But if you give money to Charity A and it turns out Charity B was slightly more efficient, you’ve still had a dramatically bigger impact than if you spent it on yourself.
Even if you have no idea which charity is better, the only cases in which you would be justified in not donating to either are: a) there’s a relatively simple way to figure out which is better (see the Value of Information stuff), or b) you think that giving money to charity is likely enough to be counterproductive that the expected value is negative, which seems plausible for some forms of African aid, possible for FAI, and demonstrably false for “charity in general.”
It’s also worth noting that the expected value of donating to a good charity is a lot higher than the expected value of voting, since the vast majority of people don’t direct their giving thoughtfully and there’s a lot of low-hanging fruit. (GiveWell has plenty of articles on this.)
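To make the comparison concrete, here’s a toy expected-value sketch. Every number in it is a made-up illustrative assumption, not an estimate from GiveWell or anyone else; the point is only the structure of the argument: uncertainty about which candidate is better drags the expected value of a vote toward zero, while the same uncertainty about charities barely dents the expected value of donating.

```python
# Toy expected-value comparison; all numbers are invented for illustration.

# Voting is roughly zero-sum: backing the worse candidate has negative impact,
# so a 50/50 chance of picking the better candidate pulls the EV toward zero.
p_better_candidate = 0.5
impact_if_right = +1.0   # arbitrary units of good
impact_if_wrong = -1.0   # a vote for the worse candidate cancels out a good vote
ev_vote = p_better_candidate * impact_if_right + (1 - p_better_candidate) * impact_if_wrong

# Donating is not zero-sum: the slightly less efficient of two decent charities
# still does most of the good, so the same uncertainty barely matters.
p_better_charity = 0.5
impact_better_charity = 100.0  # arbitrary units of good per donation
impact_other_charity = 80.0    # somewhat less efficient, still strongly positive
ev_donate = p_better_charity * impact_better_charity + (1 - p_better_charity) * impact_other_charity

print(f"EV of voting:   {ev_vote:+.1f}")    # 0.0 in this toy model
print(f"EV of donating: {ev_donate:+.1f}")  # +90.0 in this toy model
```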
Second stupid question: There is a lot of talk about ethics on LessWrong. I still don’t understand why people talk about ethics and not just about what they want. Whatever morality is or is not, shouldn’t it be implied by what we want and the laws of thought?
Yes, it should. That’s what people are talking about, for the most part, when they talk about ethics. Note that even though ethics is (probably) implied by what we want, it isn’t equal to what we want, so it’s worth having a separate word to distinguish between what we should want if we were better informed, etc., and what we actually want right now. This strikes me as so obvious that I think I might be missing the point of your question. Do you want to clarify?
Third stupid question: I still don’t get how expected utility maximization doesn’t lead to the destruction of complex values. Even if your utility function is complex, some goals will yield more utility than others and won’t hit diminishing marginal returns. Bodily sensations like happiness, for example, don’t seem to run into diminishing returns.
Well, since I value all that complex stuff, happiness has negative marginal returns as soon as it starts to interfere with my ability to have novelty, challenge, etc. I would rather be generally happier, but I would not rather be a wirehead, so somewhere between my current happiness state and wireheading, the return on happiness turns negative (assuming for a moment that my preferences now are a good guide to my extrapolated preferences). If your utility function is complex, and you value preserving all of its components, then maximizing one aspect can’t maximize your utility.
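Here’s a toy numerical sketch of that idea. The particular function and numbers are invented purely for illustration and aren’t a claim about anyone’s actual utility function; they just show how, once extra happiness starts crowding out novelty and challenge, total utility peaks and then falls, so the marginal return on happiness goes negative well before wireheading.

```python
# Toy "complex values" model; the shapes and numbers are illustrative only.

def novelty_and_challenge(happiness: float) -> float:
    # Assumption: past a threshold, chasing raw happiness interferes with the
    # other things you value, so this component falls off.
    return max(0.0, 10.0 - 2.0 * max(0.0, happiness - 5.0))

def total_utility(happiness: float) -> float:
    # Both components matter; maximizing one does not maximize the total.
    return happiness + novelty_and_challenge(happiness)

for h in range(11):
    print(f"happiness={h:2d}  other values={novelty_and_challenge(h):5.1f}  total={total_utility(h):5.1f}")
# Total utility rises until happiness = 5, then declines: the marginal return
# on happiness turns negative once it starts cannibalizing the rest.
```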
As for the second part of your question: hadn’t thought of that. I’ll let my smarter post-Singularity self evaluate my options and make the best decision it can, and if the utility-maximizing choice is to devote all resources to trying to beat entropy or something, then that’s what I’ll do. My current instinct, though, is that preserving existing lives is more important than creating new ones, so I don’t particularly care to get as many resources as possible to create as many humans as possible. I also don’t really understand what you are trying to get at. Is this an argument-from-consequences opposing x-risk prevention? Or are you arguing that utility-maximization generally is bad?
These aren’t stupid questions, by the way; they’re relevant and thought-provoking, and the fact that you did extremely poorly on an IQ test is some of the strongest evidence I’ve encountered that IQ tests don’t matter.