Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW? I’ve missed the reason then. Seriously, I’d love to read up on it now.
As a result, sober calculations suggest that the lifetime risk of dying from an asteroid strike is about the same as the risk of dying in a commercial airplane crash. Yet we spend far less on avoiding the former risk than the latter.
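(As a rough illustration of how such a comparison can come out even, here is a back-of-the-envelope calculation; every number in it is an invented placeholder, not a figure from the quoted text:)

# Back-of-the-envelope comparison of two lifetime fatality risks.
# All numbers are illustrative assumptions, not sourced estimates.

annual_impact_prob = 1 / 500_000   # assumed yearly chance of a large asteroid strike
fraction_killed = 0.25             # assumed fraction of humanity killed by such a strike
lifetime_years = 80                # assumed lifespan

# Per-person lifetime risk from the strike (probabilities are tiny, so
# simple multiplication is a fine approximation):
asteroid_lifetime_risk = annual_impact_prob * fraction_killed * lifetime_years

airplane_lifetime_risk = 1 / 25_000  # assumed lifetime odds of dying in a plane crash

print(f"asteroid: {asteroid_lifetime_risk:.1e}")  # 4.0e-05
print(f"airplane: {airplane_lifetime_risk:.1e}")  # 4.0e-05

The point of the toy numbers is only that a very rare but very lethal event can match a familiar everyday risk once probability is multiplied by death toll.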
Ok. Is there a paper, article, post or comment that states the reason or is it spread all over LW?
Good question. If not, there should be. It is just basic maths when handling expected utilities, but it crops up often enough. Eliezer gave you a partial answer:
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain.
… but unfortunately only asked for a link for the ‘scope insensitivity’ part, not a link to a ‘marginal utility’ tutorial. I’ve had a look and I actually can’t find such a reference on LW. A good coverage of the subject can be found in an external paper, Heuristics and biases in charity; section 1.1.3, Diversification, covers the issue well.
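(For what the ‘highest marginal expected utility per dollar’ rule looks like mechanically, here is a minimal sketch; the charities, probabilities, and utility figures are all made up for illustration:)

# Minimal sketch: discount utility per dollar by the probability the
# charity's claims are true, then give everything to the best option.
# All names and numbers are invented placeholders.

charities = {
    # name: (probability the claims are true, utility per dollar if true)
    "A": (0.10, 50.0),
    "B": (0.50, 8.0),
    "C": (0.90, 3.0),
}

expected_per_dollar = {name: p * u for name, (p, u) in charities.items()}

# With roughly linear marginal utility, the whole budget goes to the single
# highest expected-utility-per-dollar option; splitting a donation only
# makes sense once marginal utility starts to diminish.
best = max(expected_per_dollar, key=expected_per_dollar.get)
print(best, expected_per_dollar)  # A is best (~5.0 vs 4.0 vs 2.7)

This is exactly the ‘eggs in one basket’ result: diversifying donations feels safer, but under linear marginal utility it just moves money from the best option to worse ones.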
You should just be discounting expected utilities by the probability of the claims being true...
That’s another point. As I asked: what are the variables, and where do I find the data? How can I calculate this probability based on the arguments to be found on LW?
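(One common way to make such a number explicit is to break the claim into sub-claims and treat it as a conjunction; the sketch below is purely illustrative, and both the sub-claims and the probabilities are made up:)

# Illustrative only: the probability of a conjunctive claim is the product
# of the probabilities of its sub-claims. Sub-claims and numbers are made up.

from math import prod

sub_claims = {
    "the underlying risk is real": 0.5,
    "it arrives on the claimed timescale": 0.5,
    "donations meaningfully reduce it": 0.3,
}

p_claims_true = prod(sub_claims.values())  # 0.5 * 0.5 * 0.3 = 0.075

utility_if_true = 1e6  # placeholder utility of averting the risk
expected_utility = p_claims_true * utility_if_true

print(f"P(claims true) = {p_claims_true:.3f}, expected utility = {expected_utility:,.0f}")

The hard part, of course, is where the sub-claim probabilities come from; the arithmetic itself is trivial, which is precisely the complaint above.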
This IS NOT sufficient to scare people to the point of having nightmares and to ask them for most of their money.
I’m not trying to be a nuisance here, but it is the only point I’m making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.
I’m sorry, I perceived your comment to be mainly about decision making regarding charities, which is a marginal issue here, since the SIAI is the only charity concerned with the risk I’m inquiring about. Is the risk in question even real, and does its likelihood justify the consequences and the arguments for action?
I inquired about decision making regarding charities because you claimed that what I stated about egg allocation was not the point being made. But I do not particularly care about that question, as it is secondary.