I’m not asking for arguments. I know them. I donate. I’m asking for more now. I’m using the same kind of anti-argumentation that academics would use against your arguments, and which I’ve encountered myself a few times while trying to convince them to take a look at the inscrutable archive of posts and comments that is LW. What do they say? “I skimmed over it, but there were no references besides some sound argumentation, an internal logic.” “You make strong claims; mere arguments and conclusions extrapolated from a few premises are insufficient to get what you ask for.”
Pardon my bluntness, but I don’t believe you, and that disbelief reflects positively on you. Basically, if you do know the arguments, then a not insignificant proportion of your discussion here would amount to mere logical rudeness.
For example, if you already understood the arguments for, or the basic explanation of, why ‘putting all your eggs in one basket’ is often the rational thing to do despite intuitions to the contrary, then why on earth would you act like you didn’t?
Oh crap, the SIAI was just a punching bag. Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out among all hundred babies; you are going to feed the strongest 10. Otherwise you’d end up with a hundred dead babies, in which case you might as well have eaten the food yourself rather than waste it like that. It’s obvious; I don’t see how someone wouldn’t get this.
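(A minimal sketch of the arithmetic behind that analogy, with invented numbers and the simplifying assumption that a baby survives only on a full ration; it is only meant to make the “concentrate, don’t split” point concrete.)

    # Illustrative numbers only: 100 babies, food for 10, and a baby
    # survives only if it gets at least one full ration.
    babies, rations = 100, 10

    def saved(ration_per_baby, babies_fed):
        # Threshold utility: a partial ration saves nobody.
        return babies_fed if ration_per_baby >= 1 else 0

    print(saved(rations / babies, babies))  # split evenly -> 0 saved
    print(saved(1, rations))                # concentrate  -> 10 saved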
I used that idiom to illustrate that, given my preferences and the current state of evidence, I might as well eat all the food myself rather than waste it on something I don’t care to save, or something that doesn’t need to be saved in the first place because I missed the fact that all the babies are puppets and not real.
I asked: are the babies real babies that need food, and is the expected-utility payoff of feeding them higher than that of eating the food myself right now?
I’m starting to doubt that anyone actually read my OP...
Of course I understand the arguments for why it makes sense not to split your donations. If you have a hundred babies but only food for 10, you are not going to portion it out among all hundred babies; you are going to feed the strongest 10. Otherwise you’d end up with a hundred dead babies, in which case you might as well have eaten the food yourself rather than waste it like that. It’s obvious; I don’t see how someone wouldn’t get this.
I know this is just a tangent… but that isn’t actually the reason.
I used that idiom to illustrate that, given my preferences and the current state of evidence, I might as well eat all the food myself rather than waste it on something I don’t care to save, or something that doesn’t need to be saved in the first place because I missed the fact that all the babies are puppets and not real.
Just to be clear, I’m not objecting to this. That’s a reasonable point.
Ok. Is there a paper, article, post or comment that states the reason, or is it spread all over LW? I’ve missed the reason, then. Seriously, I’d love to read up on it now.
Here is an example of what I want:
As a result, sober calculations suggest that the lifetime risk of dying from an asteroid strike is about the same as the risk of dying in a commercial airplane crash. Yet we spend far less on avoiding the former risk than the latter.
Ok. Is there a paper, article, post or comment that states the reason, or is it spread all over LW?
Good question. If not, there should be. It is just basic maths when handling expected utilities, but it crops up often enough. Eliezer gave you a partial answer:
You should just be discounting expected utilities by the probability of the claims being true, and then putting all your eggs into the basket that has the highest marginal expected utility per dollar, unless you have enough resources to invest that the marginal utility goes down. This is straightforward to anyone who knows about expected utility and economics, and anyone who knows about scope insensitivity knows why this result is counterintuitive to the human brain.
… but unfortunately only asked for a link for the ‘scope insensitivity’ part, not a link to a ‘marginal utility’ tutorial. I’ve had a look, and I actually can’t find such a reference on LW. Good coverage of the subject can be found in an external paper, Heuristics and biases in charity; section 1.1.3, Diversification, covers the issue well.
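(For what it’s worth, a minimal sketch of the calculation being described, with made-up charities, probabilities and payoffs; the only point is the mechanics of discounting by the probability that the claims are true and giving each marginal dollar to whichever option currently has the highest marginal expected utility per dollar.)

    # Toy illustration of "discount by probability, then fund whatever has
    # the highest marginal expected utility per dollar". Numbers invented.
    def marginal_eu(prob, payoff_per_dollar, already_funded):
        # Expected utility of the next dollar: probability the claims are
        # true, times the payoff per dollar, with simple diminishing returns.
        return prob * payoff_per_dollar / (1 + already_funded)

    charities = {
        "A": {"prob": 0.01, "payoff": 1000.0, "funded": 0},
        "B": {"prob": 0.50, "payoff": 2.0, "funded": 0},
    }

    for _ in range(100):  # allocate a $100 budget one dollar at a time
        best = max(charities.values(),
                   key=lambda c: marginal_eu(c["prob"], c["payoff"], c["funded"]))
        best["funded"] += 1

    print({name: c["funded"] for name, c in charities.items()})

With flat marginal returns this degenerates into putting all the eggs in the best basket; it only starts splitting once diminishing returns kick in, which is exactly the qualification in the quoted comment.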
You should just be discounting expected utilities by the probability of the claims being true...
That’s another point. As I asked, what are the variables, and where do I find the data? How can I calculate this probability based on arguments to be found on LW?
This IS NOT sufficient to scare people to the point of having nightmares and to ask them for most of their money.
I’m not trying to be a nuisance here, but it is the only point I’m making right now, and the one that can be traced right back through the context. It is extremely difficult to make progress in a conversation if I cannot make a point about a specific argument without being expected to argue against an overall position that I may or may not even disagree with. It makes me feel like my arguments must come armed as soldiers.
I’m sorry, I perceived your comment to be mainly about decision making regarding charities, which is a marginal issue here, since the SIAI is the only charity concerned with the risk I’m inquiring about. Is the risk in question even real, and does its likelihood justify the consequences and the arguments for action?
I inquired about decision making regarding charities because you claimed that what I stated about egg allocation is not the point being made. But I do not particularly care about that question, as it is secondary.