And for something in the developing-world aid space, Village Reach is generally considered to be the most efficient.
http://www.givewell.org/international/top-charities/villagereach
Yeah, I intend to donate a good portion to Village Reach after I do some more thorough research on charity. I don’t have that much of an income yet, anyway.
If you already know your decision, the value of the research is nil.
Lol, good point. What I meant to say was “I probably intend to donate a good portion to Village Reach, if I don’t encounter anything in my research to change my mind.” It’s still probably a biased approach, but I can’t pretend I don’t already have a base point for my donations.
No, because then if someone challenges your decision you can give them citations! And then you can carry out the decision without the risk of looking weird!
A worthy endeavour!
Are you being sarcastic here?
No. Information really is useful for influencing others, independently of its use for actually making decisions. It is only the decision-making component that is useless after you have already made up your mind.
Okay, thanks.
Citing evidence that didn’t influence you before you wrote your bottom line is lying.
So if:

1. Something causes me to believe in X,
2. I post in public that I believe in X,
3. I read up more on X and find even more reasons to believe in it,
4. Somebody challenges my public post and I respond, citing both the old reason and the new ones,

then I'm lying? I don't think that's quite right.
Nah; if your credence in X went up when you read the new reasons, and, more importantly, if it would have gone down had those reasons turned out to be false, it's kosher.
If someone challenges your post and you think “Crap, my case doesn’t look impressive enough” and selectively search for citations, you’re lying.
A grey area is when you believe X because you heard it somewhere, but you don't remember where, except that it sounded trustworthy. You can legitimately be pretty confident that X is true and that good sources exist, but you still have to learn a new fact before you can point to them. The reason this isn't an outright lie is that trust chains need occasional snapping. There's an odd and interesting effect: Alice distorts things just a tiny bit when she tells Bob, which basically doesn't affect anything, but Bob doesn't know exactly what the distortions were, so the distortions he adds when he tells Carol can be huge, even though his beliefs are basically correct! (A big source of this is that uncertainty is hard to communicate, so wild guesses often turn into strong claims.)
“Selectively” is the keyword here. Searching for additional arguments for your position is legitimate if you would retract on discovering negative evidence, IMO.
Yeah, but that’s a weird thing to do. Why not give your current evidence, then do more research and come back to announce the results?
Added some exclamation marks to bring out the sarcasm.
No. It just isn’t. But, adopting your ontology briefly, I will assert that ‘lying’ is morally virtuous in all sorts of situations.
Nah, you can’t choose to un-donate. Whereas you can always make up for lost time. So giving is a case where some mild indecision may be worthwhile.
Obviously, the current expected value of your action should be the same as what you expect to think in the future. But getting more info can nonetheless increase the expected value of your decision, like this:
Let’s say you have a charity budget, and two charities, A and B. Since your budget is a small fraction of the budget of each charity, assume your utility is linear in this decision, so you’ll give all your money to the better charity. You think there’s a 60% chance that charity A produces 1000 utilons from your donation and charity B produces 100, and a 40% chance that A only produces 10 and B still produces 100. The expected utility of giving to A is 60% × 1000 + 40% × 10 = 604. The expected utility of giving to B is 60% × 100 + 40% × 100 = 100, so you are planning to give to A.
But let’s say that by doing some small amount of research (assuming it’s costless for simplicity), you can expect to become, correctly, nearly certain (one situation has probability of ~1, the other has probability of ~0). Now if you become certain that A produces 1000 utilons (which you expect to happen 60% of the time), your choice is the same. But if you become certain that A produces only 10 utilons, you give to B instead. So your expected utility is now 60% × 1000 + 40% × 100 = 640, a net gain of 36 expected utilons.
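For anyone who wants to check the numbers, here is a minimal sketch of the calculation in Python. The probabilities and utilon figures are the ones from the example above; the function and variable names are just illustrative, not from any library.

```python
# Value-of-information sketch for the two-charity example above.
# Figures come from the example; names are illustrative.

def expected_utility(p_good: float, good: float, bad: float) -> float:
    """Expected utilons given P(the 60% world) = p_good."""
    return p_good * good + (1 - p_good) * bad

p_good = 0.60
a_good, a_bad = 1000, 10   # charity A: 1000 utilons, or only 10
b_good, b_bad = 100, 100   # charity B: 100 utilons either way

# Without research: commit now to whichever charity looks better.
eu_a = expected_utility(p_good, a_good, a_bad)  # 60% * 1000 + 40% * 10 = 604
eu_b = expected_utility(p_good, b_good, b_bad)  # 100
eu_commit_now = max(eu_a, eu_b)                 # 604

# With free, perfectly informative research: learn which world you are
# in first, then give to the best charity in that world.
eu_research = p_good * max(a_good, b_good) + (1 - p_good) * max(a_bad, b_bad)

print(eu_commit_now)                # 604.0
print(eu_research)                  # 640.0
print(eu_research - eu_commit_now)  # value of the research: 36.0
```

On this toy model, that 36-utilon gap is also an upper bound on what the research itself would be worth paying for.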
You seem to have missed the point.
The point being what?
All information gained after making a decision is irrelevant for the purpose of making said decision. See also: The Bottom Line.
Not if you can remake the decision. I read “I intend [...]” to mean “I expect to make this decision, based on the evidence available now, but will gather more evidence first, which may change my mind.”
But people don’t do very well at thinking when there’s already an expected outcome, so peter_hurford should either give up or work on becoming more curious.
Your rebuttal is irrelevant to my point. Did you mean to reply to something else?