If everyone took Landsburg’s argument seriously, which would imply that all humans were rational, then everyone would donate solely to the SIAI. If everyone donated only to the SIAI, would something like Wikipedia even exist? I suppose the SIAI would have created Wikipedia if it was necessary. I’m just wondering how much important stuff out there was spawned by irrational contributions, and what the world would look like if such contributions had never been made. I’m also not sure how venture-capital growth funding differs from the idea of diversifying one’s contributions to charity.
Note that I do not doubt the correctness of Landsburg’s math. I’m just not sure it would have worked out given human shortcomings (even if everyone were maximally rational). If nobody diversified, and everyone contributed to what currently seems to be the most rational option, then being wrong would be a catastrophe; even maximally rational humans can fail, after all. This probably wouldn’t be a problem if everyone contributed to a goal that could be verified fairly quickly, but something like the SIAI could eat up the resources of the planet and still turn out to be not even wrong in the end. Since everyone would have concentrated on that one goal (no doubt the most rational choice at the moment), might such a counterfactual world have been better off diversifying its contributions? Or would the SIAI have turned into some kind of financial manager, allocating those contributions and thereby becoming a venture capitalist itself?
People don’t make their decisions simultaneously and instantaneously; once SIAI suffers diminishing returns to the extent that it’s no longer the best option, people can observe this and donate elsewhere.
How would you observe that? What would the expected indications be?
It’s consistent with Landsburg’s analysis that everyone has their own utility function that emphasizes what that particular person considers important. So if everyone were a Landsburgian and donated only to a single charity, donations would still land all over the map, because even donors who knew about SIAI either wouldn’t care as much about SIAI’s goals as about other goals, or would estimate SIAI’s effectiveness in reaching those goals as very low. There would probably still be an adverse impact on the many charities that are second choice for most of their donors, but not as catastrophic a one as you’re outlining, I think.
Personally, I believe that if everyone were presented with Landsburg’s argument, most people would fail to be Landsburgians not because they couldn’t stomach the math, or because they’d be wary of the more technical assumptions I wrote about in my post, but simply because they wouldn’t agree to characterize their charitable utility in unified single-currency utilons.
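A minimal sketch of the diversification point above, under assumed numbers: even if every donor is a strict Landsburgian who gives their entire budget to the single charity they rate highest, heterogeneous utility weights still spread total funding across many charities. The charity names, donor counts, budgets, and random utility weights below are all hypothetical.

```python
import random

# Hypothetical charities; each donor gives their whole budget to whichever
# one scores highest under their own (randomly drawn) utility weights.
CHARITIES = ["SIAI", "malaria nets", "open knowledge", "disaster relief"]

def aggregate_donations(n_donors=10_000, budget=100, seed=0):
    rng = random.Random(seed)
    totals = dict.fromkeys(CHARITIES, 0)
    for _ in range(n_donors):
        # Stand-in for differing values and effectiveness estimates.
        weights = {c: rng.random() for c in CHARITIES}
        favourite = max(weights, key=weights.get)
        totals[favourite] += budget  # Landsburgian: all-or-nothing donation
    return totals

if __name__ == "__main__":
    for charity, total in aggregate_donations().items():
        print(f"{charity}: ${total:,}")
```

Every individual concentrates, yet the aggregate totals come out spread across all four charities.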
You shouldn’t take it as an axiom that the SIAI is the most-beneficial charity in the world. You imply that anyone who thinks otherwise is irrational.
I know; the Karma system made me overcompensate. I noticed that questions often get voted down, so I tried to counter that by making my post sound more agreeable. It was something that bothered me, so I thought LW and this post would be the best place to get some feedback. I wasn’t able to read the OP or Landsburg’s proof, but I was still seeking answers before learning enough to come up with my own. I often try to get feedback from experts without first studying the field myself: if I have an astronomy question, I ask on an astronomy forum. Luckily, most of the time people are kind enough not to demand that you become an expert before you can ask questions. It would be pretty daunting if you had to become a heart surgeon before you could ask about your heart surgery. But that’s how it is on LW, and I have to learn that the price you pay for uninformed questions is downvotes. I acknowledge that the Karma system and the general attitude here make me dishonest in what I write, and I apologize for that; I know it is wrong.
You do have over 2000 karma.
At this point, I figure you have earned the right to say more-or-less whatever you like, for quite a while, without bothering too much about keeping score.
When I’m reading comments, I often skip over the ones that have low or negative score. I imagine other people do the same thing. So if you think your point is important enough to be read by more than a few people, you do want to try to have it voted up (but of course you shouldn’t significantly compromise your other values/interests to do so).
I’m curious why Tim’s comment got downvoted 3 times.
Karma isn’t a license to act like a dick, make bad arguments, be sloppy, or commit sins of laziness.
/checks karma; ~3469, good.
Which should be obvious, you purblind bescumbered fen-sucked measle.
Right—but the context was “the Karma system and general attitude here makes me dishonest”.
If you are not short of Karma, sugar-coating for the audience at the expense of the truth seems to be largely unnecessary.
I looked at the context, but it seemed to me that Xi was just being sloppy. (Of course Landsburg’s argument implies rational agents should donate solely to SIAI, if SIAI offers the greatest marginal return. A → B, A; therefore B. Q.E.D.)
If Xi is being sloppy or stupid, then he should pay attention to what his karma is saying. That’s what it’s for! If you want to burn karma, it ought to be for something difficult that you’re very sure about, where the community is wrong and you’re right.
Phil’s:
You shouldn’t take it as an axiom that the SIAI is the most-beneficial charity in the world. You imply that anyone who thinks otherwise is irrational.
...was questioning XiXiDu’s:
If everyone took Landsburg’s argument seriously, which would imply that all humans were rational, then everyone would donate solely to the SIAI.
...but it isn’t clear that the SIAI is the best charity in the world! They are in an interesting space, but maybe they are attacking the problem all wrong, lack the required skills, occupy a niche that better players could fill, or are failing in other ways.
XiXiDu justified making this highly dubious claim by saying he was trying to avoid getting down-voted, and so wrote something which made his post “sound more agreeable”.
SIAI would probably be at least in competition for best charity in the world even if their chance of direct success were zero and their only actual success were raising awareness of the problem.
I did a wildly speculative back-of-the-envelope calculation on that a while ago. Even with very conservative estimates of the chance of a negative singularity, and completely discounting any effect on the far future as well as any possibility of a positive singularity, SIAI scored about one saved life per $1000.
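For concreteness, here is a hedged reconstruction of the kind of estimate described above: expected lives saved per dollar = (lives at stake) × (chance of a negative singularity) × (fraction of that risk averted per marginal dollar). All three inputs below are illustrative assumptions of mine, not the commenter’s actual figures.

```python
# All inputs are assumed, illustrative values; only currently living people are counted.
WORLD_POPULATION = 7e9              # lives at stake (far future ignored)
P_NEGATIVE_SINGULARITY = 0.05       # assumed chance of a negative singularity
RISK_REDUCTION_PER_DOLLAR = 3e-12   # assumed fraction of that risk averted per marginal dollar

lives_per_dollar = WORLD_POPULATION * P_NEGATIVE_SINGULARITY * RISK_REDUCTION_PER_DOLLAR
print(f"expected lives saved per dollar: {lives_per_dollar:.2g}")
print(f"dollars per expected life saved: {1 / lives_per_dollar:,.0f}")
# With these inputs the estimate lands near $1,000 per expected life saved.
```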
Accepting the logical validity of an argument while flatly denying its soundness is not an interesting, worthwhile, or even good contribution.
What? Where are you suggesting that someone is doing that?
If you are talking about me and your logical argument, that is just not what was being discussed.
What was in dispute from the beginning was the correctness of the axiom concerning charity quality, not any associated logical reasoning.
Downvoted.
For games where there are multiple agents interacting, the optimal strategy will usually involve some degree of weighted randomness. If there are noncommunicating rational agents A, B, C, each with (an unsplittable) $1, and charities 1 and 2, both of which fulfil a vital function, but charity 1 requires $2 to function and charity 2 requires $1, I would expect the agents to donate to charity 1 with p = 2/3.
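A small sketch to check that figure, under the assumption that each charity is worth the same utility to every agent (1 if it reaches its funding threshold, 0 otherwise); sympy is used only to solve the indifference condition, and the variable names are mine.

```python
from sympy import Eq, solve, symbols

p = symbols("p", positive=True)

# From one agent's perspective, the other two agents each independently give
# their $1 to charity 1 (needs $2) with probability p, else to charity 2 (needs $1).

# If I give to charity 1: it is funded iff at least one other agent also gives
# to it, and charity 2 is funded iff at least one other agent gives to it.
u_give_1 = (1 - (1 - p) ** 2) + (1 - p ** 2)

# If I give to charity 2: it is funded for sure; charity 1 needs both others.
u_give_2 = 1 + p ** 2

# Symmetric mixed equilibrium: each agent is indifferent between the two options.
print(solve(Eq(u_give_1, u_give_2), p))  # -> [2/3]
```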
A rational agent is aware that other rational agents exist, and will take account of their actions.
The entire resources of the world are somewhat large compared to a single person’s donation. I expect the argument wouldn’t apply in that situation (but you need TDT-like reasoning to realize that’s relevant, or for the donations to be spread in time so each person can condition on what donations all previous people made.)
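A rough sketch of the second escape route mentioned there: if donations arrive over time and each donor simply gives to whichever charity currently offers the highest marginal return, total funding spreads out roughly until marginal returns equalize, with no TDT-style coordination needed. The charities, the exponential diminishing-returns curves, and all constants below are assumptions for illustration only.

```python
import math

# Hypothetical charities with diminishing returns: marginal utility per dollar
# decays exponentially as total funding grows (scale and decay are assumed).
CHARITIES = {
    "SIAI": {"scale": 5.0, "decay": 2_000_000},
    "malaria nets": {"scale": 3.0, "decay": 1_000_000},
    "disaster relief": {"scale": 2.0, "decay": 500_000},
}

def marginal_return(name, funded_so_far):
    c = CHARITIES[name]
    return c["scale"] * math.exp(-funded_so_far / c["decay"])

def simulate(n_donors=10_000, donation=1_000):
    totals = {name: 0 for name in CHARITIES}
    for _ in range(n_donors):
        # Each donor sees the running totals and gives everything to the
        # charity with the best marginal return at that moment.
        best = max(CHARITIES, key=lambda name: marginal_return(name, totals[name]))
        totals[best] += donation
    return totals

if __name__ == "__main__":
    print(simulate())  # funding ends up split across all three charities
```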