SI/LW sometimes gives the impression of being a doomsday cult...
I certainly never had this impression. The worst that can be said about SI/LW is that some use inappropriately strong language with respect to risks from AI.
What I endorse:
Risks from AI (including WBE) are an underfunded research area and might currently be the best choice for anyone who seeks to do good by contributing money to an important cause.
What I think is unjustified:
This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.
To endorse the second stance, I would have to assign a greater than 90% probability to risks from AI posing an existential risk. I would further have to be highly confident that we will face the associated risks within this century and that the model uncertainty associated with my estimates is low.
You might argue that I would endorse the second stance if NASA told me that there was a 20% chance of an asteroid hitting Earth and that they needed money to deflect it. I would indeed. But that seems like a completely different scenario to me.
That intuition might stem from the possibility that any estimates regarding risks from AI are very likely to be wrong, whereas in the asteroid-collision example one could be much more confident in the 20% estimate, since the latter is based on empirical evidence while the former is inference-based and therefore error-prone.
What I am saying is that I believe SI is probably the top charity right now, but that it is not as far ahead of other causes as some people here seem to think. I don’t think the evidence allows anyone to claim, with high confidence, that trying to mitigate risks from AI is the best thing one could do. I think that it is currently the leading cause, but only slightly. And I am highly skeptical about using the expected value of a galactic civilization to claim otherwise.
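To illustrate why I am skeptical, here is a toy calculation with entirely made-up numbers (the payoff and probability figures below are illustrative assumptions, not anyone’s actual estimates). Once an astronomically large payoff is plugged in, the expected value swamps any disagreement about the probability, so the conclusion reflects the chosen stakes rather than the strength of the evidence.

```python
# Toy illustration with made-up numbers: an astronomically large assumed payoff
# makes the expected-value comparison insensitive to the probability estimate.

GALACTIC_VALUE = 1e50   # assumed value of a future galactic civilization (arbitrary)
MUNDANE_VALUE = 1e9     # assumed value of a more conventional charitable outcome

for p_ai in (0.9, 0.01, 1e-6):          # wildly different probability estimates
    ev_ai = p_ai * GALACTIC_VALUE       # expected value of mitigating AI risk
    ev_other = 0.5 * MUNDANE_VALUE      # expected value of the alternative cause
    print(f"P = {p_ai:g}: AI-risk EV exceeds the alternative by {ev_ai / ev_other:.0e}x")
```

Whether the probability is 90% or one in a million, the ratio stays astronomical; the argument is insensitive to the very estimates that are in dispute.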
Charitable giving in the US in 2010: ~$290,890,000,000
SI’s annual budget for 2010: ~$500,000
US Peace Corps volunteers in 2010 (3 years of service in a foreign country for sustenance wages): ~8,655
SI volunteers in 2010 (work from home or California hot spots): like 5?
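(For scale, a quick back-of-the-envelope division of the two dollar figures above; the figures themselves are as quoted, only the arithmetic is added here.)

```python
# Back-of-the-envelope division of the two figures quoted above.
us_giving_2010 = 290_890_000_000   # total US charitable giving in 2010, in dollars
si_budget_2010 = 500_000           # SI's approximate annual budget for 2010, in dollars

share = si_budget_2010 / us_giving_2010
print(f"SI's budget is about {share:.6%} of total US charitable giving")
# prints roughly 0.000172%
```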
I am not sure what you are trying to tell me by those numbers. I think that there are a few valid criticisms regarding SI as an organization. It is also not clear that they could usefully spend more than ~$500,000 at this time.
In other words, even if risks from AI were by far (not just slightly) the most important cause, it is not clear that contributing money to SI is better than withholding funds from it at this point.
If, for example, they can’t usefully spend more money at this point, and there is no reasonably probable way for you yourself to reduce AI risk right now, then you should move on to the next most important cause that needs funding and support it instead.
You think SI is “probably the top charity right now”.
SI is smaller than the rounding error in US charitable giving.
You think they might have more than enough money.
Those don’t add up.
I think it’s funny.
I think you misread “top charity” as “biggest charity” instead of “most important charity”.
No, I didn’t.