Okay, let's try again. My current belief is that at present, donations to SIAI are a less cost-effective way of accomplishing good than donating to a charity like VillageReach or StopTB that improves health in the developing world.
My internal reasoning is as follows:
Roughly speaking, the potential upside of donating to SIAI (whatever research SIAI would get done) is outweighed by the potential downside (the fact that SIAI could divert funding away from future existential risk organizations). By way of contrast, I'm reasonably confident that there's some upside to improving health in the developing world (keep in mind that historically, development has been associated with political stability and with getting more smart people into the pool of people thinking about worthwhile things), and that giving to accountable, effectiveness-oriented organizations will raise the standard for accountability across the philanthropic world (including existential risk charities).
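To make the shape of this comparison explicit, here is a minimal toy sketch in Python; every probability and value in it is a made-up placeholder meant only to show the structure of the upside-versus-downside reasoning, not anyone's actual estimates.

```python
# Toy rendering of the upside-versus-downside framing above.  Every number
# is an illustrative placeholder; only the structure of the comparison is
# the point, not the magnitudes.

def net_expected_value(p_upside, value_upside, p_downside, cost_downside):
    """Expected benefit minus the expected cost of crowding out better options."""
    return p_upside * value_upside - p_downside * cost_downside

# Hypothetical: donating to the current existential risk organization.
ev_current_org = net_expected_value(
    p_upside=0.1, value_upside=100,    # chance the research pays off (arbitrary units)
    p_downside=0.3, cost_downside=50,  # chance of diverting funds from better future orgs
)

# Hypothetical: donating to a well-vetted developing-world health charity.
ev_health_charity = net_expected_value(
    p_upside=0.9, value_upside=10,     # smaller but far more reliable benefit
    p_downside=0.0, cost_downside=0,
)

print(ev_current_org, ev_health_charity)  # -5.0 and 9.0 under these placeholders
```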
I wish that there were better donation opportunities than VillageReach and StopTB, and I'm moderately optimistic that some will emerge in the near future (e.g. over the next ten years), but I don't see any at the moment.
What about the comparison of a donor-advised existential risk fund versus StopTB?
Good question. I haven't considered this point; thanks for bringing it to my attention!
So we both agree that a more accountable set of existential risk organizations would (all else equal) be the best way to spend money, certainly better than third-world charity.
The disagreement is about this idea of current existential risk organizations diverting money away from future organizations that are better.
My impression is that existential risk charity is very much unlike third-world aid charity, in that how to deliver third world aid is not a philosophically challenging problem. Everyone has a good intuitive understanding of people, of food and the lack thereof, and at least some understanding of things like incentive problems.
However, something like Friendly AI theory requires a virtually complete re-education of a person (that is, if they are very smart to start with; if not, they'll simply never understand it). If it were easy to understand, it would be something for which charity was not required: governments would be doing it, not out of charity but out of self-interest.
Given this difference, your idea of demanding high levels of accountability might itself need some scrutiny. My personal position is to require nothing in terms of accountability, competence or performance unless and until it is demonstrated that there are, in fact, other groups who want to start an existential risk charity, and to begin the process of competition by funding those other groups, should they in fact arise.
I am currently working for the Lifeboat Foundation, by the way, which is such an "other group", and is, in fact, funded to the tune of $200k. But three such groups is still pretty darn few, and the number of people involved is tiny.
•I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost effective to send money to the developing world.
•I'm very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude (a toy sketch after this list illustrates the kind of spread I mean). By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (because the most talented researchers are very rare, because very few people are currently thinking about these things, and because I believe the correlation between currently thinking about these things and having talent is weak).
Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.
For these reasons, I think that the expected value of the research that SIAI is doing is negligible in comparison with the expected value of the publicity that SIAI generates. At the margin, I’m not convinced that SIAI is generating good publicity for the cause of existential risk. I think that SIAI may be generating bad publicity for the cause of existential risk. See my exchange with Vladimir Nesov. Aside from the general issue of it being good to encourage accountability, this is why I don’t think that funding SIAI is a good idea right now. But as I said to Vladimir Nesov, I will write to SIAI about this and see what happens.
•I think that the reason that governments are not researching existential risk and artificial intelligence is that (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.
•Thanks for mentioning the Lifeboat Foundation.
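As a purely illustrative picture of the "many orders of magnitude" claim about researcher productivity above, here is a toy simulation; the lognormal model and its parameters are assumptions chosen for illustration, not measurements of any real research community.

```python
# Illustrative only: model researcher "productivity" as lognormal with a
# large sigma, so that values span many orders of magnitude, then see how
# much of the total output the most productive researchers account for.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                    # hypothetical pool of researchers
productivity = np.sort(rng.lognormal(mean=0.0, sigma=3.0, size=n))

top_one_percent_share = productivity[-n // 100:].sum() / productivity.sum()
spread = productivity[-1] / productivity[0]

print(f"spread between most and least productive: {spread:.2e}x")
print(f"share of total output from the top 1%:    {top_one_percent_share:.0%}")
# With sigma=3 the spread covers roughly ten orders of magnitude and the
# top 1% of researchers account for most of the total output, so expected
# value hinges on whether the rare, extremely talented people are involved.
```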
I think that the reason that governments are not researching existential risk and artificial intelligence is that (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.
Maybe, but more likely rich individuals will see the benefits long before the public does, and then the "establishment" will organize a secret AGI project. Though this doesn't even seem remotely close to happening: the whole thing pattern-matches to some kind of craziness/scam.
•I agree that there's a gap between when rich individuals see the benefits of existential risk research and when the general public sees the benefits of existential risk research.
•The gap may nevertheless be inconsequential relative to the time that it will take to build a general AI.
•I presently believe that it’s not desirable for general AI research to be done in secret. Secret research proceeds slower than open research, and we may be “on the clock” because of existential risks unrelated to general AI. In my mind this factor outweighs the arguments that Eliezer has advanced for general AI research being done in secret.
I presently believe that it’s not desirable for general AI research to be done in secret.
There are shades between complete secrecy and blurting it out on the radio. Right now, human-universal cognitive biases keep it effectively secret, but in the future we may find that the military closes in on it like knowledge of how to build nuclear weapons.
That, and secrets are damn hard to keep. In all of history, there has only been one military secret that has never been exposed, and that’s the composition of Greek fire. Someone is going to leak.
Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.
Note that if uFAI is >> easier than FAI, then the size of the research community must be kept small; otherwise FAI research may acquire a Klaus Fuchs who goes and builds a uFAI for fun and vengeance.
This makes it all a lot harder.
I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost effective to send money to the developing world.
Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?
If so, then it is hard to see how anything other than existential risks matters, i.e. all money devoted to the third world, animal welfare, poor people, diseases, etc., would ideally be redirected to the goal of ensuring a positive (rather than negative) singularity. (The rough sketch below spells out the arithmetic.)
Of course this point is completely academic, because the vast majority of people won’t ever believe it, but I’d be interested to hear if you buy it.
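Here is the rough arithmetic behind that claim, as a minimal sketch; only the ~10^50 figure comes from the comment above, while the risk reduction and the lives-saved number are hypothetical placeholders.

```python
# Minimal sketch of the astronomical-stakes arithmetic.  Only the ~10^50
# figure comes from the comment above; the risk reduction and the number
# of lives saved by health spending are hypothetical placeholders.

future_people = 1e50         # potential future people the universe could support
risk_reduction = 1e-15       # hypothetical: tiny cut in extinction probability
                             # bought by some marginal donation
lives_saved_by_health = 300  # hypothetical: lives saved by the same donation
                             # spent on developing-world health

expected_future_lives = future_people * risk_reduction
print(f"expected future lives from reducing existential risk: {expected_future_lives:.1e}")
print(f"lives saved directly by health spending:              {lives_saved_by_health}")
# 1e+35 versus 3e+02: even an absurdly small reduction in extinction risk
# dwarfs the direct benefit, which is the force of the argument.
```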
Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?
Yes, I buy this argument.
If so, then it is hard to see how anything other than existential risks matters.
The question is just whether donating to an existential risk charity is the best way to avert existential risk.
•I believe that political instability is conducive to certain groups desperately racing to produce and utilize powerful technologies. This suggests that promoting political stability would reduce existential risk.
•I believe that when people are leading lives that they find more fulfilling, they make better decisions, so that improving quality of life reduces existential risk.
•I believe that (all else being equal), economic growth reduces “existential risk in the broad sense.” By this I mean that economic growth may prevent astronomical waste.
Of course, as a heuristic it’s more important that technologies develop safely than that they develop quickly, but one could still imagine that at some point, the marginal value of an extra dollar spent on existential risk research drops so low that speeding up economic growth is a better use of money.
•Of the above three points, the first two are more compelling than the third, but the third could still play a role, and I believe that there’s a correlation between each pair of political stability, quality of life, and economic growth, so that it’s possible to address the three simultaneously.
•As I said above, at the margin I think that a good charity devoted to studying existential risk should be getting more funding, but at present I do not believe that a good charity devoted to studying existential risk could cost-effectively absorb arbitrarily many dollars (the toy sketch below illustrates the kind of diminishing returns I have in mind).
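A toy picture of that diminishing-returns point; the logarithmic value curve, the scale constant, and the per-dollar value of economic growth are all assumptions for illustration, not estimates of anything real.

```python
# Toy model: a concave (logarithmic) value curve for existential risk
# spending versus a constant per-dollar value for speeding up economic
# growth.  Every number is a made-up placeholder.

SCALE = 1e7  # hypothetical scale constant of the returns curve

def marginal_xrisk_value(total_spent):
    """Marginal value per dollar if total value grows like log(1 + total_spent/SCALE)."""
    return 1.0 / (SCALE + total_spent)

growth_value_per_dollar = 1e-9  # hypothetical constant marginal value of growth

for spent in (1e6, 1e8, 1e10, 1e12):
    mv = marginal_xrisk_value(spent)
    better = "x-risk research" if mv > growth_value_per_dollar else "economic growth"
    print(f"after ${spent:.0e} already spent: marginal x-risk value {mv:.1e}/$ -> {better}")
# Under these placeholders the crossover is around $10^9: early dollars to
# existential risk research dominate, but far enough out on the curve an
# extra dollar of economic growth does more good.
```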
Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?
I do. In fact, I assign a person certain to be born a million years from now about the same intrinsic value as a person who exists today, though there are a lot of ways in which my doing good for a person who exists today has significant instrumental value which doing good for a person certain to be born a million years from now does not.
My impression is that existential risk charity is very much unlike third-world aid charity, in that how to deliver third world aid is not a philosophically challenging problem. Everyone has a good intuitive understanding of people, of food and the lack thereof, and at least some understanding of things like incentive problems.
I suspect helping failed states efficiently and sustainably is very difficult, possibly more so than developing FAI as a shortcut. Of course, it's a completely different kind of challenge.
I disagree strongly. You can repeatedly get it wrong with failed states, and learn from your mistakes. The utility cost of each failure is additive, whereas the first FAI failure is fatal. Also, third-world development is a process that might spontaneously solve itself via economic development and cultural change. Much to the chagrin of many charities, that might even be the optimal way to solve the problem given our resource constraints. In fact, the development of the West is a particular example of this; we started out as medieval third-world nations.
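A minimal sketch of the additive-versus-fatal asymmetry, with made-up probabilities and costs; learning from mistakes is left out, and would only strengthen the contrast.

```python
# Illustrative contrast between a domain where failures are bounded and
# additive (you can retry) and one where the first failure is terminal.
# All probabilities and costs are placeholders.

p_fail = 0.5             # chance any single attempt fails
cost_per_failure = 1.0   # bounded cost of one failed aid intervention
catastrophic_cost = 1e9  # stand-in for "the first FAI failure is fatal"

# Iterated setting: the expected number of failures before a success is
# p/(1-p) for a geometric process, so the expected total cost stays bounded
# (and learning from each failure would lower it further).
expected_iterated_cost = (p_fail / (1 - p_fail)) * cost_per_failure

# One-shot setting: no retry, so the expected cost is p_fail * catastrophe.
expected_one_shot_cost = p_fail * catastrophic_cost

print(expected_iterated_cost, expected_one_shot_cost)  # 1.0 and 500000000.0
```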
I disagree strongly. You can repeatedly get it wrong with failed states, and learn from your mistakes. The utility cost of each failure is additive, whereas the first FAI failure is fatal.
Distinguish the difficulty of developing an adequate theory from the difficulty of verifying that a theory is adequate. It's failure at the latter that might lead to disaster, and not failing there requires a lot of informed, rational caution. On the other hand, not inventing an adequate theory doesn't directly lead to disaster, and failure to invent an adequate theory of FAI is something you can learn from (the story of my life for the last three years).