Okay, we had this back and forth before, and I didn’t understand you then, but now I do. I guess I was being dense before. Anyway, the probability of current action leading to FAI might still be sufficiently small that it makes sense to focus on other existential risks for the moment. And my other points remain.
This is the same zero-sum thinking as in your previous post: people are not currently deciding between different causes; they are deciding whether to take a specific cause seriously. If you already contribute everything you can to a nanotech-risk-prevention organization, then we could ask whether switching to SIAI would do more good. But that’s not the question usually posed.
As far as I can tell, working to build an AGI right now makes sense only if AGI is actually near (a few decades away).
Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all. SIAI doesn’t work on building AGI right now, no no no. We need understanding, not robots. Like this post, say.
I agree that in general people should be more concerned about existential risk and that it’s worthwhile to promote general awareness of existential risk.
But there is a zero-sum aspect to philanthropic efforts. See the GiveWell blog entry titled Denying The Choice.
More to the point, I think that one of the major factors keeping people away from studying existential risk is that many of the people who are interested in existential risk (including Eliezer) have low credibility on account of expressing confident, apparently sensationalist claims without supporting them with careful, well-reasoned arguments. I’m seriously concerned about this issue.
If Eliezer can’t explain why it’s pretty obvious to him that AGI will be developed within the next century, then he should explicitly say something like “I believe that AGI will be developed over the next 100 years, but it’s hard for me to express why, so it’s understandable that people don’t believe me” or “I’m uncertain as to whether or not AGI will be developed over the next 100 years.”
When he makes unsupported claims that sound like the sort of thing that somebody would say just to get attention, he’s actively damaging the cause of existential risk.
Re: “AGI will be developed over the next 100 years”
I list various estimates from those interested enough in the issue to bother giving probability density functions at the bottom of:
http://alife.co.uk/essays/how_long_before_superintelligence/
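For anyone who wants to play with figures like these, here is a minimal sketch (Python, with placeholder means and spreads rather than the actual estimates on that page) of pooling a few forecasters’ density estimates into one distribution with an equal-weight linear opinion pool:

```python
# Pool a few hypothetical AGI-timeline density estimates with an
# equal-weight linear opinion pool. Each forecaster is approximated
# here as a normal distribution; all numbers are placeholders.
import numpy as np
from scipy.stats import norm

years = np.arange(2010, 2211)  # evaluation grid of calendar years

# (mean year, standard deviation) -- invented values, not real estimates
forecasters = [(2040, 15), (2060, 25), (2100, 40)]

# Average the individual densities, then renormalize over the grid.
pooled = np.mean([norm.pdf(years, mu, sd) for mu, sd in forecasters], axis=0)
pooled /= np.trapz(pooled, years)

# Read off the pooled median arrival year from the cumulative distribution.
cdf = np.cumsum(pooled) * (years[1] - years[0])
print("Pooled median arrival year:", years[np.searchsorted(cdf, 0.5)])
```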
Thanks, I’ll check this out when I get a chance. I don’t know whether I’ll agree with your conclusions, but it looks like you’ve at least attempted to answer one of my main questions concerning the feasibility of SIAI’s approach.
Those surveys suffer from selection bias. Nick Bostrom is going to try to get a similar survey instrument administered to a less-selected AI audience. There was also a poll at the AI@50 conference.
http://www.engagingexperience.com/2006/07/ai50_first_poll.html
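To make the selection-bias worry concrete, here is a toy simulation (Python, with invented numbers, not modeled on any actual survey) of how sampling only people already drawn to the topic can pull the surveyed median timeline well below the field-wide one:

```python
# Toy illustration of selection bias in timeline surveys: people with
# short AGI timelines are assumed to be far more likely to show up and
# respond, so the surveyed median undershoots the population median.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field-wide distribution of timelines (years from now).
population = rng.lognormal(mean=4.5, sigma=0.6, size=100_000)

# Response probability falls off as the respondent's timeline grows.
response_prob = np.clip(1.5 - np.log10(population), 0.02, 1.0)
respondents = population[rng.random(population.size) < response_prob]

print(f"Field-wide median timeline: {np.median(population):.0f} years")
print(f"Surveyed median timeline:   {np.median(respondents):.0f} years")
```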
If the raw data was ever published, that might be of some interest.
Any chance of piggybacking questions relevant to Maes-Garreau on that survey? As you point out on that page, better stats are badly needed.
And indeed, I suggested to SIAI folk that all public record predictions of AI timelines be collected for that purpose, and such a project is underway.
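To give a sense of what that collection would support, here is a rough sketch (Python, with fabricated placeholder records and a crude flat lifespan assumption) of the simplest Maes-Garreau check it would enable, namely whether predicted arrival dates track the predictors’ expected remaining lifetimes:

```python
# First-pass Maes-Garreau check over a collected prediction dataset:
# does the predicted AGI date move with the predictor's expected death
# year? The records and the flat 80-year lifespan are placeholders.
import numpy as np

ASSUMED_LIFESPAN = 80  # crude flat life-expectancy assumption, in years

# (predictor's birth year, predicted AGI year) -- fabricated rows only
records = np.array([
    (1948, 2029),
    (1960, 2045),
    (1975, 2070),
    (1935, 2020),
])

birth_years, predicted_years = records[:, 0], records[:, 1]
expected_death_years = birth_years + ASSUMED_LIFESPAN

# Positive gap = AGI predicted after the predictor's expected death year.
gaps = predicted_years - expected_death_years
print("Median gap vs. expected death year:", np.median(gaps), "years")

# Maes-Garreau predicts the two move together; Pearson r is a first pass.
r = np.corrcoef(expected_death_years, predicted_years)[0, 1]
print("Correlation with expected death year:", round(r, 2))
```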
Hm, I had not heard about that. SIAI doesn’t seem to do a very good job of publicizing its projects, or perhaps of finishing and releasing them.
It just started this month, at the same time as Summit preparation.
Re: “Working to build AGI right now is certainly a bad idea, at best leading nowhere, at worst killing us all.”
The marginal benefit of making machines smarter seems large—e.g. see automobile safety applications: http://www.youtube.com/watch?v=I4EY9_mOvO8
I don’t really see that situation changing much anytime soon—there will probably be such marginal benefits for a long time to come.