Not really. Someone (I forgot who) wrote that I helped them see the race to create AI as a potential existential risk. I promoted the book on numerous radio shows and I hope I convinced at least a few people to do further research and perhaps donate money to MIRI, but this is just a hope.
Why do you think that it is so hard to get through to people?
Not only you, but others involved in this, and I myself, have all found that intelligent people will listen and even understand what you are telling them -- I probe for inferential gaps, and if any exist, they are not obvious.
Yet almost no one gets on board with the MIRI/FHI program.
Why?
I have thought a lot about this. Possible reasons: most humans don’t care about the far future or about people who are not yet born; most things that seem absurd really are absurd and not worth investigating, and the singularity certainly seems absurd on the surface; the vast majority is right and you and I are wrong to worry about a singularity; people find it impossible to imagine an intelligent AI that doesn’t have human-like emotions; the Fermi paradox implies that civilizations such as ours are not going to be able to think rationally about the far future; and an ultra-AI would be a god, which most people’s religious beliefs disallow.
Your question is related to why so few people sign up for cryonics.
I don’t know about anyone else, but I find it hard to believe that provable Friendliness is possible.
On the other hand, I think high-probability Friendliness might be possible.
I agree with you that a lot of people think that way, but I have spoken with quite a few smart people who understand all the points—I probe to check whether there are any major inferential gaps—and they still don’t get on the bandwagon.
Another point is simply that no one can devote time to every important thing; these people choose not to prioritize this one.