No. I actually pretty much agree with it. My whole point is that to reduce risks from AI you have to convince people who do not already share most of your beliefs. I wanted to make it abundantly clear that people who want to hone their arguments shouldn’t do so by asking those closely associated with the SI/LW memeplex whether they agree. They have to hone their arguments by talking to people who actually disagree and figure out at what point their arguments fail.
See, it is very simple. If you are saying that all AI researchers and computer scientists agree with you, then risks from AI are pretty much solved, insofar as everyone who could possibly build an AGI is already aware of the risks and probably takes precautions (which is not enough of course, but that isn’t the point).
I am saying that you might be fooling yourself if you say, “I’ve been to the Singularity Summit and talked to a lot of smart people at LW meetups, and everyone agreed with me on risks from AI; nobody had any counter-arguments.” Wow, no shit? I mean, what do you anticipate if you visit a Tea Party meeting and argue that Obama is doing a bad job?
I believe that I have a pretty good idea of which arguments would be perceived as weak or poorly argued, since I am talking to a lot of people who disagree with SI/LW on some important points. And if I tell you that your arguments are weak, that doesn’t mean that I disagree or that you are all idiots. It just means that you have to hone your arguments if you want to convince others.
But maybe you believe that there are no important people left whom it would be worthwhile to have on your side. In that case, what I am saying is unnecessary. But I doubt that this is the case. And even if it is, honing your arguments might come in handy once you are forced to talk to politicians or other people at a large inferential distance.