More specifically, my point regarding other people’s beliefs was that there are people who know about the topic of superhuman AI and the related risks but, judging by their lesser or non-existent campaigns to prevent those risks, came to different conclusions.
Reference: The Singularity: An Appraisal (Video) - Alastair Reynolds, Vernor Vinge, Charles Stross, Karl Schroeder
In the case of AI researchers like Marvin Minsky, among others, knowledge of the possible risks can reasonably be inferred from their overall familiarity with the topic.
I disagree based on the following evidence:
http://xixidu.net/lw/05.png
“At present I do not know of any other person who could do that.” (Reference)
That is a hypothesis based on shaky conclusions, not on prior evidence.
You keep posting screenshots of Roko’s deleted post with the “forbidden” parts blacked out. I agree that the whole matter could have been handled much better, but I don’t see how it or the other quoted line bears on the interpretation of the sentence quoted at the top of jimmy’s post. Also, people have asked you several times to stop reminding them of the deleted post, and the need for quotes proving that EY thinks highly of his intelligence can be satisfied without doing that. Seriously, they’re everywhere.
XiXiDu argues: ”… your smart friends at Less Wrong and favorite rationalists like EY are not remotely close to the rationality standards of other people out there (yeah, there are other smart people, believe it or not), and you will no longer think it anywhere near as plausible that their differing opinion is because they know some incredible secret knowledge you don’t.”
You keep telling me that my arguments are no evidence for what I’m trying to prove. Other people have asked me several times not to make up fantasies of AI-Gods kicking their testicles. But if you want to be upvoted, the winning move is just to go think about something else. So take my word for it: I know more than you do, no really I do, and SHUT UP.
I actually feel embarrassed just from reading that.
See the edit to the original comment.