I have to admit that I should have read the “Brief Introduction” link. That answered a lot of my objections.
In the end, all I can say is that I got a misleading idea about the aspirations of SIAI, and that this was my fault. With this better understanding of SIAI's goals (which are implied to be limited to mitigating accidents caused by commercially developed AIs), though, I have to say that I remain unconvinced that FAI is a high-priority matter. I am particularly unimpressed by Yudkowsky's cynical opinion of the motivations behind AAAI's dismissal of singularity worries in their panel report (http://www.aaai.org/Organization/Panel/panel-note.pdf).
Since the evaluation of AI risks depends on the plausibility of AI disaster (which would have to INCLUDE political and economic factors), I would have to wait until SIAI releases those reports to even consider accidental AI disaster a credible threat. (I am more worried about AIs intentionally designed for aggressive purposes, but it doesn’t seem like SIAI can do much about that type of threat.)
I am particularly unimpressed by Yudkowsky’s cynical (and, more importantly, unsubstantiated) opinion of the motivations behind AAAI’s dismissal of singularity worries in their panel report.
“As far as I’m concerned, these are eminent scientists from outside the field that I work in, and I have no evidence that they did anything more than snap judgment of my own subject material. It’s not that I have specific reason to distrust these people—the main name I recognize is Horvitz and a fine name it is. But the prior probabilities are not good here.”
Where did he respond to that?
I was just looking for the link:
http://lesswrong.com/lw/1f4/less_wrong_qa_with_eliezer_yudkowsky_ask_your/197s