I don’t think the argument is very strong that people could be smart enough to create an AGI capable of taking over the universe in a matter of hours, yet dumb enough not to recognize the dangers posed by such an AGI. To fortify that argument you would have to show either that the people working for SIAI are vastly more intelligent than most AGI researchers, in which case they would be more likely to build the first AGI, or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than is necessary to recognize risks from AI.
I don’t consider this a response to my point. My point was that “concern for safety” is not well correlated with “ability to perform safely”. It’s very likely that many or all AGI researchers are aware of “risks” regarding the outcomes of their research. However, I consider it very unlikely that they will think deeply enough about the topic to come up with, or even start on, solutions such as Friendliness.
It would be very easy to dispel any such doubts: all he would have to do is publish a technical paper that survives peer review, thereby substantiating his claims and proving that he is qualified.
Why do these discussions constantly come down to the same people debating the same points? Because, as you said, there are no published technical papers such as those promised by last year’s donation drive. SIAI is operating internally and not revising their public information. Do you believe their thoughts have failed to change, on any detail, in the time since initial publication?
If I had an extraordinary idea in a field of expertise I am not part of, I would humbly ask some of its experts to review it before claiming to know something they don’t, especially when I can’t even tell whether my idea makes sense.
Has this happened? All I know of are derogatory comments about mainstream AGI research, academia, and peer review in general.
If it has happened, it seems that the idea was not received positively. Does that mean the idea is bogus? No. Does that mean you should be particularly confident in your idea? No. It means you should reassess it and gather, or wait for, more evidence before telling everyone that the world is going to end, creating a whole movement around it, asking for money, and advising people to neglect any other ideas on the grounds that everyone else is below your epistemic level.
Why do these discussions constantly come down to the same people debating the same points?
Because nobody other than a school dropout like me cares to take a critical look at those points, points that haven’t been addressed enough to generate the slightest academic interest.
Did you see the coverage in recent editions of “AI: A Modern Approach”? Peter Norvig is an actual expert in artificial intelligence. The End of The World As We Know It even gets a mention!
Cool, I admit I was wrong there and hereby retract that point.