I don’t find it a strong argument that people could be smart enough to create an AGI capable of taking over the universe in a matter of hours, yet dumb enough not to recognize the dangers posed by such an AGI. To fortify that argument, you would have to show either that the people working for SIAI are vastly more intelligent than most AGI researchers—in which case they would be the more likely ones to build the first AGI—or that creating an AGI capable of explosive recursive self-improvement demands much less intelligence and insight than recognizing risks from AI does.
I have some scepticism about that point as well. We do have relevant history relating to engineering disasters. There have been many engineering projects throughout history, and we know roughly how many people died in accidents, as a result of negligence, or were otherwise screwed over.
Engineers do sometimes fail: the Titanic, the Tacoma Narrows Bridge collapse.
Then there are all the people killed by cars and in coal mines. Society wants the benefits, and individuals pay the price. In terms of number of deaths, this effect seems much more significant to me than accidents.
However, I think engineers have a reasonable record. For a historical engineering project with lives at stake, one would certainly not expect failure—or claim that failure is “the default case”. The evidence points the other way: high technology and machines have caused humans to thrive.
Of course, reference class forecasting has limitations, but we should at least attempt to learn from this type of data.