though the safety of AGI is indeed an important issue, currently we don’t know enough about the subject to make any sure conclusion. Higher safety can only be achieved by more research on all related topics, rather than by pursuing approaches that have no solid scientific foundation.
Not sure how you can effectively argue with this.
Far from considering the argument irrefutable, it struck me as superficial and essentially fallacious reasoning. The core of the argument is the claim 'more research on all related topics is good', which omits the necessary ceteris paribus clause and ignores the details of the specific instance suggesting that all else is not, in fact, equal.
Specifically, we are considering a situation where there is one area of research (capability) whose completion will approximately guarantee that the technology created will be implemented shortly after (especially given Wang's assumption that such research should be done through empirical experimentation). The second area of research (how to ensure desirable behavior of an AI) is one that need not be complete in order for the first to be implemented. If both technologies need to have been developed by the time the first is implemented in order for that implementation to be safe, then the second must be completed at the same time as, or earlier than, the technological capability required to implement the first.
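A minimal sketch of the timing constraint implicit in that paragraph; the symbols t_cap, t_impl, and t_safe are introduced here for illustration and are not Wang's:

```latex
% Hypothetical formalization (my notation, not Wang's):
%   t_cap  : time capability research completes
%   t_impl : time the resulting AGI is implemented
%   t_safe : time safety research completes
%
% Premise 1 (implementation follows capability shortly): t_impl ~ t_cap
% Premise 2 (safety requires the safety result first):   t_safe <= t_impl
%
% Conclusion: t_safe must not exceed t_cap (up to the short gap),
% so "more research on all topics" only yields safety if it
% respects this ordering between the two research programs.
\[
  t_{\text{impl}} \approx t_{\text{cap}}, \qquad
  t_{\text{safe}} \le t_{\text{impl}}
  \;\Longrightarrow\;
  t_{\text{safe}} \lesssim t_{\text{cap}}
\]
```

On this reading, Wang's 'all related topics' framing hides a hard ordering constraint between the two research programs rather than refuting it.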
rather than by pursuing approaches that have no solid scientific foundation.
(And this part just translates to “I’m the cool one, not you”. The usual considerations on how much weight to place on various kinds of status and reputation of an individual or group apply.)