I really think the “You’re just as likely to get results in the opposite direction” argument is, on priors, overstated for most forms of research. Does Scott think that work we do today is just as likely to decrease our understanding of P/NP as to increase it?
My own interpretation of Scott’s words here is that it’s unclear whether your research is actually helping with the “get Friendly AI before some idiot creates a powerful Unfriendly one” challenge. Fundamental progress in AI in general could just as easily benefit the fool trying to build an AGI without much concern for Friendliness as it could benefit you. Thus, it is unclear whether fundamental research helps avoid the UFAI catastrophe.
I’m not sure that interpretation works, given that he also wrote:
suppose we conclude — as many Singularitarians have — that the greatest problem facing humanity today is how to ensure that, when superhuman AIs are finally built, those AIs will be “friendly” to human concerns. The difficulty is: given our current ignorance about AI, how on earth should we act on that conclusion? Indeed, how could we have any confidence that whatever steps we did take wouldn’t backfire, and increase the probability of an unfriendly AI?
Since Scott was addressing steps taken to act on the conclusion that friendliness was supremely important, presumably he did not have in mind general AGI research.