This seems right. One additional thing to note, however, is that while it looks quite likely that good papers lead to improvements at the margin, high-publicity bad work can harm a developing field’s prospects and reputation, and thus outsiders’ desire to affiliate with it. Robin Hanson emphasizes this point a lot.
Carl, are you saying that the non-SIAI-affiliated qualified academics among us should attempt to get high-publicity bad papers published advocating anything-goes GAI design, without regard for safety?
No, for many reasons, including the following:
Such things are very likely to backfire, and more so than they might seem; we live in a world of substantial transparency, and dirty laundry gets found out
Being the kind of people who would do such things would have bad effects generally, and would sabotage friendly relations with the very AI folk whose cooperation is so important
There is already a lot of material along these lines
Folk actually in a position to do such things would do better to spend their limited time, reputation, and commitment on other projects
My impression is that those bridges are mostly burned. For years, the SIAI has campaigned against other projects in the hope of denying them mindshare and funding.
We have Yudkowsky saying, “And if Novamente should ever cross the finish line, we all die,” and saying that he will try to make various other AI projects “look merely stupid”.
I expect the SIAI looks to most others in the field like a secretive competing organisation that likes to use negative marketing techniques. Implying that your rivals will destroy the world is an old marketing trick that goes back to the Daisy Ad. This is not necessarily the kind of organisation one would want to affiliate with.