Agree, roughly, that AI safety and AI ethics positions are broadly aligned and that greater cooperation between the two would be beneficial to all, but it’s worth anticipating how the prescriptions here could backfire. A paragraph such as:
> I am saddened by those in my community who have treated you harshly, but I just wanted to let you know, even when I don’t agree with everything you say, I value you and think your research is valuable. To say else wise would be irrational.
could itself easily be perceived by someone predisposed to suspicion of rationalists as backhanded, overly familiar, and/or condescending, regardless of its actual intentions. Likewise, the wording of the concluding suggestion about joining AI safety and bias groups may well be read by a suspicious reader as a plan to infiltrate those groups, no matter what level of transparency is actually prescribed.
That said, I’ll offer that any worthwhile cooperation with ethics/bias groups (and, as other commenters have pointed out, some in those circles simply won’t engage in good faith) is unlikely to come from demonstrations of personal friendliness, but from demonstrations of willingness to take aligned action. The point that safety/x-risk people have a lot to learn from ethics/bias people, and should do so, seems pretty sound. On that note, some areas I think plenty of rationalists should be (and probably are) concerned about:
- The use of current and in-development AI systems for surveillance and the erosion of privacy rights.
- The development of predictive policing, which, in addition to privacy concerns, poses problems of false positives, overzealous enforcement driven by false confidence, and discriminatory use (getting at similar concerns to Yudkowsky’s in the linked article on police reform); see the illustrative calculation after this list.
- The production and dissemination of misinformation, which malicious actors could use to stoke panic and destabilise sectors of society for outside political gain. (Shiri’s Scissor, anyone?)
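To make the false-positive point concrete, here is a quick base-rate sketch of my own (not from the original post); every number below is an illustrative assumption, not a figure from any real system:

```python
# Illustrative base-rate calculation: why a seemingly accurate predictive
# policing model can still flag mostly false positives when the predicted
# event is rare. All numbers here are made-up assumptions.

base_rate = 0.01            # assumed fraction of people who would actually offend
true_positive_rate = 0.90   # assumed chance the model flags a genuine future offender
false_positive_rate = 0.10  # assumed chance the model flags someone who would not offend

# P(flagged) and P(offends | flagged) via Bayes' rule
p_flagged = true_positive_rate * base_rate + false_positive_rate * (1 - base_rate)
p_offends_given_flagged = (true_positive_rate * base_rate) / p_flagged

print(f"P(flagged)                 = {p_flagged:.3f}")              # ~0.108
print(f"P(actually offends | flag) = {p_offends_given_flagged:.3f}")  # ~0.083
# Roughly 11 of every 12 flagged individuals are false positives,
# even though the model's headline accuracy figures look reassuring.
```

The false-confidence problem compounds this: if operators treat a flag as near-certain evidence, output that is overwhelmingly false positives gets acted on as though it were reliable.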
This is not to say that anyone invested in safety/x-risk should deprioritise that work in favour of these issues. Rather, I believe that, as discussed, there is already substantial alignment between safety and ethics groups on these issues, that there is a lot to gain from greater cooperation between safety/x-risk actors and ethics/bias actors, and that a strategy of publicly pursuing research and action on these issues would be a net gain in utility for all parties, including a good chance of reducing or mitigating several different kinds of AI risk.