Hire a High-Profile AI Researcher
SI says they’ve succeeded in convincing a few high-profile AI researchers that AGI research is dangerous. If one of these researchers could be hired as an SI staff member, they could lend their expertise to the development of Friendliness theory and also enhance SI’s credibility within the broader AI research community.
A related idea is to ask these researchers to sign a petition stating their concerns about AI dangers.
Both of these ideas carry a risk of unwanted publicity.
Note that both of these ideas are on SI’s radar; I mention them here so folks can comment.
Could you elaborate on how these ideas could lead to unwanted publicity?
Having a high-profile AI researcher join SI, or a number of high-profile AI researchers publicly express concern about AI safety, could make an attention-grabbing headline for a wide variety of audiences. It’s not clear that encouraging commentary on AI safety from the general public is a good idea.