There is one ‘mostly harmless’ that applies to people you expect will fail at AGI. There is an entirely different ‘mostly harmless’ for an organization whose research director actually tries to build AIs that could kill us all. Why would I not consider the SIAI itself an existential risk if its criteria for recruiting a director are so lax? Being absolutely terrified of disaster is the kind of thing that helps ensure appropriate mechanisms to prevent defection are kept in place.
What is the right thing to do here? Should we try to force an answer out of SIAI, for example by publicly accusing it of not taking existential risk seriously?
Yes. The SIAI has to convince us that they are mostly harmless.