There are a lot of pretty credible arguments for them to try, especially if their estimates of the risk of AI disempowering humanity are low, and if they see themselves as among the more responsible actors in the industry.
Carl S.
One view is that the risk of AI turning against humanity is less than the risk of a nasty eternal CCP dictatorship if democracies relinquish AI unilaterally. You see this sort of argument made publicly by people like Eric Schmidt, and ‘the real risk isn’t AGI revolt, it’s bad humans’ is almost a reflexive take for many in online discussions of AI risk. That view combines easily with the observation that there has been even less uptake of AI safety in China thus far than in liberal democracies, and with mistrust of CCP decision-making and honesty, to conclude that democracies pressing ahead also reduces accident risk.
My thought: it seems like a convincing demonstration of the risk could be usefully persuasive.
I’ll make an even stronger statement: So long as the probability of a technological singularity isn’t too low, they can still rationally keep working on it even if they know the risk is high, because the expected utility is still far greater than that of not trying.
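To spell out the shape of that argument, here is a toy expected-utility calculation; the numbers are made up for illustration, not anyone’s actual estimates. Let p be the probability the singularity goes well, let U_win be the utility if it does, and normalize the utility of catastrophe to 0. Then

\[
\mathbb{E}[U] = p \cdot U_{\text{win}} + (1 - p) \cdot 0 = p \cdot U_{\text{win}}
\]

If U_win is, say, a million times the utility of the status quo, then even p = 0.01 leaves an expected utility ten thousand times the status quo, so a naive expected-utility maximizer keeps going despite a 99% chance of disaster. Note the load-bearing assumption: utility is treated as linear and unbounded; with a bounded utility function or ordinary risk aversion, the argument weakens considerably.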