I bet a lot of them are persuadable in the next 2 to 50 years.
They may be persuadable that, in a non-emergency situation, they should slow down when their AI seems like it’s teetering on the edge of recursive self-improvement. It’s much harder to persuade them to
1. not publish their research that isn’t clearly “here’s how to make an AGI”, and/or
2. not try to get AGI without a good theory of alignment, when “the other guys” seem only a few years away from AGI.
So ~everyone will keep adding to the big pool of ~public information and ideas about AI, until it’s not that hard to get the rest of the way to AGI, at which point a few people showing restraint doesn’t help that much.