The best answer is to impede existing AI research and development efforts, especially the efforts of teams like DeepMind. They are in the business of shrinking the lifespan of the world. If we really are in the endgame, then I genuinely think that’s what people who believe they can’t do alignment research should be focusing on. Even in the event we fail, it buys everybody else on the planet time.
Well, I don’t think that focusing on the most famous slightly-ahead organizations is actually all that useful. I’d expect that the next-best-in-line would just step forward. Impeding data centers around the world would likely be more generally helpful. But realistically for an individual, trying to be helpful to the AI safety community in a non-direct-work way is probably your best bet at contributing.
DeepMind is helping every other organization out by publishing research. Hampering DeepMind would be a much more direct impediment than I think you’re expecting.