Versions of this question have probably been asked before, but MIRI at least is getting more pessimistic and I’ve seen Eliezer express multiple times that he doesn’t know what actually useful advice to give people who aren’t in the field yet.
I don’t want the world to end, but I am only a decently intelligent person of mediocre competence. I could try to read and grok alignment research until I can have productive thoughts, but I do not anticipate that helping (though I will probably start doing more reading anyway; I’m tentatively planning to try reading Jaynes). Should I go into some kind of advocacy? I don’t know how I would do productive work there either, really.
I would guess there are others in a roughly similar situation to mine. Does anyone have ideas?
The best answer is to impede existing AI research and development efforts, especially the efforts of teams like DeepMind. They are in the business of shrinking the lifespan of the world. If we really are in the endgame, then I genuinely think that’s what people who believe they can’t do alignment research should be focusing on. Even in the event we fail, it buys everybody else on the planet time.
Well, I don’t think that focusing on the most famous slightly-ahead organizations is actually all that useful. I’d expect that the next-best-in-line would just step forward. Impeding data centers around the world would likely be more generally helpful. But realistically, for an individual, trying to be helpful to the AI safety community in a non-direct-work way is probably your best bet at contributing.
DeepMind is helping every other organization out by publishing research. Hampering DeepMind is much more of a direct impediment than I think you’re expecting.
Have you considered local AI safety meetup building? The minimal version of this is just to organise a dinner every month and advertise it on Facebook and MeetUp.com. There’s no need for you to be an expert on AI safety.
You might consider volunteering for Stampy as well—https://stampy.ai/
Additional thoughts:
Have you considered booking a call with AI Safety Support or applying to speak to 80,000 Hours?
You can also express interest in the next round of the AGI Safety Fundamentals course.
Eliezer replied to a comment of mine recently, coming out in favor of going down the human augmentation path. I also think genetically engineered von Neumann babies are too far off to be realistic.
If we can really crack human motivation, I expect possible productivity gains of maybe one or two orders of magnitude.
You don’t need to be a genius to make this happen.