Just brainstorming a few ways to contribute, assuming “regular” means “non-technical”:
Can you work in a non-technical role at an org that works in this space?
Can you identify a gap among the existing orgs that would be best filled by someone (e.g. you) founding a new org?
Can you identify a need that AI safety researchers have, then start a company to fill that need? Bonus points if this doesn’t accelerate capabilities research.
Can you work on AI governance? My expectation is that coordination to avoid developing AGI is going to be really hard, but not impossible.
More generally, if you really want to go this route, I’d suggest trying to form (1) an inside view of the AI safety space and (2) a theory of how you can make positive change in that space.
On the other hand, it is totally fine to work on other things. I’m not sure I would endorse moving from a job that’s a great personal fit to something that’s a much worse fit in AI safety.