This post is based on the common premise of having to ‘pick your battles’. I’m at an impasse in my life and believe this community could offer insights for reflection. I’m particularly interested in perspectives on my situation, though I hope to provide value for others with similar problems. In general, the question can crudely be phrased:
‘What’s a young person’s middle ground for contributing to AI safety?’
Answers should therefore preferably not demand my life’s worth in devotion.
Which battles should a young person choose to fight in the face of AI risks? The rapid changes in the world of AI, and the seeming lack of corresponding policy, deeply concern me. I’m pursuing a Bachelor of Science in Insurance Mathematics (with ‘guaranteed’ entry to a Master’s programme in Statistics or Actuarial Science). While I’m satisfied with my field of study, I feel it doesn’t reflect my values and my need to contribute.
In Lex Fridman’s interview with Eliezer Yudkowsky, Eliezer presents no compelling path forward and paints the future as almost non-existent.
I understand the discussion, but struggle to reconcile it with my desire to take action.
Here are some of my personal assumptions:
• The probability of doom given the development of AGI, plus the probability of solving aging given AGI, is close to 1.
• A future where aging is solved provides me (and humanity in general) with vast ‘amounts’ of utility compared to all other alternatives.
• The probability of solving aging with AGI is large enough for that scenario to carry significant weight in an expected (‘mean’) calculation of my future utility.
I’m aware these assumptions are somewhat incomplete/ill-defined, especially since utility isn’t typically modeled as a cardinal concept. However, they are just meant as context for understanding my value judgements; a rough formalization is sketched below.
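To make the expected-value intuition explicit, here is a minimal sketch in standard decision-theoretic notation. The symbols $p_d$, $p_a$, $U_a$, and $U_d$ are illustrative placeholders, not precise claims:

$$p_d = P(\text{doom} \mid \text{AGI}), \qquad p_a = P(\text{aging solved} \mid \text{AGI}), \qquad p_d + p_a \approx 1.$$

Writing $U_a$ for the utility of a post-aging future and $U_d$ for the utility of doom, the expected utility conditional on AGI is roughly

$$\mathbb{E}[U \mid \text{AGI}] \approx p_a U_a + p_d U_d.$$

If $U_a$ is vast relative to all alternatives, then even a moderate $p_a$ makes the first term dominate the calculation, which is what the third assumption above is gesturing at.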
I live in Scandinavia and see no major (except for maybe EA dk?) political movements addressing these issues. I’m eager to make an impact but feel unsure about how to do so effectively without dedicating my entire life to AI risk.
Although the interview was some time ago, I’ve only recently delved into these thoughts. I’d appreciate any context or thoughts you might provide.
Disclaimer: I’m not in a state of distress. I’m simply seeking a middle ground for making a difference in these areas. Also the tags might be a bit off, so I would appreciate some help with those.
AI Safety Info’s answer to “I want to help out AI Safety without making major life changes. What should I do?” is currently:
It’s great that you want to help! Here are some ways you can learn more about AI safety and start contributing:
Learn More:
Learning more about AI alignment will provide you with good foundations for helping. You could start by absorbing content and thinking about challenges or possible solutions.
Consider these options:
Keep exploring our website.
Complete an online course. AI Safety Fundamentals is a popular option that offers courses for both alignment and governance. There is also Intro to ML Safety, which follows a more empirical curriculum. Getting into these courses can be competitive, but all the material is also available online for self-study. More details are in the follow-up question.
Learn more by reading books (we recommend The Alignment Problem), watching videos, or listening to podcasts.
Join the Community:
Joining the community is a great way to find friends who are interested and will help you stay motivated.
Join the local group for AI Safety, Effective Altruism[1] or LessWrong. You can also organize your own!
Join online communities such as Rob Miles’s Discord or the AI Alignment Slack.
Write thoughtful comments on platforms where people discuss AI safety, such as LessWrong.
Attend an EAGx conference for networking opportunities.
Here’s a list of existing AI safety communities.
Donate, Volunteer, and Reach Out:
Donating to organizations or individuals working on AI safety can be a great way to provide support.
Donate to AI safety projects.
Help us write and edit the articles on this website so that other people can learn about AI alignment more easily. You can always ask on Discord for feedback on things you write.
Write to local politicians about policies to reduce AI existential risk.
If you don’t know where to start, consider signing up for a navigation call with AI Safety Quest to learn what resources are out there and to find social support.
If you’re overwhelmed, you could look at our other article that offers more bite-sized suggestions.
[1] Not all EA groups focus on AI safety; contact your local group to find out if it’s a good match.
You should ignore the EY-style “no future” takes when thinking about your future. If the world is about to end, nothing you do will matter much; but if the world isn’t about to end, what you do might matter quite a bit, so you should plan for the latter.
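Put roughly in expected-value terms (the notation here is a sketch of the argument, nothing formal): let $\Delta_{\text{doom}}$ and $\Delta_{\text{ok}}$ be the difference your choices make in each branch. Then

$$\mathbb{E}[\Delta] = P(\text{doom}) \cdot \Delta_{\text{doom}} + P(\text{no doom}) \cdot \Delta_{\text{ok}} \approx P(\text{no doom}) \cdot \Delta_{\text{ok}},$$

since $\Delta_{\text{doom}} \approx 0$ by assumption. Whatever $P(\text{doom})$ is, short of exactly 1, the “no doom” branch is the only one your planning can affect.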
One quick question to ask yourself is: are you more likely to have an impact on technology, or on policy? Either one is useful. (If neither seems great, then consider earning to give, or just find a way to add value in society in other ways.)
Once you figure that out, the next step is almost certainly building relevant skills, knowledge, and networks. Connect with senior people in relevant roles, ask them (and otherwise try to figure out) which skills are useful, and try to get some experience by working or volunteering with great people or organizations.
Do that for a while and I bet some gaps and opportunities will become pretty clear. 😀
I strongly recommend the AI Safety Fundamentals Course (either technical or policy). Having a better understanding of the problem will help you contribute with whatever time or resources you choose to dedicate to the problem.