Hey all
I found out about LessWrong through a confluence of factors over the past six years or so: Rob Miles' Computerphile videos and then his personal channel, seeing Aella make the rounds on the internet, and hearing about Manifold, all of which sorta pointed me towards Eliezer and this website. I started reading the Rationality: A-Z posts about a year ago and have gotten up to the value theory section, but over the past few months I've started realizing just how much engaging content there is to read on here. I just graduated with my bachelor's, and I hope to get involved with AI alignment, though Eliezer paints a pretty bleak picture for a newcomer like me. I know not to take any one person's word as gospel, but I'd be lying if I said it wasn't a little disheartening.
I'm not really sure how to break into the field of AI safety/alignment, given that college has left me without a lot of money and I don't exactly have a portfolio or degree that screams machine learning. I worry that I'd have to go back for a graduate degree to even attempt to make a difference. Maybe this is where my unfamiliarity with the field shows, though: I don't actually know what qualifications the positions I'd be interested in require, or whether there's even a formal path into alignment work. Any direction would be appreciated.
Additional context that I realized might be useful for anyone who wants to offer advice:
I'm in my early 20s, so when I say "portfolio" there's nothing really there beyond hobby projects that aren't presentable to employers, and my degree is a mix of engineering and physics simulation. I also live in Austin, which might help with opportunities, but I'm not entirely sure where to look for them.
I'm not an AI safety specialist, but I get the sense that a lot of adjacent skillsets have become useful over the last few years. What kinds of positions would interest you?
MIRI was looking for technical writers recently. Robert Miles makes YouTube videos. Someone made the P(doom) question well known enough to be mentioned in the Senate. I hope there are a few good contract lawyers looking over OpenAI right now. AISafety.Info is a collection of on-ramps, but it also takes ongoing web development and content-writing work. Most organizations need operations teams and accountants no matter what they do.
You might also be surprised what a passable starting point engineering and physics can be. Again, this isn't my field, but if you haven't already, it might be worth reading a couple of recent ML papers and seeing whether they make sense to you, or, better yet, whether you can spot an improvement or a next step you could try.
Put your own oxygen mask on first, though. Especially if you don't have a cunning idea and can't find a way to get started, grab a regular job and get good at that.
Sorry I don’t have a better answer.