so you think you’re not qualified to do technical alignment research?
along with “i’m not sure how i’d get paid (enough)”, “i don’t think i’m qualified” is the most common reason people who think AI alignment is important give me for why they’re not doing technical AI alignment research themselves. here are some arguments as to why they might be wrong.
AI alignment researchers are still confused about a lot of fundamental things. the field of AI safety has 70 to 300 people depending on who you ask/how you count, and most of them are doing prosaic research, especially interpretability, which i don’t think is gonna end up being of much use. so the number of people working in the field is small, and the number of people contributing helpful novel stuff is even smaller.
i’m bad at math. i’m worse at machine learning. i just have a bachelor’s in compsci, and my background is in software engineering for game development. i’ve only been working on AI alignment seriously since last year. yet i’ve written a variety of posts that are, at least in my opinion, helpful for alignment: see for example 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
as is said in some of the recommended resources at the bottom of my intro to AI doom and alignment, such as the alignment research field guide or the “getting started in AI safety” talk, it is important to do backchaining: look at the problem, figure out what pieces you think would be needed to solve it, and then continue backwards by figuring out what you’d need to get those pieces (see the toy sketch below). it’s also important to just think about the problem and learn things only as you actually need them: you should not feel like you have to get through a whole pile of posts/books/etc before you’re allowed to think about solutions to the problem. you risk wasting time learning stuff that isn’t actually useful to you, and you also risk losing some of your diversity value, which i believe is still sorely needed given how hopeless existing approaches are.
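to make backchaining concrete, here’s a minimal toy sketch in python. the goals and dependencies in it are made up purely for illustration, not taken from this post or from any particular research agenda; the point is just the shape of the process: start from the end goal and walk backwards through the pieces you think it needs.

```python
# toy backchaining sketch: start from the end goal and recursively ask
# "what pieces would i need to get this?". the entries below are
# hypothetical placeholders, only meant to show the backward walk.

needs = {
    "aligned AI": ["a goal we'd actually endorse", "a way to point an AI at that goal"],
    "a goal we'd actually endorse": ["some understanding of what we value"],
    "a way to point an AI at that goal": ["some understanding of agency"],
}

def backchain(goal, depth=0):
    """print the tree of prerequisites for `goal`, depth-first."""
    print("  " * depth + goal)
    for piece in needs.get(goal, []):
        backchain(piece, depth + 1)

backchain("aligned AI")
```

running this prints the end goal at the top and its prerequisites indented underneath it; the things you actually need to learn or build are whatever shows up at the leaves, which is usually a much smaller set than “everything anyone has written about alignment”.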
the field is small, the bar for helping is low, and alignment researchers are confused about many things. if you think you’re not qualified enough to make useful contributions to technical alignment research, there’s a good chance you’re wrong.