I’m aware that a lot of AI Safety research is already of questionable quality. So my question is: how can I determine as quickly as possible whether I’m cut out for this?
My key comment here is that, to be an independent researcher, you will have to rely day by day on your own judgement about what has quality and what is valuable. So: do you think you have such judgement, and could you develop it further?
To find out, I suggest you skim a bunch of alignment research agendas, or research overviews like this one, and then read some abstracts/first pages of the papers mentioned in there, while trying to apply your personal, somewhat intuitive judgement to decide:
- which agenda item/approach looks most promising to you as an actual method for improving alignment, and
- which agenda item/approach you feel you could contribute most to, based on your own skills.
If your personal intuitive judgement tells you nothing about the above questions, that is, if it all looks the same to you, then you are probably not cut out to be an independent alignment researcher.