There definitely seem to be (relative) grunt work positions in AI safety, like this, this or this. Unless you think these are harmful, it seems like it would be better to direct the Alec-est Alecs of the world that way instead of risking them never contributing.
I understand not wanting to shoulder responsibility for their career personally, and I understand wanting an unbounded culture for those who thrive under those conditions, but I don’t see the harm in having a parallel structure for those who do want/need guidance.
That seems maybe right if Alec isn’t *interested* in helping in non-“grunt” ways. (TBC, “grunt” stuff can be super important; it’s just that we seem much more bottlenecked on 1. non-grunt stuff, and 2. grunt stuff supporting work that’s too weird for people like this to decide to work on.) I’m also saying that Alec might end up able and willing to help in non-grunt ways, but not by taking orders; rather, by going off and learning how to do non-grunt stuff in a context with clearer feedback.
It could be harmful to Alec to give him orders to work on “grunt” stuff, for example by playing into his delusion that doing some task is crucially important for the world not ending, which puts an inappropriate amount of pressure and stress on him and, more importantly, is probably false. It could be harmful to Alec if he’s providing labor for whoever managed to gain control of the narrative via fraud, because then fraudsters get lots of labor and are empowered to do more fraud. And it could be harmful to Alec if he feels he has to add weight to the narrative that what he’s doing matters, thereby amplifying information cascades.