I am currently job hunting, trying to get a job in AI Safety, but it seems to be quite difficult, especially outside of the US, so I am not sure if I will be able to do it.
This has to be taken as a sign that AI alignment research is funding-constrained. At a minimum, technical alignment organizations should engage in massive labor hoarding to prevent the talent from going into capabilities research.
This feels game-theoretically pretty bad to me, and not just in the abstract: I expect, concretely, that setting up this incentive will cause a bunch of people to attempt to go into capabilities (based on conversations I’ve had in the space).
For this incentives-reason, I wish hardcore-technical-AI-alignment had a greater support-infrastructure for independent researchers and students. Otherwise, we’re often gonna be torn between “learning/working for something to get a job” and “learning AI alignment background knowledge with our spare time/energy”.
Technical AI alignment is one of the few important fields that you can’t quite major in, and whose closest-related jobs/majors make the problem worse.
As much as agency is nice, plenty of (useful!) academics out there don’t have the kind of agency/risk-taking-ability that technical alignment research currently demands as the price-of-entry. This will keep choking us off from talent. Many of the best ideas will come from sheltered absentminded types, and only the LTFF and a tiny number of other groups give (temporary) support to such people.
Yes, it’s important to get the incentives right. You could set the salary for AI alignment slightly below the worker’s market value. Also, I wonder about the relevant elasticity: how many people have the capacity to get good enough at programming to contribute to capabilities research, and would also want to game my labor-hoarding system because they don’t have really good employment options?