For this incentives reason, I wish hardcore technical AI alignment had more support infrastructure for independent researchers and students. Otherwise, we're often gonna be torn between "learning/working on something to get a job" and "learning AI alignment background knowledge in our spare time/energy".
Technical AI alignment is one of the few important fields that you can’t quite major in, and whose closest-related jobs/majors make the problem worse.
As much as agency is nice, plenty of (useful!) academics out there don't have the agency/risk tolerance that technical alignment research currently demands as the price of entry. This will keep cutting us off from talent. Many of the best ideas will come from sheltered, absentminded types, and only the LTFF and a tiny number of other groups give (temporary) support to such people.