As ML models become more competent, capabilities researchers will have strong incentives to build superhuman models, and finding superhuman training techniques will become their main focus. Consequently, once the problem becomes more tractable, I don’t see why the capabilities community would neglect it; it would be unreasonable for profit maximizers not to make it a top priority at that point. I don’t see why alignment researchers should work in this high-externality area now while ignoring other, safer alignment research areas (in practice, the alignment teams with compute are mostly just working on this area). I’d be in favor of figuring out how to get superhuman supervision for specific things related to normative factors and human values (e.g., superhuman wellbeing supervision), but superhuman supervision simpliciter will be the aim of the capabilities community anyway.
Don’t worry, the capabilities community will relentlessly maximize vanilla accuracy, and we don’t need to help them.