One thing they could have achieved was dataset and leaderboard creation (MSCOCO, GLUE, and ImageNet, for example). When chosen wisely, these tend to focus and accelerate research, and they remain useful for years.
Predicting and extrapolating human preferences is a task that features in nearly every AI alignment strategy. Yet we have few datasets for it; the only ones I found are https://github.com/iterative/aita_dataset and https://www.moralmachine.net/
So this hypothetical ML Engineering approach to alignment might have achieved some simple wins like that.
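To make the preference-prediction task concrete, here is a minimal baseline sketch of the kind a dataset release might ship with a leaderboard. It assumes the AITA dataset has been exported to a CSV; the column names "body" and "verdict" are my assumptions for illustration, not the dataset's actual schema.

```python
# Minimal baseline sketch: predict the community's moral verdict from post text.
# ASSUMPTION: the AITA dataset has been exported to aita_dataset.csv with
# hypothetical columns "body" (post text) and "verdict" (community judgment).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("aita_dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["body"], df["verdict"], test_size=0.2, random_state=0
)

# Bag-of-words features plus logistic regression: the trivial baseline a
# leaderboard would publish so later models have something to beat.
vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(f"baseline accuracy: {accuracy_score(y_test, preds):.3f}")
```

The point of publishing something this simple alongside the data is the leaderboard dynamic: it sets a floor, and progress on the task becomes measurable.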
EDIT: Something like this was just released: Aligning AI With Shared Human Values.