What I’d like to see instead is more alignment research, and especially research of the form “this particular direction seems unlikely to succeed, but if it does succeed, then it will in fact help a lot in mainline reality.”
(Obviously even better would be if it seems likely to succeed and helps on the mainline. But ‘longshot that will help if it succeeds’ is second-best.)