The field of AI alignment is definitely not a rigorous scientific field, but neither is it anything like a fictional world-building exercise. It is a crash program to address an existential risk that appears to have a decent chance of materializing, and soon on the timescale of a civilization, let alone a species.
By its very nature it cannot be a scientific field in the Popperian sense. By the time we have any experimental data on how an artificial general superintelligence actually behaves, the field will be irrelevant. If we could be sure that this couldn't happen soon, we could take more time to explore the field and begin the likely centuries-long process of making it more rigorous.
So I answer your question by rejecting it. You have presented a false dichotomy.
In my experience, doing something poorly takes more time than doing it properly.