If you thought the answers in that thread backed you up:
It’s a mixed bag. A lot of near-term work is scientific, in that theories are proposed and experiments are run to test them. From what I can tell, though, that work is also incredibly myopic and specific to the details of present-day algorithms, and it is exceedingly unclear whether any of it will generalize to systems further down the road.
...
A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn’t make it pseudoscience. Falsifiability is the key to demarcation.
That summarizes a few answers.
I agree; I wouldn’t consider AI alignment to be scientific either. How is that a “problem,” though?