I’ll own up to a downvote on the grounds that I think you added nothing to this conversation and were rude. In the proposed scoring system, I’d give you negative aim and negative truth-seeking. In addition, the post you linked isn’t an answer but a question, so you didn’t even add information to the argument; I’d give you negative correctness as well.
If you thought the answers in that thread backed you up:
It’s a mixed bag. A lot of near-term work is scientific, in that theories are proposed and experiments run to test them, but from what I can tell that work is also incredibly myopic and specific to the details of present-day algorithms, and whether any of it will generalize to systems further down the road is exceedingly unclear.
...
A lot of the other work is pre-paradigmatic, as others have mentioned, but that doesn’t make it pseudoscience. Falsifiability is the key to demarcation.
That summarizes a few answers.
I agree, I wouldn’t consider AI alignment to be scientific either. How is it a “problem” though?
AI alignment is pseudo-science.
Would you have downvoted the comment if it had been a simple link to what appeared to be a positive view of AI alignment?
Truth can be negative. Is this forum a cult that refuses to acknowledge alternative ways of approaching reality?