Thanks for your reply. Popper-falsifiable does not mean experiment-based in my book. Math is falsifiable—you can present a counterexample, an error in reasoning, a paradoxical result, etc. Similarly, in history you can often falsify claims by providing evidence against them. But you cannot falsify a field where every definition is hand-waved and nothing is specified in detail. I agree that AI Alignment has pre-paradigmatic features as far as Kuhn goes. But Kuhn also says that pre-paradigmatic science is rarely rigorous or true, even though it might produce some results that lead to something interesting in the future.
mocny-chlapik
Karma: 47
[Question] Is AI Alignment a pseudoscience?
Is it only technical achievements that no longer get celebrated? In old books you sometimes read that a certain celebrity was greeted by a huge crowd when they arrived in the USA by boat. Can you imagine crowds waiting for celebrities nowadays? Sure, a celebrity can have some fans, but certainly not crowds waiting for them. I believe social media are simply replacing crowd celebrations, and people no longer need to actually go outside to celebrate. You can watch the event live with great video coverage (whereas in a crowd you usually don't see much), and you can also interact with all your friends (rather than with a bunch of random onlookers). This makes social media much more comfortable and accessible.
Thanks for your reply. I am aware of that, but I didn't want to reduce the discussion to particular papers. I was curious how other people read this field as a whole and what their opinion of it is. One particular example I had in mind is the Embedded Agency post, often mentioned as good introductory material on AI Alignment. The text frequently mentions deep mathematical concepts, such as the halting problem, Gödel's theorem, Goodhart's law, etc., in a very abrupt fashion and uses these concepts to evoke certain ideas. But a lot is left unsaid, e.g. if Turing completeness is invoked, is there an assumption that AGI will be a deterministic state machine? Is this an assumption for the whole paper or only for that particular passage? What about other models of computation, e.g. theoretical hypercomputers? I think it would benefit the field if these assumptions were stated somewhere in the writing. You need to know what the limitations of individual papers are; otherwise you don't know what kinds of questions were actually covered previously. E.g., if a paper covers only Turing-computable AGI, that should be clearly stated so others can work on other models of computation.