"learning from just a few AI researchers who are most sympathetic to its current position"
That's some very serious bias and circular updating on cherry-picked evidence.
Actually, you know what's worse? Say you discover that your truth-finding method yields both A and ~A. The normal reaction is to conclude that the method itself is flawed: some of the premises are contradictory, the axiom set is inconsistent, the method is not rigorous enough, the understanding of the concepts is too fuzzy, and so on. If I were working on an automatic proof system, or any automated reasoning really, and it generated both A and ~A depending on the order of the search, I'd know I had a bug to fix (even if it normally only outputs A). The reaction here is instead to proudly announce a refusal to check whether your method also gives ~A when you have shown it gives A, and to proudly announce that you won't give up on a method that is demonstrably flawed (normally you move on to something less flawed, such as being more rigorous).
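To make the bug concrete, here is a minimal sketch (the toy knowledge base and all names are my own illustration, not anyone's actual system): a naive propositional resolution prover handed an inconsistent axiom set will happily "prove" both A and ~A. If you only ever query A, the inconsistency never surfaces.

```python
from itertools import combinations

# Clauses are frozensets of literals; a literal is a (name, polarity) pair.
def resolve(c1, c2):
    """Return every resolvent of the two clauses."""
    resolvents = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            resolvents.append(frozenset((c1 - {(name, pol)}) | (c2 - {(name, not pol)})))
    return resolvents

def entails(kb, literal):
    """Resolution refutation: kb entails `literal` iff kb plus its negation is unsatisfiable."""
    name, pol = literal
    clauses = set(kb) | {frozenset([(name, not pol)])}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:            # empty clause derived: contradiction found
                    return True
                new.add(r)
        if new <= clauses:           # fixpoint reached without contradiction
            return False
        clauses |= new

# A deliberately inconsistent "axiom set": B, (B -> A), and not-A.
kb = [frozenset([("B", True)]),
      frozenset([("B", False), ("A", True)]),   # clause form of B -> A
      frozenset([("A", False)])]

print(entails(kb, ("A", True)))    # True
print(entails(kb, ("A", False)))   # also True: both A and ~A are "proved"
```

Querying both polarities is exactly the consistency check the paragraph above describes as being refused: when the prover returns True for both A and ~A, the problem is in the axioms, not the conclusions.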
On top of this, the Dunning-Kruger effect being what it is, we should expect very irrational people to be irrational enough to believe themselves very rational. So if you claim to be very rational, there are naturally two categories with an excluded middle: very rational and aware of it, or very irrational and too irrational to know it. A few mistakes incompatible with the former go a long way.