I wouldn’t use the AI-box experiment as an example of anything, because it is specifically designed to be a black box: it’s exciting precisely because the outcome confuses the heck out of people. I’m having trouble putting this precisely in Bayesian terms, but I think you’re committing a rationalist sin by using an event that your model of reality couldn’t have predicted in advance as evidence that your model of reality is correct.
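To spell out that Bayesian point (a minimal sketch with my own labels: $H$ for "your model of reality" and $E$ for the experiment's observed outcome), Bayes' theorem gives

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.$$

If your model found the outcome surprising, i.e. $P(E \mid H) < P(E)$, then $P(H \mid E) < P(H)$: observing $E$ should lower your credence in $H$, not raise it. An outcome that confuses everyone only supports a model if that model assigned it a higher probability than the alternatives did.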
I strongly agree that we need to put less weight on arguments, but I think falsifiability is impractical as a standard in everyday situations.