I believe I and others here probably have a lot to learn from Chris, and arguments of the form “Chris confidently believes false thing X” are not really a crux for me about this.
Would you kindly explain this? Because you think some of his world-models independently throw out great predictions, even if other models of his are dead wrong?
More like illuminating ontologies than great predictions, but yeah.