Good point. Maybe I should say it is the only method for finding out general truths about the world. It’s not the only way to answer specific, narrow, practical questions like whether a particular building or road can be built.
Do you believe that philosophy is science? Do you believe it can be used to find out any general truths about the world?
Most of the knowledge that human experts in most fields have is implicit knowledge, not the kind of knowledge you could write down in a book. Do you think that knowledge contains no general truths about the world?
Maybe “general truths” is still too broad. Let’s approach this a different way. I submit that science is the best and only method for establishing a certain class of truths. I’m not totally sure how to describe that class. It consists of general truths about the world, but maybe it’s narrower than that. But I’m pretty sure there is such a class. Do you agree? How would you describe the type of knowledge that science (and only science) can get us?
I think a key feature of science is that it’s about public knowledge as opposed to private knowledge. You can verify whether or not a scientific claim is true. If, on the other hand, you are dealing with a superforecaster, you can verify whether the superforecaster has a good overall track record, but you can’t verify whether specific claims are true in the way you can with scientific claims.
You can write down all your scientific knowledge in a textbook, and then the knowledge is independent of the reader. An expert can’t write down his implicit knowledge in a similar way, such that a reader gets all the knowledge just by reading it.
Science is inherently about using systematized ways to understand a subject. An expert who explores a subject unsystematically can still understand all the truths about it.
One of the interesting things about LLMs is that people used to believe that an AI would have to reason much more systematically to be truly intelligent. LLMs proved that wrong: a very unsystematic approach still produced an AI that’s more intelligent than anything built with the more systematic approaches.
That means you can’t easily verify whether the claims of the LLM are true, but I still think that the LLM can learn “general truths” from the data it has access to.