My form of naive realism is contingent, not absolutist. It is perfectly acceptable to find myself in error. But … there are certain forms of dead that are just plain dead. Such as the cat I just came home to discover was in rigor mortis an hour ago.
Can we change topics, please?
My condolences. Would you prefer that we use a different example? The upshot is that the Bayesian agrees with you that there is a reality out there. If you agree that one can be in error, then you and the Bayesian aren’t disagreeing here.
The only thing we seem to disagree on is how to formulate statements of belief. As I wrote elsewhere:
What I am espousing is the use of contingent naive realism for handling assertions regarding “the territory”, coincident with Bayesian reasoning regarding “the map”.
I strongly agree that Bayesian reasoning is a powerful tool for making predictive statements, but I still affirm the adage that the number of times a well-balanced coin has come up heads has no bearing on whether it will actually come up heads on the next trial.
But even here the Bayesian agrees with you, so long as the coin is known to be well-balanced.
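To make the coin point concrete, here is a minimal sketch using the standard Beta-Bernoulli model (my own illustration; nothing in the thread specifies a model, and the function name is hypothetical). It shows where the two sides agree and where they would diverge: under a known-fair coin the history of flips has no bearing on the next flip, while under an uncertain bias the history does matter.

```python
# Minimal sketch of a Beta-Bernoulli coin model; names are illustrative.
from fractions import Fraction

def predictive_heads(alpha: int, beta: int, heads: int, tails: int) -> Fraction:
    """Posterior predictive P(next flip = heads) under a Beta(alpha, beta)
    prior on the coin's bias, after observing the given flip counts."""
    return Fraction(alpha + heads, alpha + beta + heads + tails)

# Coin *known* to be well-balanced: a point prior at 1/2, approximated here
# by enormous pseudo-counts. A run of 20 heads barely moves the prediction,
# which is the adage both sides endorse.
print(predictive_heads(10**9, 10**9, heads=20, tails=0))  # ~ 1/2

# Bias *unknown*: uniform Beta(1, 1) prior. The same run of 20 heads now
# raises the prediction to 21/22 (Laplace's rule of succession).
print(predictive_heads(1, 1, heads=20, tails=0))  # 21/22
```

Exact `Fraction` arithmetic is used only so the two predictions are easy to compare by eye; the point is the contrast between the known-fair and unknown-bias priors, not the numbers themselves.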
If that is the case, then the only disagreement you have is a matter of language, not a matter of philosophy. This confuses me, because the subthread about knowledge about one’s own internal cognitive processes seemed to assert a difference in actual philosophy.
I was giving a trivial example. It gets more complicated when we start talking about the probability that a specific truth claim is valid.
I’m not exactly sure how to address this problem. I do know that it is more than just a question of language. The subthread you’re talking about was dealing with the existence of knowable truth, and I was using absolute truths for that purpose.
There are, necessarily, a very limited number of knowable absolute truths.
I do note that your wording in this instance, “knowledge about one’s own internal cognitive processes”, indicates that I may still be failing to achieve sufficient clarity in the message I was conveying in that thread. To reiterate: my claim was that it is absolutely true that when you are cognizant of a specific thought, you know that you are cognizant of that thought. In other words: you know absolutely that you are aware of your awareness. This is tautologically equivalent to self-awareness.
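As a formal aside (my gloss, not the poster’s; the thread itself never invokes modal logic), this “knowing that you know” claim matches the positive-introspection axiom of epistemic logic, often written:

\[
K\varphi \rightarrow KK\varphi
\]

read as: if one knows \(\varphi\), then one knows that one knows \(\varphi\) (the “KK principle”).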