What alternative theory of truth can we use, if not a correspondence one? There are a few options, but in the interest of time I’ll consider only my favorite here: predicted experience. That is, rather than assuming that there is some external territory to be mapped, and that a mapping (a proposition) is true to the extent it accurately corresponds to that territory, we can ground truth in our experience, since experience is the only thing we are really forced to assume (see the aside below for why). Propositions or beliefs are then true to the extent that they predict what we experience.
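To gesture at what a formalization might look like, here is a minimal sketch. It makes assumptions of my own choosing that the view itself isn’t committed to: a proposition is anything that assigns probabilities to the next observation given the observations so far, and its degree of truth is its average log score on actual experience.

```python
import math

# Sketch: a "proposition" is a function from observation history to a
# probability distribution over the next observation. Its degree of truth
# is its average log score on experience: higher means "more true".
# All names and numbers here are illustrative, not a canonical formalism.

def sunrise_model(history):
    # Proposition: "the sun rises every morning."
    return {"sunrise": 0.99, "no_sunrise": 0.01}

def coinflip_model(history):
    # A maximally noncommittal rival proposition.
    return {"sunrise": 0.5, "no_sunrise": 0.5}

def truth_degree(model, experiences):
    """Average log score of the model's predictions over a stream of experiences."""
    total, history = 0.0, []
    for obs in experiences:
        total += math.log(model(history)[obs])
        history.append(obs)
    return total / len(experiences)

experiences = ["sunrise"] * 30
print(truth_degree(sunrise_model, experiences))   # ~ -0.01, near-perfect predictor
print(truth_degree(coinflip_model, experiences))  # ~ -0.69, i.e. log(0.5)
```

On this picture a proposition’s type signature is roughly “history of observations → distribution over the next observation”, and truth comes in degrees rather than being binary.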
Could you set up a mathematical toy model for this, similar to the mathematical toy models that exist for various correspondence theories? Or point to one that already exists? Or, failing that, just answer a few questions?
In particular, I’m confused about a few things here:
What’s the type signature of a proposition here? In a correspondence theory it would be some logical expression whose atomic parts describe basic features of the world, but that doesn’t seem viable here. I guess you could have the atomic parts describe basic features of your observations instead, but that would run you into the problems of logical positivism.
Can there be multiple incompatible propositions that predict the same experiences, and how does your approach deal with them? In particular, what if they only predict the same experiences within some range of observation, but diverge outside of it? What if you can’t get outside that range, or simply never do?
How does it deal with things like collider bias? If Nassim Taleb filters for people with high g factor (due to job + interests) and for people who understand long tails (due to his strong opinions on long tails), his experience might become that there is a negative correlation between intelligence and understanding long tails. Would it then be “true” “for him” that there’s a tradeoff between g and understanding long tails, even if g is positively correlated with understanding long tails in more representative experiences?
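To make the filtering story in that last question concrete, here is a small simulation, reading the two filters as two routes into Taleb’s social sample (the distributions, the strength of the true correlation, and the selection thresholds are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# By assumption, g and understanding of long tails are positively
# correlated in the general population.
g = rng.normal(size=n)
tails = 0.3 * g + rng.normal(size=n)
print(np.corrcoef(g, tails)[0, 1])  # ~ +0.29 in the full population

# Taleb's "experience": he mostly meets people selected for high g
# (job + interests) or for caring about long tails (his strong opinions),
# i.e. his sample conditions on a collider of both variables.
selected = (g > 1.0) | (tails > 1.0)
print(np.corrcoef(g[selected], tails[selected])[0, 1])  # negative
```

The point is just that conditioning on a collider really can flip the sign of a correlation within the stream of experiences that the selection produces.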
> Can there be multiple incompatible propositions that predict the same experiences, and how does your approach deal with them? In particular, what if they only predict the same experiences within some range of observation, but diverge outside of it? What if you can’t get outside that range, or simply never do?
That seems fine. Consistency is often useful, but not always. Sometimes completeness is better, even at the expense of consistency.
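For concreteness, the kind of pair that question is about is easy to exhibit. Here is a toy example of my own (the functions and the observable range are invented):

```python
# Two incompatible propositions about how a quantity grows, which predict
# identical experiences inside an observable range and diverge outside it.

def model_a(x):
    # "The quantity grows linearly."
    return x

def model_b(x):
    # "The quantity grows linearly up to 10, then cubically."
    return x if x <= 10 else x ** 3

observable = range(0, 11)  # suppose experience never gets past x = 10

# Inside the observable range the two are predictively identical:
print(all(model_a(x) == model_b(x) for x in observable))  # True

# Outside it they disagree wildly:
print(model_a(100), model_b(100))  # 100 vs 1000000
```

On a predicted-experience account, if experience never leaves the range, the two models come out equally true, which is the bullet the reply above seems happy to bite.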
> How does it deal with things like collider bias? If Nassim Taleb filters for people with high g factor (due to job + interests) and for people who understand long tails (due to his strong opinions on long tails), his experience might become that there is a negative correlation between intelligence and understanding long tails. Would it then be “true” “for him” that there’s a tradeoff between g and understanding long tails, even if g is positively correlated with understanding long tails in more representative experiences?
Since experience is subjective, and I’m implicitly talking about subjective probability here (this is LessWrong; no frequentists allowed 😛), truth does of course become subjective. But that’s only because “subjective” is kind of meaningless: there’s no such thing as objectivity anyway, except insofar as some things are so common among the experiences we classify as reports of other people’s experience that we infer there may be some stuff out there that is the same for all of us.