Ty for the comment. I mostly disagree with it. Here’s my attempt to restate the thrust of your argument:
The issues with binary truth-values raised in the post are all basically getting at the idea that the meaning of a proposition is context-dependent. But we can model context-dependence in a Bayesian way by referring to latent variables in the speaker’s model of the world. Therefore we don’t need fuzzy truth-values.
But this assumes that, given the speaker’s probabilistic model, truth-values are binary. I don’t see why this needs to be the case. Here’s an example: suppose my non-transhumanist friend says “humanity will be extinct in 100 years”. And I say “by ‘extinct’ do you include scenarios where humans are genetically engineered until they’re a different species? How about being uploaded? How about everyone being cryonically frozen, to be revived later? How about....”
In this case, there is simply no fact of the matter about which of these possibilities should be included or excluded in the context of my friend’s original claim, because (I’ll assume) they hadn’t considered any of those possibilities.
More prosaically, even if I have considered some of these possibilities in the past, at the time I make a statement I’m not actively considering almost any of them. For some of them, if you’d raised the possibility at the time, I’d have said “obviously I did/didn’t mean to include that”; for others I’d have said “huh, idk”; and for others still I would have said different things depending on how you presented them to me. So what reason do we have to think that there’s any ground truth about what the context does or doesn’t include? A similar argument applies to approximation error, e.g. about how far away the grocery store is: clearly a 10km error is unacceptable, and a 1m error is acceptable, but what reason do we have to think that any “correct” threshold can be deduced even given every fact about my brain-state when I asked the question?
I picture you saying in response to this “even if there are some problems with binary truth-values, fuzzy truth-values don’t actually help very much”. To this I say: yes, in the context of propositions, I agree. But that’s because we shouldn’t be doing epistemology in terms of propositions. And so you can think of the logical flow of my argument as:
1. Here’s why, even for propositions, binary truth is a mess. I’m not saying I can solve it, but this section should at least leave you open-minded about fuzzy truth-values.
2. Here’s why we shouldn’t be thinking in terms of propositions at all, but rather in terms of models.
3. And when it comes to models, something like fuzzy truth-values seems very important (because it is crucial to be able to talk about models being closer to the truth without being absolutely true or false).
I accept that this logical flow wasn’t as clear as it could have been. Perhaps I should have started off by talking about models, and only then introduced fuzzy truth-values? But I needed the concept of fuzzy truth-values to explain why models are actually different from propositions at all, so idk.
I also accept that “something like fuzzy truth-values” is kinda undefined here, and am mostly punting that to a successor post.
But this assumes that, given the speaker’s probabilistic model, truth-values are binary.
In some sense yes, but there is totally allowed to be irreducible uncertainty in the latents—i.e. given both the model and complete knowledge of everything in the physical world, there can still be uncertainty in the latents. And those latents can still be meaningful and predictively powerful. I think that sort of uncertainty does the sort of thing you’re trying to achieve by introducing fuzzy truth values, without having to leave a Bayesian framework.
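A minimal numerical sketch of this point (my own construction, not from the thread): a latent variable standing in for the intended meaning of a sentence, updated on every piece of evidence the toy world contains, can end up informative yet still uncertain. The two candidate meanings and the evidence probabilities are made up for illustration.

```python
import numpy as np

# Toy latent "semantics" variable S with two candidate meanings of the
# friend's claim ("narrow extinction" vs "broad extinction"), uniform prior.
# Each piece of evidence (the sentence, conversational context, follow-ups)
# is modeled as a binary observation slightly more likely under one meaning.
prior = np.array([0.5, 0.5])      # P[S = narrow], P[S = broad]
p_obs = np.array([0.7, 0.4])      # P[evidence_i = 1 | S], per meaning (assumed)

evidence = [1, 1, 0, 1, 1]        # ALL the observable facts in this toy world

post = prior.copy()
for e in evidence:
    lik = p_obs if e == 1 else 1 - p_obs
    post = post * lik             # Bayes' rule, one observation at a time
    post /= post.sum()

# post favors "narrow" (~0.82) but is not 0 or 1: after conditioning on
# everything there is, the math happily leaves S uncertain.
```

Nothing here violates the Bayesian framework; the residual uncertainty is just a consequence of the likelihoods overlapping and the evidence being finite.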
Let’s look at this example:
suppose my non-transhumanist friend says “humanity will be extinct in 100 years”. And I say “by ‘extinct’ do you include genetically engineered until future humans are a different species? How about being uploaded? How about all being cryonically frozen, to be revived later? How about....”
In this case, there is simply no fact of the matter about which of these possibilities should be included or excluded in the context of my friend’s original claim...
Here’s how that would be handled by a Bayesian mind:
There’s some latent variable representing the semantics of “humanity will be extinct in 100 years”; call that variable S for semantics.
Lots of things can provide evidence about S. The sentence itself, context of the conversation, whatever my friend says about their intent, etc, etc.
… and yet it is totally allowed, by the math of Bayesian agents, for that variable S to still have some uncertainty in it even after conditioning on the sentence itself and the entire low-level physical state of my friend, or even the entire low-level physical state of the world.
If this seems strange and confusing, remember: there is absolutely no rule saying that the variables in a Bayesian agent’s world model need to represent any particular thing in the external world. I can program a Bayesian reasoner hardcoded to believe it’s in the Game of Life, and feed that reasoner data from my webcam, and the variables in its world model will not represent any particular stuff in the actual environment. The case of semantics does not involve such an extreme disconnect, but it does involve some useful variables which do not fully ground out in any physical state.
Here’s how that would be handled by a Bayesian mind:
There’s some latent variable representing the semantics of “humanity will be extinct in 100 years”; call that variable S for semantics.
Lots of things can provide evidence about S. The sentence itself, context of the conversation, whatever my friend says about their intent, etc, etc.
… and yet it is totally allowed, by the math of Bayesian agents, for that variable S to still have some uncertainty in it even after conditioning on the sentence itself and the entire low-level physical state of my friend, or even the entire low-level physical state of the world.
What would resolve the uncertainty that remains after you have conditioned on the entire low-level state of the physical world? (I assume that we’re in the logically omniscient setting here?)
We are indeed in the logically omniscient setting still, so nothing would resolve that uncertainty.
The simplest concrete example I know is the Boltzmann distribution for an ideal gas—not the assorted things people say about the Boltzmann distribution, but the actual math, interpreted as Bayesian probability. The model has one latent variable, the temperature T, and says that all the particle velocities are normally distributed with mean zero and variance proportional to T. Then, just following the ordinary Bayesian math: in order to estimate T from all the particle velocities, I start with some prior P[T], calculate P[T|velocities] using Bayes’ rule, and then for ~any reasonable prior I end up with a posterior distribution over T which is very tightly peaked around the average particle energy… but has nonzero spread. There’s small but nonzero uncertainty in T given all of the particle velocities. And in this simple toy gas model, those particles are the whole world; there’s nothing else to learn about which would further reduce my uncertainty in T.
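That calculation can be run numerically. A sketch (units chosen so the variance equals T exactly, with a flat prior over a grid of candidate temperatures; the particle count and grid are arbitrary choices):

```python
import numpy as np

# Ideal-gas toy world: every particle velocity ~ Normal(0, T).
rng = np.random.default_rng(0)
true_T = 2.0
v = rng.normal(0.0, np.sqrt(true_T), size=10_000)  # the entire toy world

# Grid posterior over T with a flat prior.
T_grid = np.linspace(0.5, 5.0, 2000)
# Gaussian log-likelihood summed over all particles, for each candidate T:
#   sum_i [ -0.5*log(2*pi*T) - v_i^2 / (2T) ]
log_lik = -0.5 * len(v) * np.log(2 * np.pi * T_grid) - (v @ v) / (2 * T_grid)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

mean_T = (post * T_grid).sum()
std_T = np.sqrt((post * (T_grid - mean_T) ** 2).sum())
# mean_T sits near the average squared velocity; std_T is small but nonzero,
# even though we conditioned on every particle in the model.
```

With 10,000 particles the posterior standard deviation comes out around T·√(2/N) ≈ 0.03: tightly peaked, nonzero spread, and no further data in the world to shrink it.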