But this assumes that, given the speaker’s probabilistic model, truth-values are binary.
In some sense yes, but there is totally allowed to be irreducible uncertainty in the latents—i.e. given both the model and complete knowledge of everything in the physical world, there can still be uncertainty in the latents. And those latents can still be meaningful and predictively powerful. I think that sort of uncertainty does the sort of thing you’re trying to achieve by introducing fuzzy truth values, without having to leave a Bayesian framework.
Let’s look at this example:
suppose my non-transhumanist friend says “humanity will be extinct in 100 years”. And I say “by ‘extinct’ do you include humans being genetically engineered until they’re a different species? How about being uploaded? How about everyone being cryonically frozen, to be revived later? How about....”
In this case, there is simply no fact of the matter about which of these possibilities should be included or excluded in the context of my friend’s original claim...
Here’s how that would be handled by a Bayesian mind:
There’s some latent variable representing the semantics of “humanity will be extinct in 100 years”; call that variable S for semantics.
Lots of things can provide evidence about S. The sentence itself, context of the conversation, whatever my friend says about their intent, etc, etc.
… and yet it is totally allowed, by the math of Bayesian agents, for that variable S to still have some uncertainty in it even after conditioning on the sentence itself and the entire low-level physical state of my friend, or even the entire low-level physical state of the world.
If this seems strange and confusing, remember: there is absolutely no rule saying that the variables in a Bayesian agent’s world model need to represent any particular thing in the external world. I can program a Bayesian reasoner hardcoded to believe it’s in the Game of Life, and feed that reasoner data from my webcam, and the variables in its world model will not represent any particular stuff in the actual environment. The case of semantics does not involve such an extreme disconnect, but it does involve some useful variables which do not fully ground out in any physical state.
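To make that concrete, here’s a minimal numerical sketch (my own illustration, not anything from the original exchange; the candidate readings and the likelihood numbers are made up): a discrete latent S ranging over a few possible readings of “extinct”, where the total evidence barely distinguishes the readings, so Bayes’ rule leaves the posterior over S spread out.

```python
import numpy as np

# Hypothetical candidate readings of "humanity will be extinct in 100 years".
# The names and numbers here are invented purely for illustration.
interpretations = [
    "no biological Homo sapiens remain",
    "no biological humans and no uploads",
    "no descendants of humanity in any substrate",
]

prior = np.array([1/3, 1/3, 1/3])  # prior over the latent S

# Likelihood of ALL the available evidence (the sentence, the conversational
# context, the entire low-level physical state of my friend and the world)
# under each reading.  My friend never formed an intent that distinguishes
# the readings, so the evidence is roughly equally likely under each.
likelihood = np.array([0.50, 0.45, 0.40])

# Ordinary Bayes' rule.
posterior = prior * likelihood
posterior /= posterior.sum()

for reading, p in zip(interpretations, posterior):
    print(f"{p:.3f}  {reading}")
# The posterior stays spread across readings: there is nothing further to
# condition on that would collapse it to a single interpretation.
```

The point isn’t the particular numbers; it’s that once the likelihood already accounts for everything observable, any remaining spread in S is irreducible.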
What would resolve the uncertainty that remains after you have conditioned on the entire low-level state of the physical world? (I assume that we’re in the logically omniscient setting here?)
We are indeed in the logically omniscient setting still, so nothing would resolve that uncertainty.
The simplest concrete example I know is the Boltzmann distribution for an ideal gas—not the assorted things people say about the Boltzmann distribution, but the actual math, interpreted as Bayesian probability. The model has one latent variable, the temperature T, and says that all the particle velocities are normally distributed with mean zero and variance proportional to T. Then, just following the ordinary Bayesian math: in order to estimate T from all the particle velocities, I start with some prior P[T], calculate P[T|velocities] using Bayes’ rule, and then for ~any reasonable prior I end up with a posterior distribution over T which is very tightly peaked around the average particle energy… but has nonzero spread. There’s small but nonzero uncertainty in T given all of the particle velocities. And in this simple toy gas model, those particles are the whole world, there’s nothing else to learn about which would further reduce my uncertainty in T.
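Here’s a quick numerical version of that calculation (a sketch under assumptions I’m choosing for convenience: units where the variance of each velocity equals T, a flat prior over a grid of candidate temperatures, and a made-up “true” temperature):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ideal gas: in this model the particle velocities ARE the whole world.
# Units are chosen, purely for illustration, so that variance = T.
T_true = 2.0
N = 10_000
velocities = rng.normal(0.0, np.sqrt(T_true), size=N)

# Flat prior over a grid of candidate temperatures.
T_grid = np.linspace(1.5, 2.5, 2001)

# log P[velocities | T] = -N/2 * log(2*pi*T) - sum(v^2) / (2T)
sum_v2 = np.sum(velocities ** 2)
log_like = -0.5 * N * np.log(2 * np.pi * T_grid) - sum_v2 / (2 * T_grid)

# Bayes' rule with the flat prior: normalize the likelihood over the grid.
post = np.exp(log_like - log_like.max())
post /= post.sum()

mean_T = np.sum(T_grid * post)
std_T = np.sqrt(np.sum((T_grid - mean_T) ** 2 * post))
print(f"posterior mean ~ {mean_T:.3f}, posterior std ~ {std_T:.3f}")
# Sharply peaked near the average squared velocity (~2.0), but the spread
# is nonzero, and there is nothing else in this toy world left to observe
# that could shrink it further.
```

With ten thousand particles the posterior standard deviation comes out around T·sqrt(2/N) ≈ 0.03: small, but not zero, and no further observation exists in the model to drive it lower.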