“The ball is blue” only gets assigned a probability by your prior when “blue” is interpreted, not as a word that you don’t understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn’t previously know about, plus the one number you do know about. It’s like imagining that there’s a fifth force appearing in quark-quark interactions a la the “Alderson Drive”. You don’t need to have seen the fifth force for the hypothesis to be meaningful, so long as the hypothesis specifies how the causal force interacts with you.
If you restrict yourself to only finite sets of physical laws of this sort, your prior will be over countably many causal models.
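A minimal sketch of what such a countable prior could look like (this is my own illustrative toy, with a made-up simplicity weighting, not the post’s construction): index hypotheses by how many numbers describe light, and give each extra number less prior mass.

```python
# Illustrative toy (my construction, not the post's): a prior over
# hypotheses of the form "light is described by c numbers." The agent
# currently perceives one number (brightness); "the ball is blue" only
# denotes anything under hypotheses with at least three.
def channel_prior(c: int) -> float:
    # Simplicity weighting: each extra physical quantity halves the prior,
    # so the prior normalizes over the countable set c = 1, 2, 3, ...
    return 2.0 ** -c

# Prior mass on hypotheses under which "blue" is meaningful (c >= 3):
p_blue_meaningful = sum(channel_prior(c) for c in range(3, 60))
print(p_blue_meaningful)  # close to 0.25
```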
“The ball is blue” only gets assigned a probability by your prior when “blue” is interpreted, not as a word that you don’t understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn’t previously know about, plus the one number you do know about.
Note that a conversant AI will likely have a causal model of conversations, and so there are two distinct things going on here: both “what are my beliefs about words that I don’t understand used in a sentence” and “what are my beliefs about physics I don’t understand yet.” This split is a potential source of confusion, and the conversational model is one reason why the betting argument for quantifying uncertainties meets serious resistance.
To me the conversational part of this seems far less complicated/interesting than the unknown-causal-models part—if I have any ‘philosophical’ confusion about how to treat unknown strings of English letters, it is not obvious to me what it is.

Causal models are countable? Are irrational constants not part of causal models?
There are only so many distinct states of experience, so yes, causal models are countable. The set of all causal models is the set of functions mapping K n-valued past experiential states to L n-valued future experiential states.

That is a monstrously huge set of functions, but still a countable one so long as K, L, and n are finite: taking the union over all finite values of those parameters gives a countable union of finite sets, which is countable.
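Under those finiteness assumptions the count is easy to make concrete (a toy with made-up parameters; the function names are mine):

```python
from itertools import product

# Toy count of causal models for finite K, L, n.
# K n-valued past components  -> n**K distinct past states.
# L n-valued future components -> n**L distinct future states.
# A "causal model" here is any function from past states to future states.
def count_models(K: int, L: int, n: int) -> int:
    return (n ** L) ** (n ** K)

def enumerate_models(K: int, L: int, n: int):
    """Explicitly list every past->future mapping (tiny inputs only!)."""
    pasts = list(product(range(n), repeat=K))
    futures = list(product(range(n), repeat=L))
    for assignment in product(futures, repeat=len(pasts)):
        yield dict(zip(pasts, assignment))

print(count_models(2, 2, 2))                      # (2**2)**(2**2) = 256
print(sum(1 for _ in enumerate_models(2, 2, 2)))  # 256: matches
```

Listing the models for each finite (K, L, n) in turn enumerates the whole countable union.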
Note that this assumes that states of experience with zero discernible difference between them are the same thing—e.g., if you come up with the same predictions using the first million digits of sqrt(2) as using the irrational number sqrt(2) itself, then they’re the same model.
But the set of causal models is not the set of experience mappings. The model where things disappear after they cross the cosmological horizon is a different model from standard physics, even though the two predict the same experiences. We can differentiate between them because Occam’s Razor favors one over the other, and our experiences give us ample cause to trust Occam’s Razor.
At first glance, it seems this gives us enough to diagonalize models: one meter outside the horizon is different from model one, two meters is different from model two...
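A sketch of that diagonal construction (my own illustrative framing: a “does an object persist at distance d?” predicate stands in for a whole model):

```python
from typing import Callable, List

Model = Callable[[int], bool]  # meters past the horizon -> "object persists?"

def diagonal(models: List[Model]) -> Model:
    """Build a model that disagrees with the i-th listed model at i meters."""
    def persists(d: int) -> bool:
        if d < len(models):
            return not models[d](d)  # differ from model d exactly at d meters
        return True                  # arbitrary choice beyond the list
    return persists

# Example: three models that all say objects persist everywhere.
models: List[Model] = [lambda d: True] * 3
odd_one_out = diagonal(models)
print([odd_one_out(d) for d in range(3)])  # [False, False, False]
```

If the enumeration were complete, the diagonal model could not appear anywhere in it, which is the worry about uncountability.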
There might be a way to constrain this based on the models we can assign different probabilities to, given our knowledge and experience, which might get it down to a countable number, but how to do it is not obvious to me.
Er, now I see that Eliezer’s post is discussing finite sets of physical laws, which rules out the cosmological-horizon diagonalization. But I think this causal-models-as-function-mappings picture fails in another way: we can’t predict the n in n-valued future experiential states. Before the camera was switched, B9 would assign low probability to these high-n-valued experiences. If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision. Since it can’t put a bound on the number of values in the L states, the set is uncountable, and so is the set of functions.
we can’t predict the n in n-valued future experiential states.
What? Of course we can—it’s easiest to see with a computer program. Suppose you have M bits of state data. Then there are 2^M possible states of experience. What I mean by n-valued is that there is a certain discrete set of possible experiences.
If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision.
Arbitrary, yes. Unbounded, no. It’s still bounded by the amount of physical memory it can use to represent state.
In order to bound the states at a number n, it would need to assign probability zero to ever getting an upgrade giving it access to more than log2(n) bits of memory. I don’t know how this zero-probability assignment could be justified for any n—there’s a non-zero probability that one’s model of physics is completely wrong, and once that’s gone, there’s not much left to make something impossible.
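For concreteness, the arithmetic behind that bound (a toy restating the 2^M counting from above; nothing here is anyone’s actual agent design):

```python
import math

# An agent with M bits of state distinguishes at most 2**M experiences,
# so capping its experiences at n states amounts to assigning probability
# zero to every memory upgrade past ceil(log2(n)) bits.
def max_states(M_bits: int) -> int:
    return 2 ** M_bits

def bits_needed(n_states: int) -> int:
    return math.ceil(math.log2(n_states))

print(max_states(8))       # 256 distinguishable states from one byte
print(bits_needed(1000))   # 10 bits suffice for 1000 states
```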