There are only so many distinct states of experience, so yes, causal models are countable. The set of all causal models is a set of functions that map K n-valued past experiential states into L n-valued future experiential states.
This is a monstrously huge set of functions, but it is still countable so long as K and L are finite (if the set of past states were countably infinite, the function space would already be uncountable).
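As a minimal sketch of that count (tiny state sets chosen only for illustration): with K possible past states and L possible future states, each experience-mapping assigns one future state to every past state, giving exactly L^K mappings.

```python
from itertools import product

# With K possible past states and L possible future states, an
# experience-mapping assigns one future state to each past state,
# so there are exactly L**K of them (here 2**3 = 8).
past_states = ["p0", "p1", "p2"]   # K = 3
future_states = ["f0", "f1"]       # L = 2

models = list(product(future_states, repeat=len(past_states)))
assert len(models) == len(future_states) ** len(past_states)
print(len(models), "distinct experience-mappings")
```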
Note that this assumes that states of experience with zero discernible difference between them are the same thing—e.g., if you come up with the same predictions using the first million digits of sqrt(2) as you do using the irrational number sqrt(2) itself, then they’re the same model.
But the set of causal models is not the set of experience mappings. The model where things disappear after they cross the cosmological horizon is a different model than standard physics, even though they predict the same experiences. We can differentiate between them because Occam’s Razor favors one over the other, and our experiences give us ample cause to trust Occam’s Razor.
At first glance, it seems this gives us enough to diagonalize models: the model where things vanish 1 meter outside the horizon differs from model one, the one where they vanish 2 meters outside differs from model two, and so on.
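One way to spell out the diagonal construction being gestured at here: given any enumeration of models $M_1, M_2, \dots$, define a model $M^*$ whose behavior $i$ meters past the horizon disagrees with $M_i$'s,

$$M^*(i \text{ m past the horizon}) \neq M_i(i \text{ m past the horizon}) \quad \text{for every } i \in \mathbb{N},$$

so $M^*$ cannot equal any model on the list.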
There might be a way to constrain this based on which models we can assign different probabilities to, given our knowledge and experience, which might get it back down to a countable set, but how to do that is not obvious to me.
Er, now I see that Eliezer’s post is discussing finite sets of physical laws, which rules out the cosmological horizon diagonalization. But I think the causal-models-as-function-mappings picture fails in another way: we can’t predict the n in n-valued future experiential states. Before the camera was switched, B9 would assign low probability to these high-n-valued experiences. If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision. Since it can’t put a bound on the number of values in the L future states (or, by the same reasoning, in the K past states), the set of experiential states is countably infinite rather than finite, and the set of functions between them is uncountable.
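Spelling out the cardinality step this relies on (standard set theory; the uncountability comes from the set of past states being infinite, not merely from any single state taking many values): if the set $D$ of possible past experiential states is countably infinite and there are at least two possible future states in $S$, then

$$\bigl|\,S^{D}\,\bigr| \;\ge\; 2^{\aleph_0},$$

which is uncountable, so no enumeration can list every experience-mapping.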
we can’t predict the n in n-valued future experiential states.
What? Of course we can—it’s much easier to see with a computer program. Suppose you have M bits of state data. Then there are 2^M possible states of experience. What I mean by n-valued is that there is a certain discrete set of possible experiences.
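A minimal sketch of that count (M = 4 is arbitrary, just to keep the enumeration small):

```python
from itertools import product

# With M bits of state data there are exactly 2**M distinct experiential
# states: every possible setting of the M bits is one state.
M = 4
states = list(product([0, 1], repeat=M))
assert len(states) == 2 ** M
print(len(states), "possible states of experience for", M, "bits")
```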
If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision.
Arbitrary, yes. Unbounded, no. It’s still bounded by the amount of physical memory it can use to represent state.
In order to bound the states at a number n, it would need to assign probability zero to ever getting an upgrade that lets it access more than log2(n) bits of memory. I don’t know how that zero-probability assignment could be justified for any n—there’s a non-zero probability that one’s model of physics is completely wrong, and once that’s gone, there’s not much left to make something impossible.
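For concreteness, the memory/state-count relationship both sides are leaning on (a minimal sketch; the helper name is mine): distinguishing n states takes at least ceil(log2(n)) bits, so any bound on n is equivalent to a bound on memory.

```python
import math

# Distinguishing n states requires at least ceil(log2(n)) bits of memory,
# so a system limited to M bits can represent at most 2**M distinct states.
def bits_needed(n: int) -> int:
    return math.ceil(math.log2(n))

for n in (2, 1000, 2**20):
    print(f"{n} states need at least {bits_needed(n)} bits")
```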