One thing I don’t understand / don’t agree with here is the move from propositions to models. It seems to me that models can be (and usually are) understood in terms of propositions.
For example, Solomonoff understands models as computer programs which generate predictions. However, computer programs are constructed out of bits, which can be understood as propositions. The bits are not very meaningful in isolation; the claim “program-bit number 37 is a 1” has almost no meaning in the absence of further information about the other program bits. However, this isn’t much of an issue for the formalism.
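To make this concrete, here's a minimal toy sketch (my own illustration, not part of Solomonoff's formalism): a "model" is just a bit string, each bit can be read as a proposition, and predictions only come from interpreting the whole string.

```python
from typing import List

# Toy illustration: a "model" is a program encoded as a bit string.
# Each bit corresponds to a proposition ("program-bit number i is a 1"),
# but predictions only arise from interpreting the whole string.

def bit_proposition(program: List[int], i: int) -> bool:
    """The proposition 'program-bit number i is a 1'."""
    return program[i] == 1

def predict(program: List[int], history: List[int]) -> int:
    """Stand-in interpreter: use the whole program plus the observed history
    to produce a next-bit prediction. (A parity rule, not a real universal
    machine; it's only here to show that the predictive content lives at the
    whole-program level.)"""
    return (sum(program) + sum(history)) % 2

program = [1, 0, 1, 1, 0, 1, 0, 0]

# Individually, such propositions are nearly meaningless...
print(bit_proposition(program, 3))          # True, but tells us little alone
# ...yet jointly the bits fix the model's predictions.
print(predict(program, history=[0, 1, 1]))  # 0
```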
Similarly, I expect that any attempt to formally model “models” can be broken down into propositions. E.g., if someone claimed that humans understand the world in terms of systems of differential equations, this would still be well-facilitated by a concept of propositions (i.e., the equations).
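For instance (an illustrative example of my own), a predator-prey model stated as differential equations can be read as a conjunction of propositions, one per equation:

```latex
% Illustrative only: each equation is a proposition one can assign credence to.
\begin{align}
  P_1 &:\quad \frac{dx}{dt} = \alpha x - \beta x y \\
  P_2 &:\quad \frac{dy}{dt} = \delta x y - \gamma y
\end{align}
% "The world is described by this model" is then roughly the claim P_1 \wedge P_2.
```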
It seems to me like a convincing abandonment of propositions would have to be quite radical, abandoning the idea of formalism entirely. This is because you’d have to explain why your way of thinking about models is not amenable to a mathematical treatment (since math is commonly understood in terms of propositions).
So (a) I’m not convinced that thinking in terms of propositions makes it difficult to think in terms of models; (b) it seems to me that refusing to think in terms of propositions would make it difficult to think in terms of models.
> The bits are not very meaningful in isolation; the claim “program-bit number 37 is a 1” has almost no meaning in the absence of further information about the other program bits. However, this isn’t much of an issue for the formalism.
In my post I defend the use of propositions as a way to understand models, and attack the use of propositions as a way to understand reality. You can think of this as a two-level structure: claims about models can be crisp and precise enough that it makes sense to talk about them in propositional terms, but for complex bits of reality you mostly want to make claims of the form “this is well-modeled by model X”. Those types of claims need to be understood in terms of continuous truth-values: they’re basically never entirely true or entirely false.
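As a toy illustration of what I mean by a continuous truth-value (the particular scoring function is an arbitrary choice of mine, not something from the post):

```python
import math

def degree_well_modeled(observations, predictions) -> float:
    """Graded truth-value in [0, 1] for 'reality is well-modeled by X':
    1.0 for a perfect fit, falling toward 0 as the fit degrades.
    The exp-of-negative-error form is purely illustrative."""
    mse = sum((o - p) ** 2 for o, p in zip(observations, predictions)) / len(observations)
    return math.exp(-mse)

# A decent-but-imperfect model is neither entirely true nor entirely false.
print(degree_well_modeled([1.0, 2.1, 2.9], [1.0, 2.0, 3.0]))  # ~0.99
```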
Separately, Solomonoff programs are non-central examples of models because they do not come with structural correspondences to reality attached (except via their inputs and outputs). Most models have some mapping that allows you to point at their internal components and infer some features of reality from them.
I notice as I write this that there’s some tension in my position: I’m saying we shouldn’t apply propositions to reality, but also the mappings I mentioned above allow us to formulate propositions like “the value of X in reality is approximately the value of this variable in my model”.
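To show the kind of mapping I have in mind, here's a toy sketch (the names and numbers are made up for illustration):

```python
# A model with named internal variables, plus a correspondence map saying
# which real-world quantity each variable is meant to track. The map is
# what lets us state propositions like "the value of X in reality is
# approximately the value of this variable in my model".

model_variables = {"temp_c": 21.4, "humidity": 0.52}
correspondence = {"temp_c": "office air temperature (Celsius)",
                  "humidity": "office relative humidity"}

def approximately_true(model_value: float, measured_value: float,
                       tolerance: float) -> bool:
    """Proposition: 'the real value is approximately the model's value'."""
    return abs(model_value - measured_value) <= tolerance

# Suppose we measure the office at 21.9 C:
print(correspondence["temp_c"], "~", model_variables["temp_c"])
print(approximately_true(model_variables["temp_c"], 21.9, tolerance=1.0))  # True
```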
So maybe I’m actually arguing for a middle ground between two extremes (see the toy sketch after this list):
1. The basic units of epistemology should all map precisely to claims about reality, and should be arbitrarily combinable and composable (the propositional view)
2. The basic units of epistemology should only map to claims about reality in terms of observable predictions, and not be combinable or composable at all (the Solomonoff view)
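To make the contrast concrete, here's a toy sketch in code (entirely my own framing, and deliberately oversimplified):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Proposition:
    """Propositional view: a basic unit maps to a precise claim about
    reality and composes freely with other units."""
    claim: str
    def and_(self, other: "Proposition") -> "Proposition":
        return Proposition(f"({self.claim}) and ({other.claim})")

@dataclass
class PredictiveModel:
    """Solomonoff-style view: a basic unit only exposes predictions over
    observations; there is no built-in operation for combining two such
    units into a third."""
    predict_next: Callable[[List[int]], int]

# Propositional units compose:
p = Proposition("it will rain tomorrow").and_(Proposition("the street will be wet"))
print(p.claim)

# Predictive units just predict:
m = PredictiveModel(predict_next=lambda history: history[-1] if history else 0)
print(m.predict_next([0, 1, 1]))  # 1
```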
This spectrum isn’t fully well-defined even in my head, but it seems like an interesting way to view things, which I’ll think more about.
I agree that Solomonoff’s epistemology is non-central in the way you describe, but I don’t think it impacts my points very much; replace Solomonoff with whatever epistemic theory you like. It was just a convenient example.
(Although I’d expect defenders of Solomonoff to hold that the program bits are meaningful, and I somewhat agree. It’s just that the theory doesn’t address that meaning, instead treating programs more like black-box predictors.)
In my view, meaning is the property of being optimized to adhere to some map-territory relationship. However, this optimization itself must always occur within some model (the model is what provides the map-territory relationship to optimize for). In the context of Solomonoff Induction, such meaning may emerge from the incentive to predict, but it is not easy to reason about.
In some sense, reality isn’t made of bits, propositions, or any such thing; it is of unknowable type. However, we always describe it in terms of some type (a language).
I’m no longer sure where the disagreement lies, if any, but I still feel like the original post overstates things.