Explicitly assuming realism and reductionism. I think.
A meaningful statement is one that claims the “actual reality” lies within a particular well-defined subset of possible worlds, where each possible world is a complete and concrete specification of everything in that universe, to the highest available precision, at the lowest possible level of description, in that universe’s own ontology.
Of course, superhuge uncomputable subsets of possible worlds are not practically useful, so we compress by talking about concepts (like “white”, “snow”, “5”, “post-utopian”), among other things. Unfortunately, once we get into Turing-complete compression, we can construct programs (concepts) that do all sorts of stupid stuff (like not halting). Concepts also need to be portable between ontologies. This might sink this whole idea.
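To make the worry concrete, here’s a minimal toy sketch (the world representation and all names are my own illustration, not anything from the above): if a concept is just an arbitrary program that takes a fully specified world and says whether it matches, nothing stops us from writing a “concept” that never halts, and so never picks out any well-defined subset at all.

```python
# Toy model: a "world" is just a frozenset of labels, and a "concept" is any
# program (predicate) over worlds. Names are illustrative only.

def white(world) -> bool:
    # A well-behaved concept: halts, so it carves out a definite subset of worlds.
    return "white" in world

def meaningless(world) -> bool:
    # A Turing-complete "concept" gone wrong: never halts, so it fails to
    # pick out any well-defined subset of possible worlds.
    while True:
        pass

print(white(frozenset({"snow", "white"})))  # True
# meaningless(...) would loop forever -- the non-halting failure mode.
```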
For example, “snow is white” says the One True Reality is within the (unimaginably huge) subset of possible worlds where the substructures that the “snow” concept matches are also matched by the “white” concept.
For example, “2 + 2 = 5” refers to the subset of possible worlds where the concept generated by applying the higher-order concept “+” to “2” and “2” will match everything matched by “5”. (I unpacked “=” to “concepts match the same things”, but you don’t have to.) There’s something really neat about these abstract concepts, but they don’t seem fundamentally different from other ones.
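Both examples fit a slightly richer version of the toy above (again purely illustrative: a world is a set of objects, each object a frozenset of labels, and concepts match substructures rather than whole worlds). A statement then names a subset of worlds by relating what two concepts match.

```python
# Richer toy (illustrative only): concepts match substructures (objects) of a world.

def matches(concept, world):
    """The substructures of `world` that the concept picks out."""
    return {obj for obj in world if concept(obj)}

snow  = lambda obj: "snow" in obj
white = lambda obj: "white" in obj

def snow_is_white(world) -> bool:
    # "Snow is white" holds in a world iff everything matched by "snow"
    # is also matched by "white".
    return matches(snow, world) <= matches(white, world)

world_a = {frozenset({"snow", "white"}), frozenset({"coal", "black"})}
world_b = {frozenset({"snow", "yellow"})}
assert snow_is_white(world_a) and not snow_is_white(world_b)

# "2 + 2 = 5" in the same spirit: numeral concepts match collections by size,
# "+" is a higher-order concept combining two numeral concepts, and "=" is
# unpacked as "the two concepts match the same substructures".
numeral = lambda n: (lambda obj: len(obj) == n)

def plus(a, b):
    # Matches a collection that splits into a part matched by `a`
    # and a remainder matched by `b`.
    def combined(obj):
        items = list(obj)
        return any(a(frozenset(items[:k])) and b(frozenset(items[k:]))
                   for k in range(len(items) + 1))
    return combined

def two_plus_two_is_five(world) -> bool:
    return matches(plus(numeral(2), numeral(2)), world) == matches(numeral(5), world)

# In a world containing a five-element collection this comes out False, so the
# statement (roughly) picks out the worlds with no five-element collections.
world_c = {frozenset({"p1", "p2", "p3", "p4", "p5"})}
assert not two_plus_two_is_five(world_c)
```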
TL;DR: So the rule is “your beliefs should be specified by a probability distribution over exact possible worlds”, and I don’t know of a compression language for possible-world subsets that can’t express meaningless concepts (and it probably isn’t worth it to look for one).
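One last illustrative sketch of that rule (same caveats: toy representation, made-up numbers): credence in a statement is just the total probability mass of the worlds in the subset it picks out.

```python
# Toy version of the rule: beliefs are a probability distribution over exact
# possible worlds; credence in a statement is the mass of the worlds where it holds.

worlds = {"w_white": 0.5, "w_yellow": 0.25, "w_no_snow": 0.25}  # made-up numbers

def credence(statement, distribution) -> float:
    """P(statement) = sum of P(world) over the worlds the statement picks out."""
    return sum(p for world, p in distribution.items() if statement(world))

# Stand-in for the real "snow is white" check from the earlier sketch;
# here it holds in w_white and (vacuously) in w_no_snow.
snow_is_white = lambda w: w in {"w_white", "w_no_snow"}

print(credence(snow_is_white, worlds))  # 0.75
```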