Having thought about lots of real-world use-cases for abstraction over the past few months, I don’t think it’s mainly used for logical-uncertainty-style things. It’s used mainly for statistical-mechanics-style things: situations where the available data is only a high-level summary of a system with many low-level degrees of freedom. A few examples:
The word “tree” is a very high-level description; if I know “there’s a tree over there”, then that’s high-level data on a system with tons of remaining degrees of freedom even at the macroscopic level (e.g. type of tree, size, orientation of branches, etc).
A street map of New York City gives high-level data on a system with many degrees of freedom not captured by the map (e.g. lane markings, potholes, street widths, number of lanes, signs, sidewalks, etc).
When I run a given Python script, what actually happens at the transistor level on my particular machine? There may be people out there for whom this is a logical uncertainty question, but I am not one of them; I do not know the details of x86 architecture. All I know is a high-level summary of its behavior.
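A toy sketch of the kind of situation I have in mind (purely illustrative; the “temperature” reading and the numbers are made up): a huge number of low-level degrees of freedom get compressed into one summary statistic, and that summary alone answers a whole class of high-level queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-level state: velocities of a million "particles".
velocities = rng.normal(loc=0.0, scale=2.0, size=1_000_000)

# High-level summary: keep only the mean squared velocity
# (a stand-in for temperature) and discard everything else.
temperature = np.mean(velocities ** 2)

# A high-level query ("typical kinetic energy per particle?") is answered
# by the summary alone, with no reference to the individual velocities.
mean_kinetic_energy = 0.5 * temperature
print(f"temperature ~ {temperature:.3f}, mean kinetic energy ~ {mean_kinetic_energy:.3f}")
```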
Good points. But I think you can get a little logical uncertainty even with just a little bit of the necessary property.
That property being: throwing away more information than is logically necessary, like modeling humans with an agent model that you know is contradicted by some low-level information.
(From a Shannon perspective, calling this “throwing away information” is weird, since the agent model might produce a sharper probability distribution than the optimal model. But it makes sense to me from a Solomonoff perspective, where you imagine the true sequence as “model + diff”: the diff is something like an imaginary program that fills in for the model and corrects its mistakes. Models that throw away more information will have a longer diff.)
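Roughly, in minimum-description-length terms (a loose gloss, not a precise statement):

$$L(\text{true sequence}) \;\approx\; L(\text{model}) + L(\text{diff})$$

where $L(\cdot)$ is code length. An agent model that discards more of the low-level information leaves more for the diff to correct, so the second term grows even if the model itself stays short.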
I guess what I ultimately mean is that logical uncertainty has some benefits, e.g. for counterfactual reasoning and planning, and using an abstract model automatically gets those benefits. If you never knew the contradictory low-level information in the first place, as in your examples, then we just call this “statistical-mechanics-style things.” If you knew the low-level information but threw it away, you could call it logical uncertainty. But it’s still the same model with the same benefits.
The way I think about it, it’s not that we’re using an agent model which is contradicted by some low-level information; it’s that we’re using an agent model which is only valid for some queries. Every abstraction comes with a set of queries over which the high-level model’s predictions match the low-level model’s predictions. Independence of the low-level variables corresponding to high-level components is the main way we track which queries are valid: valid queries are those which don’t ask about variables “close together”.
So at least the way I’m thinking about it, there’s never any contradictory information to throw away in the first place.
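Here’s a rough toy version of “valid for some queries” (a deliberately oversimplified stand-in, not the real framework): the low-level model is a chain of correlated Gaussians, and the high-level “abstraction” just treats every variable as an independent standard normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-level model: a chain X_0, ..., X_99 with short-range correlations.
n_vars, n_samples, rho = 100, 50_000, 0.9
X = np.zeros((n_samples, n_vars))
X[:, 0] = rng.normal(size=n_samples)
for i in range(1, n_vars):
    X[:, i] = rho * X[:, i - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n_samples)

# High-level model: every variable is an independent standard normal,
# so it predicts zero covariance between any two distinct variables.
def high_level_cov(i, j):
    return 0.0

# Low-level prediction, estimated from samples.
def low_level_cov(i, j):
    return np.cov(X[:, i], X[:, j])[0, 1]

# Query about far-apart variables: the two models (approximately) agree.
print("Cov(X_0, X_60):", high_level_cov(0, 60), "vs", round(low_level_cov(0, 60), 3))

# Query about adjacent variables: the models disagree, so this query
# falls outside the abstraction's valid set.
print("Cov(X_0, X_1): ", high_level_cov(0, 1), "vs", round(low_level_cov(0, 1), 3))
```

The high-level model’s predictions match the low-level ones for queries about far-apart pairs and fail for nearby pairs; that’s the sense in which only some queries are valid, with no contradictory low-level information ever being thrown away.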