I think a wave would be a good test in a lot of ways, but by being such a clean example it might miss some possible pitfalls. The big one is, I think, the underdetermination of pointing at a flower—a flower petal is also an object even by human standards, so how could the program know you’re not pointing at a flower petal? Even more perversely, humans can refer to things like “this cubic meter of air.”
In some sense, I’m of the opinion that solving this means solving the frame problem—part of what makes a flower a flower isn’t merely its material properties, but what sorts of actions humans can take in the environment, what humans care about, how our language and culture shape how we chunk up the world into labels, and what sort of objects we typically communicate about by pointing versus other signals.
Those examples bring up some good points that didn’t make it into the OP:
Objects are often composed of subobjects—a flower petal is an object in its own right. Assuming the general approach of the OP holds, we’d expect a boundary which follows around just the petal to also be a local minimum of required summary data. Of course, that boundary would not match the initial conditions of the problem in the OP—that’s why we need the boundary at time zero to cover the flower, not just one petal.
On the other hand, the north half of the flower is something we can point to semantically (by saying “north half of the flower”), but isn’t an object in its own right—it’s not defined by a locally-minimal boundary. “This cubic meter of air” is the same sort of thing. In both cases, note that there isn’t really a well-defined object which sticks around over time—if I point to “this cubic meter of air”, and then ask where that cubic meter of air is five minutes later, there’s not a natural answer. Could be the same cubic meter (in some reference frame), the same air molecules, the cubic meter displaced along local wind currents, …
Regarding the frame problem: there are many locally-minimal abstract-object-boundaries out there. Humans tend to switch which abstractions they use based on the problem at hand—e.g. thinking of a flower as a flower, or as petals + leaves + stem, or as a bunch of cells, or… That said, the choice is still very highly constrained: if you just draw a box around a random cubic meter of air, then there is no useful sense in which that object sticks around over time. It’s not just biological hard-wiring that makes different human cultures recognize mostly-similar things as objects—no culture has a word for the north half of a flower or a particular cubic meter of air. The cases where different cultures do recognize entirely different “objects” are interesting largely because they are unusual.
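A minimal toy sketch of the "highly constrained" point, under invented assumptions (the dependency graph, the node names, and the edge-cut proxy for "summary data through the boundary" are all made up here for illustration, not anything from the OP): if we score a candidate boundary by how many interaction edges it cuts, a flower-like cluster cuts far fewer edges than an arbitrary chunk like a box of air.

```python
# Hypothetical illustration: model a tiny world as a graph of interactions.
# All nodes and edges below are invented for this sketch.
edges = {
    ("petal", "stem"), ("leaf", "stem"), ("stem", "root"),
    ("root", "soil"), ("petal", "leaf"),
    ("air_1", "air_2"), ("air_2", "air_3"), ("air_1", "soil"),
    ("air_3", "petal"),
}

def cut_size(inside):
    """Edges crossing the boundary: a crude proxy for the information
    that must pass through it."""
    return sum((a in inside) != (b in inside) for a, b in edges)

flower = {"petal", "leaf", "stem", "root"}   # a "natural" cluster
random_box = {"air_2", "leaf"}               # arbitrary chunk, like a box of air
```

Here `cut_size(flower)` comes out well below `cut_size(random_box)`: the arbitrary region slices through many interactions, so tracking it would require summarizing far more data at the boundary.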
(We could imagine a culture with a word for the north half of a flower, and we could guess why such a word might exist: maybe the north half of the flower gets more sun unless it’s in the shade, so that half of the flower in particular contains relevant information about the rest of the world. We can immediately see that the approach of the OP applies directly here: the north half of the flower specifically contains information about far-away things. The subjectivity is in picking which “far-away” things we’re interested in.)
Point is: I do not think that the set of possible objects is subjective in the sense of allowing any arbitrary boundary at all. However, which objects we reason about for any particular problem does vary, depending on what “far-away” things we’re interested in.
A key practical upshot is that, since the set of sensible objects doesn’t include things like a random chunk of space, we should be able to write code which recognizes the same objects as humans without having to point to those objects perfectly. A pointer (e.g. the initial boundary in the OP) can be “good enough”, and can be refined by looking for the closest local minimum.
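A sketch of what that refinement might look like, under toy assumptions: I use the perimeter of a grid region as a stand-in for "summary data at the boundary" (a real implementation would measure information flow), and greedily toggle single cells until no toggle lowers the cost. The point is only that a sloppy pointer region descends to the nearest locally-minimal boundary.

```python
def perimeter(region):
    """Toy proxy for 'summary data at the boundary': count the boundary
    edges of a set of (x, y) grid cells."""
    return sum(
        (nx, ny) not in region
        for (x, y) in region
        for (nx, ny) in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
    )

def refine(pointer, candidates, cost):
    """Greedy descent from an imperfect pointer region to the nearest
    local minimum of `cost`: toggle one cell at a time while that
    strictly lowers the cost."""
    region = set(pointer)
    improved = True
    while improved:
        improved = False
        for cell in candidates:
            trial = region ^ {cell}  # add or remove a single cell
            if trial and cost(trial) < cost(region):
                region, improved = trial, True
    return region
```

For example, a "good enough" pointer at a 3x3 block that misses one interior cell and includes one stray faraway cell gets refined to the clean 3x3 block: filling the hole and dropping the stray cell each strictly reduce the perimeter, while every other single-cell change increases it or leaves it unchanged.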
So I’m betting, before really thinking about it, that I can find something as microphysically absurd as “the north side of the flower.” How about “the mainland,” where humans draw the boundary using a weird ontology that makes no sense from a non-human-centric perspective? Or parts based on analogy or human-centric function, like being able to talk about “the seat” of a chair that is just one piece of plastic.
On the Type 2 error side, there are also lots of local minima of “information passing through the boundary” that humans wouldn’t recognize. Like “the flower except for cell #13749788206.” Often, the boundary a human draws is a fuzzy fiction that only needs to get filled in as one looks more closely—maybe we want to include that cell if it goes on to replicate, but are fine with excluding it if it will die soon. But humans don’t think about this as a black box with Laplace’s Demon inside; they think of it as using future information to fill in the fuzzy boundary when they look more closely.
I don’t think “the mainland” works as an example of human-centric-ontology (pretty sure the OP approach would consider that an object), but “seat of a chair” might, especially for chairs all made of one piece of plastic/metal. At the very least, it is clear that we can point to things which are not “natural” objects in the OP’s sense (e.g. a particular cubic meter of air), but then the question is: how do we define that object over time? In the chair example, my (not-yet-fully-thought-out) answer is that the chair is clearly a natural object, and we’re able to track the “seat” over time mainly because it’s defined relative to the chair. If the chair dramatically changes its form-factor, for instance, then there may no longer be a natural-to-a-human answer to the question “which part of this object is the seat?” (and if there is a natural answer, then it’s probably because the seat was a natural object to begin with, for instance maybe it’s a separate piece which can detach).
I do agree that there are tons of “objects” recognized by this method which are not recognized by humans—for instance, objects like cells, which we now recognize but once didn’t. But I think a general pattern is that, once we point to such an example, we think “yeah, that’s weird, but it’s definitely a well-defined object—e.g. I can keep track of it over time”. The flower-minus-one-cell is a good example of this: it’s not something a human would normally think of, but once you point to it, a human would recognize this as a well-defined thing and be able to keep track of it over time. If you draw a boundary around a flower and one cell within that flower, then ask me to identify the flower-minus-a-cell some time later, that’s a well-defined task which I (as a human) intuitively understand how to do.
I also agree that humans use different boundaries for different tasks and often switch to other boundaries on the fly. In particular, I totally agree that there’s some laziness in figuring out where the boundaries go. This does not imply that object-notions are actually fuzzy, though—our objects can have sharply-defined referents even if we don’t have full information about a referent, or if we switch between referents quite often. That’s what I think is mostly going on. E.g. in your cell-which-may-or-may-not-replicate example, there is a sharp boundary; we just don’t yet have the information to determine where that boundary is.