To try to understand a bit better: does your pessimism about this come from the hardness of the technical challenge of querying a zillion-particle entity for its objective function? Or does it come from the hardness of the definitional challenge of exhaustively labeling every one of those zillion particles to make sure the demarcation is fully specified? Or is there a reason you think constructing any such demarcation is impossible even in principle? Or something else?
Probably something like the last one, although I think “even in principle” is doing something suspicious in that statement. Like, sure, “in principle” you can construct pretty much any demarcation you could possibly imagine, including the Cartesian one, but what I’m trying to say is something like, “all demarcations, by their very nature, exist only in the map, not the territory.” Carving up reality is an operation that only makes sense within the context of a map; reality itself simply is. Your concept of “agent” is defined in terms of other representations that similarly exist only within your world-model; other humans have a similar concept of “agent” because they have a similar representation built from correspondingly similar parts. If an AI is to understand the human notion of “agency,” it will need to also understand plenty of other “things” which are likewise only abstractions or latent variables within our world models, as well as what those variables “point to” (at least, what variables in the AI’s own world model they “point to” — by now I hope you’re seeing the problem with trying to talk about “things they point to” in external/“objective” reality!).
I’m with you on this, and I suspect we’d agree on most questions of fact around this topic. Of course demarcation is an operation on maps and not on territories.
But as a practical matter, the moment one starts talking about the definition of something such as a mesa-objective, one has already unfolded one’s map and started pointing to features on it. And frankly, that seems fine! Because historically, a great way to make forward progress on a conceptual question has been to work out a sequence of maps that give you successive degrees of approximation to the territory.
I’m not suggesting actually trying to imbue an AI with such concepts — that would be dangerous (for the reasons you alluded to) even if it weren’t pointless (because prosaic systems will just learn the representations they need anyway). All I’m saying is that the moment we started playing the game of definitions, we’d already started playing the game of maps. So using an arbitrary demarcation to construct our definitions might be bad for any number of legitimate reasons, but it can’t be bad merely because it caused us to start using maps: our earlier decision to talk about definitions already did that.
(I’m not 100% sure if I’ve interpreted your objection correctly, so please let me know if I haven’t.)