If you concede that you need some kind of “multi-level” model of the world to capture human beliefs about their environment, and in particular if you think that this is necessary for value learning, it seems like you must agree that the game doesn’t stop there. Much human knowledge can’t be simply described as facts about the world at any level of coarse-graining, at least not in any stronger sense than facts about my dog are facts about the underlying physical data.
That is, facts about my dog can definitely be cashed out as logical facts about the relationship between the underlying physical data + the laws of physics. But they are definitely not explicitly represented as such or conveniently understood as such.
It may be that coarse-graining is literally the only way that complex beliefs of this kind work. I would find that surprising in the extreme.
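(As an illustration of the "pure coarse-graining" picture being questioned here, consider the toy sketch below. The micro-state of particle positions, the cell size, and the "crowded" predicate are all invented for this example; the point is only that in this picture every high-level fact is, by construction, a function of the low-level state.)

```python
import numpy as np

def coarse_grain(positions: np.ndarray, cell_size: float) -> dict:
    """Map a low-level state (particle positions) to high-level facts
    (per-cell particle counts). Every high-level fact here is a pure
    function of the micro-state."""
    cells = np.floor(positions / cell_size).astype(int)
    counts: dict = {}
    for cell in map(tuple, cells):
        counts[cell] = counts.get(cell, 0) + 1
    return counts

def is_crowded(positions: np.ndarray, cell_size: float, threshold: int) -> bool:
    """A 'high-level fact' that is nothing over and above the micro-state."""
    return any(n >= threshold for n in coarse_grain(positions, cell_size).values())

micro_state = np.random.rand(100, 2)   # 100 particles in the unit square (illustrative only)
print(is_crowded(micro_state, cell_size=0.25, threshold=10))
```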
Is anyone defending a position like this, or is the view more something like “well, we know that this is at least one thing that humans do, so we will either (a) address this and then address the next thing and so on, or (b) learn something important about the representation of beliefs/etc. in the course of understanding multi-level models”? Or something very different?
It seems to me like the game probably doesn’t stop anywhere sane, so the only option is really for it to stop immediately (probably before you even assume the human is making non-trivial ontological assumptions).
Yeah, I think just having coarse-grained facts would not be enough. I’m referring to a more general idea when I say “multi-level models”: something that can represent concepts at different levels of abstraction, probably without the high-level facts being a function of the low-level facts. My goal would be to have at least one concrete example of a multi-level model that, say, preserves the “diamond” concept as it learns new physics. I think (a) and (b) are both reasons why I want to do this: I want to know whether it’s possible to create an AI with goals related to concrete physical things (which requires something like multi-level models, but maybe not much more?), and I also want a better understanding of more abstract concepts, to see whether it’s possible to have an AI do anything useful with them.
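(As a rough, hypothetical sketch of what such a multi-level model might look like: the class, the bridge map, and the two toy ontologies below are invented for illustration, not a worked-out proposal. The idea is to keep the high-level “diamond” concept fixed and re-fit only the correspondence to the low-level ontology when the physics model changes.)

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MultiLevelModel:
    physics: Any                         # current low-level world model
    bridge: Callable[[Any, Any], float]  # (physics, low-level state) -> P(state contains a diamond)

    def diamond_probability(self, low_level_state: Any) -> float:
        """High-level query, answered through the current bridge map."""
        return self.bridge(self.physics, low_level_state)

    def update_physics(self, new_physics: Any,
                       refit_bridge: Callable[[Any], Callable[[Any, Any], float]]) -> "MultiLevelModel":
        """Adopt a new low-level ontology while keeping the high-level
        'diamond' concept fixed; only the bridge map is re-fit."""
        return MultiLevelModel(physics=new_physics, bridge=refit_bridge(new_physics))


def bridge_for(physics: Any) -> Callable[[Any, Any], float]:
    """Toy 'ontology identification': how the diamond concept attaches to
    each low-level ontology (both ontologies here are stand-ins)."""
    if physics == "atomic lattice model":
        return lambda phys, state: 1.0 if state.get("carbon_lattice") == "tetrahedral" else 0.0
    return lambda phys, state: 1.0 if state.get("field_configuration") == "diamond-like" else 0.0


model = MultiLevelModel("atomic lattice model", bridge_for("atomic lattice model"))
print(model.diamond_probability({"carbon_lattice": "tetrahedral"}))        # 1.0

# The agent "learns new physics": the ontology changes, the concept does not.
model = model.update_physics("quantum field model", bridge_for)
print(model.diamond_probability({"field_configuration": "diamond-like"}))  # 1.0
```

The only point of the sketch is that the concept persists across the ontology change because it is never defined as a fixed function of any particular low-level state space.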
Could our disagreement be stated as: I think it is plausible that, with a few years of work, a small number of researchers could make useful models for things like diamond-maximization; whereas you don’t think it is plausible?