For example, when we try to teach morality to an AI, we might think of trying to teach it the mapping humans use from situations to value. But this approach runs into problems, and I think part of the problem is that it treats something that's an object-in-our-map (the mapping) as an object-in-reality.
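To make that concrete, here's a toy sketch (hypothetical data, and sklearn/numpy only for convenience, not anyone's actual proposal) of what "learn the mapping from situations to value" might look like in practice. The point is that the fitted model is an object in our map, a compressed summary of human judgments, not the thing those judgments were tracking.

```python
# A minimal, purely illustrative sketch of the "learn the mapping from
# situations to value" approach: fit a model to human value judgments and
# then treat the fitted model as if it were the values themselves.
# All situations and labels below are hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical situations encoded as feature vectors, with human-assigned
# value scores. In a real system these would come from preference data.
situations = np.array([
    [1.0, 0.0, 0.3],   # e.g. "help a stranger"
    [0.0, 1.0, 0.9],   # e.g. "break a promise for gain"
    [0.5, 0.5, 0.1],   # e.g. "white lie to spare feelings"
])
human_value_labels = np.array([0.8, -0.7, 0.1])

# The learned object lives on the map side: it summarizes the labels,
# it is not the target the labels were (imperfectly) pointing at.
value_model = Ridge(alpha=1.0).fit(situations, human_value_labels)

new_situation = np.array([[0.9, 0.1, 0.2]])
print(value_model.predict(new_situation))
```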
At the risk of tooting my own horn too much, this is a big part of what I'm getting at in my recent work on noematological AI alignment: you can't align over shared, "objective" values because they don't exist.
A question then for both of you – isn't the object in this case exactly one that exists in both *reality* and our map of reality? It's not obvious to me that something like this *isn't* objective and even potentially knowable. It's information, so it must be stored somewhere in some kind of physical 'medium', and the better it works as a component of our map, the more likely it is that it corresponds to some thing-in-reality.
Interestingly, it just occurred to me that stuff like this – 'information stuff' – is exactly the kind of thing that, to the degree it's helpful in a 'map', we should expect to find more or less as-is in the world itself.
If there’s a tree in both the territory and my map, that’s just the usual state of affairs. But when I talk about the tree, you don’t need to look at my map to know what I mean, you can just look at the tree. Morality is different—we intuitively think of morality as something other people can see, but this works because there is a common factor (our common ancestry), not because you can actually see the morality I’m talking about.
We could theoretically cash out statements about morality in terms of complicated evaluations of ideas and percepts, but this can’t save the intuitions about e.g. what arguments about morality are doing. Unlike the case of a tree and our idea of the tree, I think there really is a mismatch between our computation of morality and our idea of it.
The interesting thing about information is that it's not stuff the way matter is; it's something created via experience, and it only exists so long as there is physical stuff interacting to create it via energy transfer. And this is the key to addressing your question: the map (ontology) is only information, while the territory (the ontic) is stuff and its experiences. It is only across the gap of intentionality that ontology is made to correspond to the ontic.
That's kind of cryptic, but maybe I do a better job of laying out what's going on here.