Sounds all reasonable but I’m not entirely clear what you are driving at.
I’d like to pursue a tangent:
One might imagine that if we modeled the world with a big pile of autoencoders, this pile would already contain predictors for many concepts we might want to specify. But if we tried to communicate, by examples, a concept that was not already learned, the pile might not even contain the features that would make that concept easy to specify.
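A minimal sketch of this intuition, under toy assumptions: observations are generated from a few latent factors, a linear "autoencoder" (implemented here via SVD rather than a trained network) learns a latent code, and a linear probe then tries to specify a concept from that code. A concept aligned with the learned factors is easy to pick out from examples; a concept depending on information absent from the model's features is not. All names and data here are illustrative, not from the original discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy world: observations are noisy linear mixtures of a few latent factors.
n, d, k = 500, 20, 4
factors = rng.normal(size=(n, k))            # true generative factors
mixing = rng.normal(size=(k, d))
x = factors @ mixing + 0.1 * rng.normal(size=(n, d))

# Stand-in for an autoencoder's encoder: project onto the top-k directions.
x_centered = x - x.mean(axis=0)
_, _, vt = np.linalg.svd(x_centered, full_matrices=False)
z = x_centered @ vt[:k].T                    # learned latent codes (n x k)

def probe_accuracy(labels):
    """Fit a linear probe on the latent codes and report training accuracy."""
    design = np.c_[z, np.ones(n)]
    w, *_ = np.linalg.lstsq(design, labels.astype(float), rcond=None)
    return ((design @ w > 0.5) == labels).mean()

# A concept expressible in the learned features: a halfspace of factor 0.
acc_easy = probe_accuracy(factors[:, 0] > 0)

# A concept the model never learned: labels independent of the observations.
acc_hard = probe_accuracy(rng.normal(size=n) > 0)

print(acc_easy, acc_hard)  # near-perfect vs. roughly chance level
```

The point of the contrast: both probes see the same examples, but only the concept whose defining feature already lives in the latent space can be specified from them.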
This reminds me of a recent discussion about whether future AIs might communicate better than humans because they could exchange the meaning of intermediate layers of their deep learning architectures, whereas we can communicate only the terminal symbols. This circled in my mind when I saw a children’s picture book (one where simple, clear pictures let parents name objects) and I thought: we can name not only the terminal symbols in our ‘deep learning architecture’ but also many intermediate ‘facets’ of ‘objects’. I don’t mean sub-objects like ‘leg’ or ‘surface’, but properties like ‘yellow’, ‘round’, or ‘smooth’, or even vaguer features like ‘beautiful’.

I think we are basically able to name all those intermediate features that can be communicated at all. Sometimes there is no word, usually because there is no need for one, or because the experience is rare and hard to share, like certain trance states. But even in these cases we can imagine that it is possible in principle to communicate the aspects/facets of our perception.