But it seems pretty plausible that a major reason humans arrive at these ‘objective’ 3rd-person world-models is that humans have a strong incentive to think about the world in ways that make communication possible.
This is an interesting point which I had not thought about before; thank you. Insofar as I have a response already, it’s basically the same as this thread: it seems like understanding interoperable concepts falls upstream of understanding non-interoperable concepts on the tech tree, and there’s also a nontrivial probability that non-interoperable concepts just aren’t used much even by Solomonoff inductors (in a realistic environment).
Ah, don’t get me wrong: I agree that understanding interoperability is the thing to focus on. Indeed, I think perhaps “understanding” itself has something to do with interoperability.
The difference, I think, is that in my view the whole game of interoperability has to do with translating between 1st-person and 3rd-person perspectives.
Your magic box takes utterances and turns them into interoperable mental content.
My magic box takes non-interoperable-by-default[1] mental content and turns it into interoperable utterances.
The language is the interoperable thing. The nature of the interoperable thing is that it has been optimized to translate easily between many not-so-easily-interoperable (1st-person, subjective, idiosyncratic) perspectives.
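To make the directional contrast concrete, here is a minimal toy sketch in Python (my own illustration, not anything proposed in this thread; every name and type here is a hypothetical stand-in). The only point is the type signatures: in one framing interoperability lands inside the head, in the other it lives in the shared language.

```python
# Toy sketch only. "MentalContent" stands in for an agent's idiosyncratic,
# 1st-person representation; "Utterance" for a piece of shared language.
from typing import Dict

MentalContent = Dict[str, float]  # e.g. private feature weights
Utterance = str                   # the shared, interoperable artifact

def utterance_to_content(utterance: Utterance) -> MentalContent:
    """Framing 1: shared language in, interoperable mental content out."""
    # Toy stand-in: each agent decodes the public word into its own features.
    return {f"my_feature_for::{utterance}": 1.0}

def content_to_utterance(content: MentalContent) -> Utterance:
    """Framing 2: idiosyncratic mental content in, interoperable utterance
    out; the optimization pressure lives in the shared vocabulary."""
    # Toy stand-in: project private features onto the shared vocabulary.
    strongest = max(content, key=content.get)
    return strongest.split("::")[-1]
```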
[1] “Default” is the wrong concept here, since we are raised from little babies to be highly interoperable, and would die without society. What I mean is something like: it is relatively easy to spell out non-interoperable theories of learning / mental content, e.g. Solomonoff’s theory, or neural nets.