Notice that in our picture so far, the output of Alice’s semantics-box consists of values of some random variables in Alice’s model, and the output of Bob’s semantics-box consists of values of some random variables in Bob’s model. With that picture in mind, it’s unclear what it would even mean for Alice and Bob to “agree” on the semantics of sentences. For instance, imagine that Alice and Bob are both Solomonoff inductors with a special module for natural language. They both find some shortest program to model the world, but the programs they find may not be exactly identical; maybe Alice and Bob are running slightly different Turing machines, so their shortest programs have somewhat different functions and variables internally. Their semantics-boxes then output values of variables in those programs. If those are totally different programs, what does it even mean for Alice and Bob to “agree” on the values of variables in these totally different programs?
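One way to make this concrete (my gloss, not something in the passage above): the invariance theorem for Kolmogorov complexity says that for any two universal machines $U$ and $V$,

$$\left| K_U(x) - K_V(x) \right| \le c_{U,V} \quad \text{for all } x,$$

where $K_U(x)$ is the length of the shortest $U$-program that outputs $x$ and $c_{U,V}$ depends only on the two machines. The lengths of Alice’s and Bob’s shortest programs agree up to a constant, but the programs themselves can be structurally unrelated, so there is no canonical correspondence between the variables inside one and the variables inside the other.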
This importantly understates the problem. (You did say “for instance”—I don’t think you are necessarily ignoring the following point, but I think it is a point worth making.)
Even if Alice and Bob share the same universal prior, Solomonoff induction comes up with agent-centric models of the world, because it is trying to predict perceptions. Alice and Bob may live in the same world, but they will perceive different things. Even if they stay in the same room and look at the same objects, they will see them from different angles.
If we’re lucky, Alice and Bob will both land on two-part representations which (1) model the world from a 3rd-person perspective, and (2) then identify the specific agent whose perceptions are being predicted, providing a ‘phenomenological bridge’ to translate the 3rd-person view of reality into a 1st-person view. Then we’re left with the problem which you mention: Alice and Bob could have slightly different 3rd-person understandings of the universe.
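As a sketch of what such a two-part hypothesis would look like formally (my notation, under the usual Solomonoff setup, not anything from the post): a hypothesis is a pair $(W, B)$, where $W$ simulates the world from a 3rd-person perspective and $B$ is the bridge that reads one particular agent’s perceptions out of $W$’s state, and the pair is weighted by total description length:

$$P(o_{1:t}) \;=\; \sum_{(W,B):\ B(W)\ \text{outputs}\ o_{1:t}} 2^{-\ell(W)-\ell(B)}.$$

Nothing in the prior rewards hypotheses for factoring cleanly into a 3rd-person part and a bridge part; $(W,B)$ is scored only by total length and predictive fit, which is part of why landing on this decomposition is a matter of luck rather than guarantee.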
If we could get there, great. However, I think we imagine Solomonoff induction arriving at such a two-part model largely because we think it is smart, and we think smart people understand the world in terms of physics and other 3rd-person-valid concepts. We think the physicalist/objective conception of the world is true, and therefore, Solomonoff induction will figure out that it is the best way.
Maybe so. But it seems pretty plausible that a major reason why humans arrive at these ‘objective’ 3rd-person world-models is because humans have a strong incentive to think about the world in ways that make communication possible. We come up with 3rd-person descriptions of the words because they are incredibly useful for communicating. Solomonoff induction is not particularly designed to respect this incentive, so it seems plausible that it could arrange its ontology in an entirely 1st-person manner instead.
But it seems pretty plausible that a major reason why humans arrive at these ‘objective’ 3rd-person world-models is because humans have a strong incentive to think about the world in ways that make communication possible.
This is an interesting point which I had not thought about before, thank you. Insofar as I have a response already, it’s basically the same as this thread: it seems like understanding of interoperable concepts falls upstream of understanding non-interoperable concepts on the tech tree, and also there’s nontrivial probability that non-interoperable concepts just aren’t used much even by Solomonoff inductors (in a realistic environment).
Ah, don’t get me wrong: I agree that understanding interoperability is the thing to focus on. Indeed, I think perhaps “understanding” itself has something to do with interoperability.
The difference, I think, is that in my view the whole game of interoperability has to do with translating between 1st person and 3rd person perspectives.
Your magic box takes utterances and turns them into interoperable mental content.
My magic box takes non-interoperable-by-default[1] mental content and turns it into interoperable utterances.
The language is the interoperable thing. The nature of the interoperable thing is that it has been optimized so as to easily translate between many not-so-easily-interoperable (1st person, subjective, idiosyncratic) perspectives.
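A toy model of that “hub” property (entirely my own illustration, with made-up labels): each agent keeps idiosyncratic internal labels, and translation between agents is routed through the shared language rather than through bespoke agent-to-agent maps.

```python
# Idiosyncratic "1st-person" vocabularies: internal label -> shared-language word.
# (All names here are invented for illustration.)
ALICE_TO_LANG = {"obj_17": "cup", "obj_42": "table"}
BOB_TO_LANG = {"thing_a": "cup", "thing_b": "table"}

# Inverse maps: shared-language word -> internal label.
LANG_TO_ALICE = {word: label for label, word in ALICE_TO_LANG.items()}
LANG_TO_BOB = {word: label for label, word in BOB_TO_LANG.items()}

def translate(label, speaker_to_lang, lang_to_listener):
    """Route an idiosyncratic internal label through the shared language."""
    word = speaker_to_lang[label]   # 1st-person content -> interoperable utterance
    return lang_to_listener[word]   # interoperable utterance -> listener's 1st-person content

print(translate("obj_17", ALICE_TO_LANG, LANG_TO_BOB))  # -> "thing_a"
```

With N agents this needs only about 2N maps into and out of the shared code instead of N(N-1) pairwise translators, which is one way to cash out “the language is the thing that has been optimized for interoperability.”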
[1] “Default” is the wrong concept here, since we are raised from little babies to be highly interoperable, and would die without society. What I mean here is something like: it is relatively easy to spell out non-interoperable theories of learning / mental content, e.g. Solomonoff’s theory, or neural nets.