What interests me about the Boltzmann brain (this is a bit of a tangent) is that it sharply poses the question of where the boundary of a subjective state lies. It doesn’t seem that there’s any part X of your mental state that couldn’t be replaced by a mere “impression of X”. E.g. an impression of having been to a party yesterday rather than a memory of the party. Or an impression that one is aware of two differently-coloured patches rather than the patches themselves together with their colours. Or an impression of ‘difference’ rather than an impression of differently coloured patches.
If we imagine “you” to be a circle drawn with magic marker around a bunch of miscellaneous odds and ends (ideas, memories, etc., but perhaps also bits of the ‘outside world’, like the tattoos on the guy in Memento), then there seems to be no limit to how small we can draw the circle, i.e. how much of your mental state can be regarded as ‘external’. But if only the ‘interior’ of the circle needs to be instantiated in order to have a copy of ‘you’, it seems like anything, no matter how random, can be regarded as a “Boltzmann brain”.
If you haven’t decided what the circle should represent.
Besides intuitions, has there been any progress on better understanding the whole agent-environment thing? (There’s an abstract-machines conference meeting in Germany soon that’s holding a workshop on computation in context. It meets every two years. It might be good for DT folk to have something to inspire them with the next time it comes around.)
What agent-environment thing? There doesn’t appear to be a mystery here.
I’m afraid I’ll butcher the explanation and hurt the meme, so I’ll just say it’s mostly about other programs, not your own. For example: how do we differentiate between the preferences of an agent and mere things in its environment once it has gone and interacted with its local environment a lot, e.g. Odysseus and his ship? (These are just cached ideas without any context in my mind; I didn’t generate these concerns myself.)