By describing the abstract structure of that ‘network of causal relationships’ you were talking about?
So you can describe your brain by saying explicitly what it contains, but this is not the shortest possible description in the sense of Kolmogorov complexity.
I believe that the shortest way to describe the contents of your brain—not your brain sitting inside a universe or anything—is to describe the universe (which has lower complexity than your brain, in the sense that it is the output of a shorter program) and then to point to your brain. This has lower complexity than trying to describe your brain directly.
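The 'universe + pointer' claim can be sketched in miniature. This is a hedged toy, not the real argument: zlib stands in very crudely for Kolmogorov complexity, a seeded PRNG stream stands in for the universe, and every name and number below is arbitrary:

```python
import zlib

# Toy 'universe': the output of a short deterministic program.
# (The generator is kept as a source string so we can count its length.)
GEN_SRC = (
    "import random\n"
    "rng = random.Random(42)\n"
    "world = bytes(rng.randrange(256) for _ in range(100000))\n"
)
ns = {}
exec(GEN_SRC, ns)
world = ns["world"]

# Toy 'brain': an arbitrary chunk embedded somewhere in that universe.
offset, size = 31337, 1000
brain = world[offset:offset + size]

# Direct description: the chunk itself barely compresses, since a
# PRNG stream looks essentially random to zlib.
direct_len = len(zlib.compress(brain, 9))

# Indirect description: the short generator program plus a pointer.
indirect_len = len(GEN_SRC) + len(f"{offset}:{size}")

print(direct_len, indirect_len)  # the indirect description is far shorter
```

Of course this toy stacks the deck: here the 'universe' really is the output of a short program and the pointer really is cheap, which is exactly what the Library of Babel worry calls into question.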
I understand what you were trying to do a little better now.
I think that so far you’ve tended to treat this as if it was obvious whereas I’ve treated it as if it was obviously false, but neither of us has given much in the way of justification.
Some things to keep in mind:
Giving a pointer to a position within some ‘branch’ of the multiverse could cost a hell of a lot of information. (Rather like how specifying the location of a book within the Library of Babel offers zero compression compared to just writing out the book.) I understand that, if there are lots of copies, then, ceteris paribus, the length of a pointer should decrease at a rate logarithmic in the number of copies. But it’s not obvious that this reduces the cost to below that of a more ‘direct’ description.
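The Library of Babel arithmetic can be made explicit. This sketch uses Borges' figures (a 25-symbol alphabet; 410 pages × 40 lines × 80 characters = 1,312,000 characters per book), but treat the constants as illustrative:

```python
import math

ALPHABET_SIZE = 25     # Borges' alphabet: 22 letters, comma, period, space
BOOK_LEN = 1_312_000   # 410 pages x 40 lines x 80 characters per book

# Writing a book out symbol by symbol costs log2(25) bits per symbol:
direct_bits = BOOK_LEN * math.log2(ALPHABET_SIZE)

# An index into the complete library must distinguish 25**1312000 books,
# i.e. log2(25**1312000) = 1312000 * log2(25) bits -- exactly the same:
index_bits = BOOK_LEN * math.log2(ALPHABET_SIZE)
assert index_bits == direct_bits  # zero compression

# If the target occurs in N indistinguishable copies, the pointer only
# shrinks by log2(N) bits -- a billion copies buys about 30 bits,
# negligible next to the ~6.1 million bits of the index itself:
savings = math.log2(10**9)
print(round(savings, 1))
```

So the logarithmic shrinkage is real but can be tiny compared to the pointer's base cost; whether the real multiverse behaves more like the Library of Babel or more like a short generator program is the open question.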
There are many possibilities other than ‘direct, naive, literal, explicit description’ and description in terms of ‘our universe + a pointer’. For instance, one could apply some compression algorithm to an explicit description. And here it’s conceivable that the result could be regarded as a description of the mental state sitting inside some larger, simpler ‘universe’, one very different from, and much smaller than, the ‘real world’. Is it really true that all of general relativity and quantum mechanics is implicit in the mental state of an ancient thinker like Socrates? I don’t want to say the answer is ‘obviously not’; in fact I find it extremely difficult to justify either ‘yes’ or ‘no’ here. Some of the difficulty is due to the indeterminacy of the concept of “mental state”. (And what if we replace ‘Socrates’ with ‘a cow’?)
What repertoire of logical and/or physical primitives are permitted when it comes to writing down a pointer? For instance, in a universe containing just a single observer, can we efficiently ‘point to’ the observer by describing it as “the unique observer”? In our own universe, can we shorten a pointer by describing Jones as “the nearest person to X” where X is some easily-describable landmark (e.g. a supermassive black hole)?
I think a pointer that effectively forces you to compute the entire program in order to find the object it references still reduces complexity under the definition being used. Computationally expensive != complex.
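The 'expensive != complex' distinction can be made concrete with a toy sketch (iterated hashing is just an arbitrary stand-in for a computation with no known shortcut):

```python
import hashlib

# A short program: iterate a hash a huge number of times. Its
# *description* is a few lines, but locating "the object at step n"
# forces you to actually run all n steps.
def object_at(n, seed=b"universe"):
    state = seed
    for _ in range(n):
        state = hashlib.sha256(state).digest()
    return state

# Pointing at object_at(10**6) is cheap to write down (this short
# program plus the number 1000000) but expensive to dereference.
target = object_at(10**6)
print(len(target))  # 32 bytes out, after a million hash steps
```

Kolmogorov complexity counts only description length, not the work needed to dereference it, which is what the comment above is getting at. (Resource-bounded variants of the definition, which do charge for computation, would judge this case differently.)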
Sure, it might be reducing complexity, but it might not be. Consider the Library of Babel example, and bear in mind that a brain-state has a ton of extra information over and above the ‘mental state’ it supports. (Though strictly speaking this depends on the notion of ‘mental state’, which is indeterminate.)
Also, we have to ask “reducing complexity relative to what?” (As I said above, there are many possibilities other than “literal description” and “our universe + pointer”.)