You’ll probably have more success losslessly compressing two brains than losslessly compressing one.
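A toy way to see why, assuming the two brains share a lot of structure (the byte strings below are obviously stand-ins, made up for illustration): compressing them together lets the shared redundancy be stored once, so the second brain comes almost for free.

    import zlib

    # Two hypothetical "brains" that overlap almost entirely.
    brain_a = b"memories of a quiet six-year-old summer. " * 200
    brain_b = brain_a + b"plus a few things only the second child saw."

    # Compressed separately vs. concatenated into one archive.
    separate = len(zlib.compress(brain_a, 9)) + len(zlib.compress(brain_b, 9))
    joint = len(zlib.compress(brain_a + brain_b, 9))

    # The joint archive should come out smaller, because zlib only has to
    # describe the shared material once.
    print(separate, joint)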
Still, I don’t think you could compress the content of 1000 brains into one. (And I’m not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)
I argue that my brain right now contains a lossless copy of itself and itself two words ago!
Getting 1000 brains in here would take some creativity, but I’m sure I can figure something out...
But this is all rather facetious. Breaking the quote’s point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them.
Which I can’t.
For now.
I’d argue that your brain doesn’t even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo’s point about the limits of being able to model other people applies just as strongly to modelling oneself. I don’t, and cannot, know all about myself (past, current, or future), and that must have substantial implications about something or other that this lunch hour is too small to contain.
How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done here: for example, it is possible to write a program (a quine) that prints out an exact copy of itself without ever having access to the file that contains it; the proof of Gödel’s incompleteness theorem involves constructing a proposition that talks about itself; and TDT depends on agents being able to reason about their own and other agents’ source code. Are there mathematical limits to this?
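For the self-printing program, one standard construction (a minimal sketch, nothing brain-specific) looks like this in Python; the file is never read, yet the output is the source, comment included:

    # A quine: prints its own source without reading its file.
    s = '# A quine: prints its own source without reading its file.\ns = %r\nprint(s %% s)'
    print(s % s)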
I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.
But our memories discard huge amounts of information all the time. Surely there’s been at least a little degradation in the space of two words, or we’d never forget anything.
Certainly. I am suggesting, though, that over sufficiently short timescales you can deduce the previous structure from the current one. Maybe I should have said “epsilon” instead of “two words”.
Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn’t synchronized with its learning of new things.
So, depending on which two-word interval you pick, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.
Actually, so long as the brain can be treated as operating independently of the outside world (which, for an appropriately small interval of time, makes some amount of sense), a complete description at time t, together with the fixed dynamics, determines a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.
So I’ve made another error: I should have said that my brain contains a lossless copy of itself and itself two words later (where “two words” = “epsilon”).
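A minimal sketch of that point, assuming (purely for illustration) that the brain behaves as a closed, deterministic state machine over the interval: the state at t, plus the fixed update rule, already pins down the state at t + δ, so no extra bits are needed to describe the pair.

    # Toy model: a closed, deterministic "brain" over one short interval.
    def step(state):
        # Hypothetical fixed dynamics; any deterministic function would do.
        return tuple(sorted(state)) + (sum(state) % 7,)

    state_t = (3, 1, 4, 1, 5)
    state_t_plus_delta = step(state_t)

    # Anyone holding state_t and the (shared, fixed) rule can reproduce the
    # later state, so a description of state_t describes both.
    assert step(state_t) == state_t_plus_delta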
See the pigeon-hole argument in the original quote.
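In miniature, the counting version: there are more possible n-bit brain states than there are strictly shorter descriptions, so no lossless scheme can shrink all of them (a 3-bit universe keeps the numbers small).

    from itertools import product

    n = 3
    # Every possible n-bit "brain state".
    states = [''.join(bits) for bits in product('01', repeat=n)]
    # Every strictly shorter bit string, including the empty one.
    shorter = [''.join(bits) for k in range(n) for bits in product('01', repeat=k)]

    # 2**n states but only 2**n - 1 shorter strings: by pigeonhole, any map
    # into the shorter strings must collide, so it cannot be lossless.
    print(len(states), len(shorter))  # 8 7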