I’ve heard this idea before, and it has never seemed convincing. Suppose you managed to record one useful bit per second, 24⁄7, for thirty years. That’s approximately one billion bits. There are approximately 100 billion neurons, each with many synapses. How many polynomials of degree n can fit m points when n > m? Infinitely many.
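For the record, the arithmetic behind that estimate, using the same assumed rates (one bit per second, thirty years, ~100 billion neurons):

```python
# Sanity check of the back-of-envelope numbers above:
# one useful bit per second, around the clock, for thirty years.
bits_recorded = 1 * 60 * 60 * 24 * 365 * 30   # seconds in 30 years, 1 bit each
neurons = 100_000_000_000                      # ~10^11 neurons

print(bits_recorded)             # 946080000 -- about a billion bits
print(neurons // bits_recorded)  # 105 -- each recorded bit must cover ~100 neurons
```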
It’s actually worse than this: even if you record orders of magnitude more data than the brain contains, perhaps by recording speech and video, you might recreate the speech and movement centres of the brain with some degree of accuracy, but not recover other areas that seem more fundamental to your identity, because the information is not evenly distributed.
It’s easy to get into a ‘happy death spiral’ around superintelligence, but even godlike entities cannot do things which are simply impossible.
I suppose it might be worth recording information about yourself on the basis of low cost and a small chance of astronomically large payoff, and regardless it could be useful for data mining or interesting for future historians. But I can’t see that it has anywhere near the chance of success of cryonics.
Incidentally, a plastinated brain could be put in a box and buried in a random location and survive for a long time, especially in Antarctica.
> That’s approximately one billion bits. There are approximately 100 billion neurons, each with many synapses. How many polynomials of degree n can fit m points when n > m? Infinitely many.
That’s true but irrelevant and proves too much (the same point about the ‘underdetermination of theories’ also ‘proves’ that induction is impossible and we cannot learn anything about the world and that I am not writing anything meaningful here and you are not reading anything but noise).
There’s no reason to expect that brains will be maximally random, much reason to expect the opposite, and under many restrictive scenarios you can recover a polynomial with n > m; you might say that’s the defining trait of a number of increasingly popular techniques like the lasso, ridge regression, and the elastic net, which bring in priors/regularization/sparsity to let one recover a solution even when there are more parameters than observations (n < p, as it’s usually written). The question is whether personality and memories are recoverable in realistic scenarios, not unlikely polynomials.
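The lasso point can be made concrete with a toy example. This is a minimal sketch assuming NumPy and scikit-learn; the polynomial, its sparsity pattern, and all parameters are invented for illustration:

```python
# Toy sparse-recovery sketch: a degree-19 polynomial has 20 coefficients,
# but only 3 are nonzero. We observe it at just 10 points, so ordinary
# least squares is underdetermined (infinitely many exact fits), yet an
# L1 penalty (the lasso) still pins down a sparse, near-exact solution.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
degree, n_points = 19, 10

true_coefs = np.zeros(degree + 1)
true_coefs[[1, 4, 7]] = [2.0, -1.5, 0.5]       # sparse ground truth

x = rng.uniform(-1, 1, n_points)
X = np.vander(x, degree + 1, increasing=True)  # columns: x^0 .. x^19
y = X @ true_coefs

model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=200_000)
model.fit(X, y)

# The regularized fit reproduces the data despite having twice as many
# parameters as observations.
train_mse = np.mean((model.predict(X) - y) ** 2)
print(f"training MSE: {train_mse:.2e}")
print("large coefficients at indices:", np.flatnonzero(np.abs(model.coef_) > 0.1))
```

Swapping the L1 penalty for plain least squares would also interpolate the 10 points, but with no reason to land near the true coefficients; the sparsity prior is what selects among the infinitely many exact fits.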
On that, I tend to be fairly optimistic (or pessimistic, depending on how you look at it): humans seem to be small.
When I look at humans’ habits, attitudes, political beliefs, aesthetic preferences, food preferences, jobs, etc., the psychological literature says to me that all of these things are generally stable over lifetimes, modestly to largely heritable, and highly intercorrelated (so you can predict some from others), and many are predictable from underlying latent variables determining attitudes and values (politics in particular seems to have almost nothing to do with explicit factual reasoning; religion and atheism I also suspect to be almost entirely determined by cognitive traits). Stereotypes turn out to be very accurate in practice.

On top of that, our short-term memories are small, our long-term memories are vague, limited, and rewritten every time they’re recalled, and false memories are easily manufactured; we don’t seem to care much about this in practice, to the point where things like childhood amnesia are taken completely for granted and not regarded as dying.

Bandwidth into the brain may be large calculated naively, but estimated from how much we actually understand, retain, and make choices based on, it’s tiny. The heuristics-and-biases and expertise literatures imply we spend most of our time on autopilot in System 1.

Then we have evidence from brain traumas: while some small lesions can produce huge changes (pace Sacks), other people shrug off horrific traumas and problems like hydrocephaly without much issue; people come out of comas, or being trapped under ice, usually without their personalities radically changing; people struck by lightning report cognitive deficits but not loss of memory or personality changes...
(I think most people have experienced at least once the sudden realization of déjà vu, that they were doing the exact same thing or having the exact same conversation as they had in the past; or even (like myself) wrote a whole comment rebutting an old blog post in their head only to discover after reading further that they had already posted that comment, identical except for some spelling and punctuation differences.)
No, as much as humans may flatter ourselves that our minds are so terribly complex and definitely way more intricate than a cat’s and would be infinitely difficult to recover from a damaged sample, I suspect it may turn out to be dismayingly simple for superintelligences to recover a usable version of us from our genome, writings, brain, and knowledge of our environment.
> I suspect it may turn out to be dismayingly simple for superintelligences to recover a usable version of us from our genome, writings, brain, and knowledge of our environment.
I think an important point is to distinguish between producing a usable version of us that functions in a very similar way, and producing a version similar enough to be the ‘same’ person, preserving continuity of consciousness and providing immortality, if indeed that makes sense. Perhaps it doesn’t; maybe the Buddhists are correct, the self is an illusion, and the question of whether a copy of me (of varying quality) really is me is meaningless.
Anyway, I don’t deny that it would be possible to create someone who is extremely similar. People are not randomly sprinkled through personspace; they cluster, and identifying the correct cluster is far simpler than reconstructing an individual exactly. But my intuition is that the fidelity of reconstruction must be much higher to preserve identity. Comas do not necessarily involve a substantial loss of information, AFAIK, but with regard to more traumatic problems I am willing to bite the bullet and say that the survivors might not be the same people they were before.
As you say, some lesions cause bigger personality changes than others. But it seems to me that it’s easy to gather information about superficial aspects, while my inner monologue, my hopes and dreams, and other clichés are not so readily apparent from my web browsing habits. Perhaps I should start keeping a detailed diary. Of course, you might derive some comfort from the existence of a future person who is extremely similar to, but not the same person as, you. But I’d like to live too.
So to summarise: I don’t think the brain is maximally random, but I also don’t think orders of magnitude of compression are possible. If we disagree, it is not about information theory, but about the more confusing metaphysical question of whether cluster identification is sufficient for continuity of self.
And thanks for the reply; it’s been an interesting read.