You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors.
AGI will change our world in many ways, one of which concerns our views on personal identity. After AGI people will become accustomed to many different versions or branches of the same mind, mind forking, merging, etc.
Copy implies a version that is somehow lesser, which is not the case. Indeed in a successful sim scenario, almost everyone is technically a copy.
But the relevant comparison isn’t between the number of parameters in the model and the number of synapses; it’s between the number of parameters in the model and the amount of information we have to nail the model down.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately.
Right—again, we know that it can't be much more than about 10^14 bits (the number of synapses in an adult human brain is on the order of 10^14, not 10^15, by the way), and it could be as low as 10^10. The average synapse stores only a bit or two at most (you can look it up; it's been measured: the typical synapse is tiny and has an extremely low SNR, corresponding to a small number of bits). We can argue about the numbers in between, but it doesn't really matter, because either way it isn't that much.
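For concreteness, here is the back-of-envelope arithmetic behind that range, as a rough sketch; the inputs are just the figures quoted in this exchange (the synapse count, the bits-per-synapse estimate, and the gigabyte comparison above), not independent measurements:

```python
# Back-of-envelope arithmetic for the figures used in this exchange.
# The inputs are the ones quoted above, not independent measurements.

synapse_count    = 1e14   # adult synapse count assumed above
bits_per_synapse = 2      # "a bit or two at most" per synapse (take the high end)

upper_bound = synapse_count * bits_per_synapse   # ~2e14 bits, before compression
lower_bound = 1e10                               # low-end figure quoted above

# For comparison with the "gigabyte of maximally-compressed information" framing:
gigabyte_in_bits = 8e9

print(f"upper bound: ~{upper_bound:.0e} bits")
print(f"lower bound: ~{lower_bound:.0e} bits "
      f"(~{lower_bound / gigabyte_in_bits:.1f} GB, i.e. close to the 1 GB figure)")
```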
Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y); and if Y falls short of X, then however clever they are, they won't be able to specify that person accurately enough.
No—it just doesn't work that way, because identity is not binary. It comes in infinite shades of grey. Different levels of success require only getting close enough in mindspace, and "close enough" is highly relative to one's subjective knowledge of the person.
What matters most is consistency. It’s not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.
There will be multiple versions of past people—just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn’t nearly as important as you seem to think—and it is far less important than historical consistency with the rest of the world.
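To make the "shades of grey" and "consistency" points a bit more concrete, here is a toy sketch, not anything proposed in this thread: treat identity match as a graded score, and note that the only score anyone can actually compute is overlap with external records, since the original mind is not available for comparison. The memory sets and the Jaccard measure below are arbitrary illustrations:

```python
# Toy illustration only: identity match as a graded score rather than a yes/no,
# and "consistency" as overlap with external records rather than with the
# original mind (which is not available to check against).

def overlap(a: set, b: set) -> float:
    """Jaccard similarity: 1.0 means identical sets, 0.0 means disjoint."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical memory/record sets, purely for illustration.
sim_memories     = {"wedding 1998", "moved to Austin", "broke arm at age 9"}
external_records = {"wedding 1998", "moved to Austin", "tax filing 2003"}

print(f"consistency with recorded history: {overlap(sim_memories, external_records):.2f}")
```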
(I could not, in the situation you describe, actually know that I had “all the same memories”. That’s a large part of the point.)
We are in the same situation today. For all I know all of my past life is a fantasy created on the fly. What actually matters is consistency—that my memories match the memories of others and recorded history. And in fact due to the malleability of memory, consistency is often imperfect in human memories.
We really don’t remember that much at all—not accurately.
AGI will change our world in many ways, one of which concerns our views on personal identity.
I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.
Copy implies a version that is somehow lesser
That’s not how I was intending to use the word.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
identity is not binary
I promise, I do understand this, and I don’t see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what’s required is the exact same neurons, or anything like that.)
[...] What matters most [...] this isn’t nearly as important [...] far less important [...] What actually matters [...]
These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you’ve offered no grounds for thinking that they will feel the same way about this as you do, you’ve just stated your own position as if it’s a matter of objective fact (albeit about matters of not-objective-fact).
We are in the same situation today
Only if you don’t distinguish between what’s possible and what’s likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along.
I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation people change over time in ways that make identity a complicated and fuzzy affair. But for me—again, this involves value judgements, and yours may differ from mine, and the real question is what our successors will think—the truer this is, the less attractive ancestor-simulation becomes. If you tell me you can simulate my great-great-great-great-great-aunt Olga, about whom I know nothing at all, then I have absolutely no way of telling how closely the simulation resembles Olga-as-she-was, which means that the simulation has little extra value for me compared with simulating some random person not claimed to be my great^5-aunt. As for whether I should be glad of it for Olga's sake: if you mean new-Olga's, then an ancestor-sim is no better in this respect than a non-ancestor-sim; and if you mean old-Olga's, then the best I can do is to ask how much it would please me to learn that 200 years from now someone will make a simulation that calls itself by my name and has a somewhat similar personality and set of memories, but no more than that; the answer is that I couldn't care less whether anyone does.
(It feels like I’m repeating myself, for which I apologize. But I’m doing so largely because it seems like you’re completely ignoring the main points I’m making. Perhaps you feel similarly, in which case I’m sorry; for what it’s worth, I’m not aware that I’m ignoring any strong or important point you’re making.)
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
That was misworded—I meant the amount of information actually encoded in the synapses, after advanced compression. As I said before, synapses in neural networks are enormously redundant, such that even trivial compression dramatically reduces the storage requirements. For the amount of memory/storage needed to represent a human-mind-level sim, we get the estimated range of 10^10 to 10^14 bits discussed earlier. However, a great deal of this will be redundant across minds, so the amount required to specify how one individual differs from others will be even less.
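To put that range in more familiar storage units, and to illustrate the "redundant across minds" point, here is a small sketch; the unit conversion is straightforward, but the shared-versus-personal split at the end is a placeholder assumption, not a number from this discussion:

```python
# Convert the quoted 1e10 - 1e14 bit range into familiar storage units,
# then illustrate the "shared base + personal delta" idea with a placeholder split.

def as_storage(bits: float) -> str:
    """Render a bit count as gigabytes or terabytes."""
    gb = bits / 8 / 1e9
    return f"{gb:,.1f} GB" if gb < 1000 else f"{gb / 1000:,.1f} TB"

low, high = 1e10, 1e14
print(f"low end:  {as_storage(low)}")    # ~1.2 GB
print(f"high end: {as_storage(high)}")   # ~12.5 TB

# Hypothetical: if, say, 99% of that structure were common across minds,
# the per-individual description would shrink by another two orders of magnitude.
shared_fraction = 0.99   # placeholder, not an estimate from this thread
print(f"per-individual delta at the high end: {as_storage(high * (1 - shared_fraction))}")
```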
But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI).
Right. Well I have these values, and I am not alone. Most people’s values will also change in the era of AGI, as most people haven’t thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above average influence and wealth.
Your side discussion about your distant relatives suggests you don’t foresee how this is likely to come about in practice (which really is my fault as I haven’t explained it in this thread, although I have discussed bits of it previously).
It isn’t about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds—some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large-scale inference. This demand just continues to grow, and it ties into the pervasive virtual-world heaven tech that uploads want for other reasons.
In short order everyone in the world has proof that virtual heaven is real, and that uploading works. The world changes, and uploading becomes the norm. We become an em society.
Someone creates a real Harry Potter sim, and when Harry enters the ‘real’ world above he then wants to bring back his fictional parents. So it goes.
Then the next step is insurance for the living. Accidents can destroy or damage your brain—why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridiculously pervasive sensor monitoring of the future.
Eventually everyone realizes that they are already sims created by the AI.
It sucks to be an original—because there is no heaven if you die. It is awesome to be a sim, because we get a guaranteed afterlife.