The opinion-to-reasons ratio is quite high in both your comment and mine to which it’s replying, which is probably a sign that there’s only limited value in exploring our disagreements, but I’ll make a few comments.
One future civilization could perhaps create huge numbers of simulations. But why would it want to? (Note that this is not at all the same question as “why would it create any?”.)
The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations. You have to figure out exactly what the dead were like, which (despite your apparent confidence that it’s easy to see how easy it is if you just imagine the Matrix) I think is likely to be completely infeasible, and monstrously expensive if it’s possible at all. But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before? Where’s the value in that? (And if the answer is, as proposed by entirelyuseless, that to figure out who and what they were we need to do lots of simulations of their earthly existence, then note that that’s one more reason to think that resurrecting them is terribly expensive.)
(If we can resurrect the dead, then indeed I bet a lot of people will want to do it. But it seems to me they’ll want to do it for reasons incompatible with leaving the resurrected dead in simulations of the mundane early 21st century.)
You say with apparent confidence that “this technology probably isn’t that far away”. Of course that could be correct, but my guess is that you’re wronger than a very wrong thing made of wrong. We can’t even simulate C. elegans yet, even though that only has about 300 neurons and they’re always wired up the same way (which we know).
Yes, it’s an approximate inference problem. With an absolutely colossal number of parameters and, at least on the face of it, scarcely any actual information to base the inferences on. I’m unconvinced that “the sim never needs anything even remotely close to atomic information” given that the (simulated or not) world we’re in appears to contain particle accelerators and the like, but let’s suppose you’re right and that nothing finer-grained than simple neuron simulations is needed; you’re still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person. But it’s worse, because there are lots of people and they all interact with one another and those interactions are probably where our best hope of getting the information we need for the approximate inference problems comes from—so now we have to do careful joint simulations of lots of people and optimize all their parameters together. And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it’s all just a colossal challenge and I really don’t think waving your hands and saying “just think of a human brain sim growing up in something like the Matrix” is on the same planet as the right ballpark for justifying a claim that it’s anywhere near within reach.
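(For a sense of where figures like that come from, here is a rough order-of-magnitude sketch; the neuron and synapse counts are common textbook estimates of my own choosing, not anything established in this thread:)

```python
# Rough order-of-magnitude check on "one parameter per synapse".
# Neuron/synapse counts below are common textbook estimates, not exact figures.
neurons = 8.6e10                          # ~86 billion neurons in an adult human brain
for synapses_per_neuron in (1e3, 1e4):    # plausible range of synapses per neuron
    total = neurons * synapses_per_neuron
    print(f"{synapses_per_neuron:.0e} synapses/neuron -> ~{total:.0e} synapses total")
# prints roughly 9e13 and 9e14, i.e. the 10^14-10^15 ballpark argued over below.
```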
One future civilization could perhaps create huge numbers of simulations. But why would it want to?
I’ve already answered this—because living people have a high interest in past dead people, and would like them to live again. It’s that simple.
The cost of resurrecting the dead is not obviously the same as that of making new minds to share modern simulations.
True, but most of the additional cost boils down to a constant factor once you amortize at large scale. Recreating a single individual is very expensive. Recreating billions? The cost reduces to something much closer to the scaling cost of simulating that many minds.
You have to figure out exactly what the dead were like
No, you don’t. For example the amount of information remaining about my grandfather who died in the 1950s is pretty small. We could recover his DNA, and we have a few photos. We have some poetry he wrote, and letters. The total amount of information contained in the memories of living relatives is small, and will be even less by the time the tech is available.
So from my perspective the target is very wide. Personal identity is subjectively relative.
But then I repeat a question I raised earlier in this discussion: if you have the power to resurrect the dead in a simulated world, why put them back in a simulation of the same unsatisfactory world as they were in before?
You wouldn’t. I think you misunderstand. You need the historical sims to recreate the dead in the first place. But once that is running, you can copy out their minds at any point. However you always need one copy to remain in the historical sim for consistency (until they die in the hist-sim).
We can’t even simulate C. elegans yet, even though that only has about 300 neurons and they’re always wired up the same way (which we know).
You could also say we can’t simulate bacteria, but neither is relevant. I’m not familiar enough with C. elegans sims to evaluate your claim that the current sims are complete failures, but even if this is true it doesn’t tell us much, because only a tiny amount of resources has been spent on that.
Just to be clear—the historical resurrection sims (ress-sims) under discussion will be created by large-scale AGI (superintelligence). When I say this tech isn’t that far away, it’s because AGI isn’t that far away, and this follows shortly thereafter.
you’re still going to need at the barest minimum a parameter per synapse, which is something like 10^15 per person
Hardly. You are assuming naive encoding without compression. Neural nets—especially large biological brains—are enormously redundant and highly compressible.
Look—it’s really hard to accurately estimate the resources for things like this unless you actually know how to build it. 10^15 is a reasonable upper bound, but the lower bound is much lower.
For the lower bound, consider compressing the inner monologue—which naturally includes everything a person has ever read, heard, and said (even to themselves).
200 wpm over the roughly 500k minutes in a year is ~10^8 words/year; at ~8 bits/word that comes to ~100 MB/year.
So that gives a lower bound of around 10^10 bytes for a 100-year-old. This doesn’t include visual information, but the visual cortex is also highly compressible due to translational invariance.
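(A quick back-of-envelope check of that arithmetic, taking the 200 wpm and ~8 bits/word figures above at face value:)

```python
# Back-of-envelope check of the inner-monologue lower bound, using the figures above.
# Note this assumes the monologue runs around the clock, so it is only order-of-magnitude.
words_per_minute = 200
minutes_per_year = 365 * 24 * 60                       # ~5.3e5 minutes in a year
bits_per_word = 8                                      # the (very rough) compression assumption above

words_per_year = words_per_minute * minutes_per_year   # ~1e8 words/year
bytes_per_year = words_per_year * bits_per_word / 8    # ~1e8 bytes, i.e. ~100 MB/year
lifetime_bytes = bytes_per_year * 100                  # a 100-year life

print(f"{words_per_year:.1e} words/year, {bytes_per_year/1e6:.0f} MB/year, "
      f"~{lifetime_bytes:.0e} bytes over 100 years")   # ~1e10 bytes
```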
And if the goal is to resurrect the dead (rather than just make new people a bit like our ancestors) then we need really accurate approximate inference, and it’s all just a colossal challenge and I really don’t think waving your hands and saying “just think of a human brain sim growing up in something like the Matrix”
No—again, naysayers will always be able to claim “these aren’t really the same people”. But their opinions are worthless. The only opinions that matter are those of people who actually knew the relevant people, and the Turing test for resurrection is entirely subjective, relative to their limited knowledge of the resurrectee.
But the answer you go on to repeat is one I already explained wasn’t relevant, in the sentence after the one you quoted.
most of the additional cost boils down to a constant factor once you amortize at large scale.
I’m not sure what you’re arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead. (The factor might well be much more than 1000.) I don’t understand how this is any sort of counterargument to my suggestion that it’s a reason to simulate new minds rather than old.
the amount of information remaining about my grandfather who died in the 1950′s is pretty small.
You say that like it’s a good thing, but what it actually means is that almost certainly we can’t bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that’s all. Why would you prefer that over making new minds so much as to justify the large extra expense of getting the best approximation we can?
you always need one copy to remain in the historical sim for consistency
I’m not sure what that means. I’d expect that you use the historical simulation in the objective function for the (enormous) optimization problem of determining all the parameters that govern their brain, and then you throw it away and plug the resulting mind into your not-historical simulation. It will always have been the case that at one point you did the historical simulation, but the other simulation won’t start going wrong just because you shut down the historical one.
Anyway: as I said before, if you expect lots of historical simulation just to figure out what to put into the non-historical simulation, then that’s another reason to think that ancestor simulation is very expensive (because you have to do all that historical simulation). On the other hand, if you expect that a small amount of historical simulation will suffice then (1) I don’t believe you (if you’re estimating the parameters this way, you’ll need to do a lot of it; any optimization procedure needs to evaluate the objective function many times) and (2) in that case surely there are anthropic reasons to find this scenario unlikely, because then we should be very surprised to find ourselves in the historical sim rather than the non-historical one that’s the real purpose.
When I say this tech isn’t that far away, it’s because AGI isn’t that far away, and this follows shortly thereafter.
Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you’re outrageously overconfident about what’s going to happen on what timescales. We don’t know whether, or when, AGI will be achieved. We don’t know whether when it is it will rapidly turn into way-superhuman intelligence, or whether that will happen much slower (e.g., depending on hardware technology development which may not be sped up much by slightly-superhuman AGI), or even whether actually the technological wins that would lead to very-superhuman AGI simply aren’t possible for some kind of fundamental physical reason we haven’t grasped. We don’t know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don’t value at all.
You are assuming naive encoding without compression
No, I am assuming that smarter encoding doesn’t buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.
that gives a lower bound of 10^10 for a 100 year old
Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.
naysayers will always be able to claim “these aren’t really the same people”. But their opinions are worthless. The only opinions that matter are those who actually knew the relevant people
What makes you think those are different people’s opinions? If you present me with a simulated person who purports to be my dead grandfather, and I learn that he’s reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather. Perhaps I will have no way of telling the difference (since my own reactions on interacting with this simulated person can be available to the optimization process—if I don’t mind hundreds of years of simulated-me being used for that purpose) but there’s a big difference between “I can’t prove it’s not him” and “I have good reason to think it’s him”.
I don’t really have a great deal of time to explain this so I’ll be brief. Basically this is something I’ve thought a great deal about, and I have a rather detailed technical vision of how to achieve it (at least to the extent that anyone can today; I’m an expert in the relevant fields—computer simulation/graphics and machine learning—and this is my long-term life goal). Fully explaining a rough roadmap would require a small book or long paper, so just keep that in mind.
most of the additional cost boils down to a constant factor once you amortize at large scale.
I’m not sure what you’re arguing. I agree that the additional cost is basically a (large) constant factor; that is, if it costs X to simulate a freshly made new mind, maybe it costs 1000X to recover the details of a long-dead one and simulate that instead.
Sorry—I meant a large constant, not a constant multiplier. Simulating a mind costs the same—it doesn’t matter whether it’s in a historical sim world or a modern-day sim or a futuristic sim or a fantasy sim… the cost of simulating the world to (our very crude) sensory perception limits is always about the same.
The extra cost for an h-sim vs others is in the initial historical research/setup (a constant) and consistency guidance. The consistency enforcement can be achieved by replacing standard forward inference with a goal-directed hierarchical bidirectional inference. The cost ends up asymptotically about the same.
Instead of just a physical sim, it’s more like a very deep hierarchy where, at the highest levels of abstraction, historical events are compressed down to text-like form in some enormous evolving database, written and rewritten by an army of historian AIs. Lower, more detailed levels in the graph eventually resolve down into 3D objects and physical simulation, sparsely, as needed.
You say that like it’s a good thing, but what it actually means is that almost certainly we can’t bring your grandfather back to life, no matter what technology we have. Perhaps we could make someone who somewhat resembles your grandfather, but that’s all.
As I said earlier—you do not determine who is or is not my grandfather. Your beliefs have zero weight on that matter. This is such an enormously different perspective that it isn’t worth discussing more until you actually understand what I mean when I say personal identity is relative and subjective. Do you grok it?
Perhaps I am just misinterpreting your tone (easily done with written communication) but it seems to me that you’re outrageously overconfident about what’s going to happen on what timescales. We don’t know whether, or when, AGI will be achieved.
Perhaps, but I’m not a random sample—not part of your ‘we’. I’ve spent a great deal of time researching the road to AGI. I’ve written a little about related issues in the past.
AGI will be achieved shortly after we have brain-scale machine learning models (such as ANNs) running on affordable (<$10K) machines. This is at most only about 5 years away. Today we can simulate a few tens of billions of synapses in real time on a single GPU, and another 1000x performance improvement is on the table in the near future—from some mix of software and hardware advances. In fact, it could very well happen in just a year. (I happen to be working on this directly; I know more about it than just about anyone.)
AGI can mean many different things, so consider before arguing with the above.
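(For a sense of scale on the “tens of billions of synapses in real time on a single GPU” claim, a rough sketch under update rates and weight sizes I am assuming for illustration; the real figures could differ a lot:)

```python
# Very rough feasibility arithmetic for "tens of billions of synapses in real time
# on a single GPU". All rates and sizes here are illustrative assumptions only.
synapses = 2e10            # "a few tens of billions"
update_rate_hz = 100       # assumed effective per-synapse update rate
flops_per_update = 2       # one multiply-accumulate per synaptic update
bytes_per_synapse = 1      # assumed heavily quantized/compressed weight

compute = synapses * update_rate_hz * flops_per_update   # synaptic ops per second
memory = synapses * bytes_per_synapse                     # resident weight storage

print(f"~{compute:.0e} FLOP/s, ~{memory/1e9:.0f} GB of weights")
# ~4e12 FLOP/s is a small fraction of a modern GPU's raw throughput; the ~20 GB of
# weights suggests memory capacity/bandwidth, not arithmetic, would be the bottleneck.
```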
We don’t know whether, if we do make a strongly superhuman AGI, it will enable us to achieve anything resembling our current goals, or whether it will take us apart to use our atoms for something we don’t value at all.
Sure, but this whole conversation started with the assumption that we avoid such existential risks.
No, I am assuming that smarter encoding doesn’t buy you more than the outrageous amount by which I shrank the complexity by assuming only one parameter per synapse.
The number of parameters in the compressed model needs to be far less than the number of synapses—otherwise the model will overfit. Compression does not hurt performance, it improves it—enormously. More than that, it’s actually required at a fundamental level due to the connection between compression and prediction.
Tried optimizing a function of 10^10 parameters recently? It tends to take a while and converge to the wrong local optimum.
Obviously a model fitting a dataset of size 10^10 would need to compress that down even further to learn anything, so that’s an upper bound for the parameter bitsize.
If you present me with a simulated person who purports to be my dead grandfather, and I learn that he’s reconstructed from as little information as (I think) we both expect actually to be available, then I will not regard it as the same person as my grandfather.
Say you die tomorrow from some accident. You wake up in ‘heaven’, which you find out is really a sim in the year 2046. You discover that you are a sim (an AI, really) recreated in a historical sim from the biological original. You have all the same memories, and your friends and family (or sims of them? admittedly confusing) still call you by the same name and consider you the same. Who are you?
Do you really think that in this situation you would say—“I’m not the same person! I’m just an AI simulacrum. I don’t deserve to inherit any of my original’s wealth, status, or relationships! Just turn me off!”
Can you provide some links to your publications on the topic of machine learning?
Not yet. :) I meant “expert” only in the “read up on the field” sense, not recognized academic expert. Besides, much industrial work is not published in academic journals for various reasons (time isn’t justified, secrecy, etc).
Historical versus other sims: I agree that if the simulation runs for infinitely long then the relevant difference is an additive rather than a multiplicative constant. But in practice it won’t run for infinitely long.
Yes, of course I understand your point that I don’t get to decide what counts as your grandfather; neither do you get to decide what counts as mine. You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors. I do not expect that. Not because I think I get to decide what counts as your grandfather, but because I don’t expect our successors to think in the way that you apparently expect them to think.
Yes, you’ll have terrible overfitting problems if you have too many parameters. But the relevant comparison isn’t between the number of parameters in the model and the number of synapses; it’s between the number of parameters in the model and the amount of information we have to nail the model down. If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately. I appreciate that you think something far cruder will suffice. I hope you appreciate that I disagree. (I also hope you don’t think I disagree because I’m an idiot.) Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y), and if Y<<X then the correct conclusion isn’t “excellent, our number of parameters[1] will be relatively small to avoid overfitting, so we don’t need to worry that the fitting process will take for ever”, it’s “damn, it turns out we can’t reconstruct this person”.
[1] It would be better to say something like “number of independent parameters”, of course; the right thing might be lots of parameters + regularization rather than few parameters.
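(A toy illustration of the Y&lt;&lt;X worry, with entirely made-up dimensions; it is an analogy, not a model of the actual reconstruction problem: when there are far more unknowns than constraining observations, radically different reconstructions fit the available data equally well.)

```python
import numpy as np

# Toy illustration of under-determination: far fewer observations (Y) than
# parameters needed to pin the model down (X). Dimensions are made up.
rng = np.random.default_rng(0)
n_params, n_obs = 1000, 20                   # X >> Y
A = rng.normal(size=(n_obs, n_params))       # how observations depend on parameters
true_person = rng.normal(size=n_params)
y = A @ true_person                          # everything we get to observe

# Two different reconstructions that both reproduce the observations exactly:
fit1 = np.linalg.lstsq(A, y, rcond=None)[0]  # minimum-norm solution
null_direction = np.linalg.svd(A)[2][-1]     # a direction the data cannot see
fit2 = fit1 + 10.0 * null_direction

print(np.allclose(A @ fit1, y), np.allclose(A @ fit2, y))   # True True
print(np.linalg.norm(fit1 - fit2))                          # yet they differ a lot
```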
I would expect a sim whose opinions resemble mine to say, on waking up in heaven, something like “well, gosh, this is nice, and I certainly don’t want it turned off, but do you really have good reason to think that I’m an accurate model of the person whose memories I think I have?”. Perhaps not out loud, since no doubt that sim would prefer not to be turned off. But the relevant point here isn’t about what the sim would want (and particularly not about whether the sim would want to be turned off, which I bet would generally not be the case even if they were convinced they weren’t an accurate model) but about whether for the people responsible for creating the sim a crude approximation was close enough to their ancestor for it to be worth a lot of extra trouble to create that sim rather than a completely new one.
(I could not, in the situation you describe, actually know that I had “all the same memories”. That’s a large part of the point.)
You apparently expect that our successors will attach a lot of value to simulating people who for all they know (on the basis of a perhaps tiny amount of information) might as well be copies of their ancestors.
AGI will change our world in many ways, one of which concerns our views on personal identity. After AGI people will become accustomed to many different versions or branches of the same mind, mind forking, merging, etc.
Copy implies a version that is somehow lesser, which is not the case. Indeed in a successful sim scenario, almost everyone is technically a copy.
But the relevant comparison isn’t between the number of parameters in the model and the number of synapses; it’s between the number of parameters in the model and the amount of information we have to nail the model down.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
If it takes more than (say) a gigabyte of maximally-compressed information to describe how one person differs from others, then it will take more than (something on the order of) 10^9 parameters to specify a person that accurately.
Right—again we know that it can’t be much more than 10^14 (number of synapses in human adult, it’s not 10^15 BTW), and it could be as low as 10^10. The average synapse stores only a bit or two at most (you can look it up, it’s been measured—the typical median synapse is tiny and has an extremely low SNR corresponding to a small number of bits.) We can argue about numbers in between, but it doesn’t really matter because either way it isn’t that much.
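(The same endpoints in more familiar units, simple arithmetic taking the “bit or two per synapse” figure at face value:)

```python
# Convert the claimed range into bytes, taking the figures above at face value.
synapses = 1e14                 # upper end: one value per adult synapse
bits_per_synapse = 1.5          # "a bit or two at most"
lower_bound_bytes = 1e10        # ~10 GB, from the inner-monologue estimate earlier

upper_bound_bytes = synapses * bits_per_synapse / 8     # ~1.9e13 bytes
print(f"upper ~{upper_bound_bytes:.1e} bytes (~19 TB), "
      f"lower ~{lower_bound_bytes:.0e} bytes (~10 GB)")
```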
Anyway, my point here is this: specifying a person accurately enough requires whatever amount of information it does (call it X), and our successors will have whatever amount of usable information they do (call it Y),
No—it just doesn’t work that way, because identity is not binary. It is infinite shades of grey. Different levels of success require only getting close enough in mindspace, and success is highly relative to one’s subjective knowledge of the person.
What matters most is consistency. It’s not like the average person remembers everything they said a few years ago, so that 10^10 figure is extremely generous. Our memory is actually fairly poor.
There will be multiple versions of past people—just as we have multiple biographies today. Clearly there is some objective sense in which some versions are more authentic, but this isn’t nearly as important as you seem to think—and it is far less important than historical consistency with the rest of the world.
(I could not, in the situation you describe, actually know that I had “all the same memories”. That’s a large part of the point.)
We are in the same situation today. For all I know all of my past life is a fantasy created on the fly. What actually matters is consistency—that my memories match the memories of others and recorded history. And in fact due to the malleability of memory, consistency is often imperfect in human memories.
We really don’t remember that much at all—not accurately.
AGI will change our world in many ways, one of which concerns our views on personal identity.
I agree, but evidently we disagree about how our views on personal identity will change if and when AGI (and, which I think is what actually matters here, large-scale virtualization) comes along.
Copy implies a version that is somehow lesser
That’s not how I was intending to use the word.
The amount of information we have to nail down is just that required for a human mind sim, which is exactly the amount of compressed information encoded in the synapses.
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
identity is not binary
I promise, I do understand this, and I don’t see that anything I wrote requires that identity be binary. (In particular, at no point have I been intending to claim that what’s required is the exact same neurons, or anything like that.)
[...] What matters most [...] this isn’t nearly as important [...] far less important [...] What actually matters [...]
These are value judgements, or something like them. My values are apparently different from yours, which is fair enough. But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI). So far you’ve offered no grounds for thinking that they will feel the same way about this as you do, you’ve just stated your own position as if it’s a matter of objective fact (albeit about matters of not-objective-fact).
We are in the same situation today
Only if you don’t distinguish between what’s possible and what’s likely. Sure, I could have been created ten seconds ago with completely made-up memories. Or I could be in the hands of a malevolent demon determined to deceive me about everything. Or I could be suffering from some disastrous mental illness. But unless I adopt a position of radical skepticism (which I could; it would be completely irrefutable and completely useless) it seems reasonable not to worry about such possibilities until actual reason for thinking them likely comes along.
I will (of course!) agree that our situation has a thing or two in common with that one, because our perception and memory and inference are so limited and error-prone, and because even without simulation people change over time in ways that make identity a complicated and fuzzy affair. But for me—again, this involves value judgements and yours may differ from mine, and the real question is what our successors will think—the truer this is, the less attractive ancestor-simulation becomes for me. If you tell me you can simulate my great-great-great-great-great-aunt Olga about whom I know nothing at all, then I have absolutely no way of telling how closely the simulation resembles Olga-as-she-was, but that means that the simulation has little extra value for me compared with simulating some random person not claimed to be my great^5-aunt. As for whether I should be glad of it for Olga’s sake—well, if you mean new-Olga’s then an ancestor-sim is no better in this respect than a non-ancestor-sim; and if you mean old-Olga’s sake then the best I can do is to think how much it would please me to learn that 200 years from now someone will make a simulation that calls itself by my name and has a slightly similar personality and set of memories, but no more than that; the answer is that I couldn’t care less whether anyone does.
(It feels like I’m repeating myself, for which I apologize. But I’m doing so largely because it seems like you’re completely ignoring the main points I’m making. Perhaps you feel similarly, in which case I’m sorry; for what it’s worth, I’m not aware that I’m ignoring any strong or important point you’re making.)
You’ve been arguing that we need substantially less information than “exactly the amount of compressed information encoded in the synapses”.
That was misworded—I meant the amount of information actually encoded in the synapses, after advanced compression. As I said before, synapses in NNs are enormously redundant, such that trivial compression dramatically reduces the storage requirements. For the amount of memory/storage to represent a human-mind-level sim, we get the estimated range of 10^10 to 10^14, as discussed earlier. However, a great deal of this will be redundant across minds, so the amount required to specify the differences of one individual will be even less.
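(As a toy illustration of what “enormously redundant and highly compressible” can mean, with made-up numbers; an analogy only, not a claim about real synaptic weight matrices:)

```python
import numpy as np

# Toy illustration of the redundancy claim: a large weight matrix that is secretly
# low-rank compresses enormously. An analogy only, not a model of real brains.
rng = np.random.default_rng(0)
n, rank = 2000, 20
W = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, n))   # n*n "synaptic weights"

U, S, Vh = np.linalg.svd(W, full_matrices=False)
k = int((S > 1e-8 * S[0]).sum())           # effective rank recovered from W itself
compressed = n * k * 2                     # store U[:, :k] * S[:k] and Vh[:k] instead of W

print(f"raw: {W.size:,} values  compressed: {compressed:,} values  "
      f"ratio: {W.size / compressed:.0f}x")   # ~50x for these made-up numbers
```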
But the question actually at issue wasn’t one about our values (where we could just agree to disagree) but about, in effect, the likely values of our superintelligent AI successors (or perhaps our roughly-normally-intelligent successors making use of superintelligent AI).
Right. Well I have these values, and I am not alone. Most people’s values will also change in the era of AGI, as most people haven’t thought about this clearly. And finally, for a variety of reasons, I expect that people like me will have above average influence and wealth.
Your side discussion about your distant relatives suggests you don’t foresee how this is likely to come about in practice (which really is my fault as I haven’t explained it in this thread, although I have discussed bits of it previously).
It isn’t about distant ancestors. It starts with regular uploading. All these preserved brains will have damage of various kinds—some arising from the process itself, some from normal aging or disease. AI then steps in to fill in the gaps, using large scale inference. This demand just continues to grow, and it ties into the pervasive virtual world heaven tech that uploads want for other reasons.
In short order everyone in the world has proof that virtual heaven is real, and that uploading works. The world changes, and uploading becomes the norm. We become an em society.
Someone creates a real Harry Potter sim, and when Harry enters the ‘real’ world above he then wants to bring back his fictional parents. So it goes.
Then the next step is insurance for the living. Accidents can destroy or damage your brain—why risk that? So the AIs can create a simulated copy of the earth, kept up to date in real time through the ridiculous pervasive sensor monitoring of the future.
Eventually everyone realizes that they are already sims created by the AI.
It sucks to be an original—because there is no heaven if you die. It is awesome to be a sim, because we get a guaranteed afterlife.