I think I might have also once seen this exact example of repeated bouncing balls done with robot control, demonstrating how even with apparently stationary plates (maybe using electromagnets instead of tilting?) a tiny bit of high-speed computer-vision control could beat the chaos and make the ball bounce accurately many more times than the naive calculation says is possible, but I can’t immediately find it again.
The seminal result in chaos theory came from Edward Lorenz’s weather modeling in the early 1960s. Restarting a simulation partway through, he typed in initial conditions from a printout that had rounded off the stored values, and the new run failed to reproduce the old one. The tiny rounding errors compounded into larger errors, and over the course of an in-model month the predictions completely diverged. An error that small is roughly comparable to the flap of a butterfly’s wing, which led to the cliché about butterflies and hurricanes.
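You can see the divergence directly with a minimal sketch of Lorenz’s 1963 convection system (the initial conditions below are arbitrary, chosen just for illustration): two trajectories start one part in 10^8 apart and end up completely decorrelated.

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # classic Lorenz '63 convection equations
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt):
    # one fourth-order Runge-Kutta integration step
    def nudge(state, deriv, h):
        return tuple(si + h * di for si, di in zip(state, deriv))
    k1 = lorenz(s)
    k2 = lorenz(nudge(s, k1, dt / 2))
    k3 = lorenz(nudge(s, k2, dt / 2))
    k4 = lorenz(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def separation_after(t, eps=1e-8, dt=0.01):
    # distance between two runs whose x-coordinates start eps apart
    a, b = (1.0, 1.0, 1.0), (1.0 + eps, 1.0, 1.0)
    for _ in range(int(t / dt)):
        a, b = rk4_step(a, dt), rk4_step(b, dt)
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

print(separation_after(1))   # still tiny
print(separation_after(30))  # on the order of the attractor itself
```

Shrinking the initial error only delays the divergence, roughly logarithmically: each extra decimal place of precision buys a fixed amount of extra prediction time.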
If you’re trying to model a system, and the results of your model are extremely sensitive to minuscule data errors (i.e. the system is chaotic), and there is no practical way to obtain extremely accurate data, then chaos theory limits the usefulness of the model. It may still have some value: using standard models and available data it’s possible to predict the weather rather accurately for a few days and semi-accurately for a few days more, but the model may not be able to predict what you need.
This is one reason I’ve always been skeptical of the “uploaded brain” idea. My intuition is that inevitable minor errors in the model of the brain would cause the model to diverge from the source in a fairly short time.
This is true, but minor environmental perturbations, e.g. seeing something at a slightly different time, would likewise cause one to diverge from what one otherwise would have been in a fairly short time, so it seems like any notion of personal identity just has to be robust to exponential divergence.
Consider: a typical human brain has ~100 trillion synapses. Any attempt to map it would have some error rate. Is it still “you” if the error rate is 0.1%? 1%? 10%? Do positive vs. negative errors make a difference (i.e. spurious connections vs. missing ones)?
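To put those percentages in absolute terms, some back-of-envelope arithmetic using the ~100 trillion figure above:

```python
SYNAPSES = 100e12  # ~100 trillion synapses, per the estimate above

for rate in (0.001, 0.01, 0.10):
    print(f"{rate:.1%} error rate -> {rate * SYNAPSES:.0e} miswired synapses")
```

Even the optimistic 0.1% case amounts to roughly a hundred billion miswired synapses, i.e. about as many errors as the brain has neurons.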
Is this a way to get new and exciting psychiatric disorders?
I don’t know the answers, or even how we’d try to figure out the answers, but I don’t want to spend eternity as this guy.
Trial and error? E.g. first you upload animal subjects and see what fidelity seems to preserve all the animal traits you can find. At some point you then start with human volunteers (perhaps preferentially dying people?), and see whether the error rates that seemed to work for nonhuman animals also work for humans.
Also, I guess once you have a mostly-working human upload, you can test perturbations to it to see which factors it is most sensitive to.
My position is that either (1) my brain is computationally stable, in the sense that what I think, how I think it, and what I decide to do after thinking are fundamentally about my algorithm (personality/mind), and tiny changes in conditions (a random thermal fluctuation) are usually not important; or (2) my brain is not a reliable/robust machine, and my behaviour is very sensitive to the random thermal fluctuations of atoms in my brain.
In the first case, we wouldn’t expect small errors (for some value of small) in the uploaded brain to result in significant divergence from the real person (stability). In the second case I am left wondering why I would particularly care. Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?
So, I don’t think uploaded brains can be ruled out a priori on precision grounds. Some finite amount of precision suffices; the necessary precision is upper-bounded by the thermal randomness in a body-temperature brain.
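A back-of-envelope way to see why thermal noise sets a floor on the required precision: compare the characteristic thermal voltage at body temperature with the millivolt-scale differences neurons actually care about. (The membrane-potential figures below are rough textbook assumptions, not measurements.)

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 310.0            # body temperature, K

thermal_voltage = k_B * T / q  # kT/q, the characteristic thermal voltage scale
print(f"kT/q at 310 K: {thermal_voltage * 1000:.1f} mV")

# rough textbook numbers: resting potential ~ -70 mV, spike threshold ~ -55 mV
threshold_gap = 15e-3
print(f"thermal scale / threshold gap: {thermal_voltage / threshold_gap:.2f}")
```

Ion channels average over many charge carriers, so the effective noise is smaller than this single-charge scale, but it never vanishes: a scan that resolved a brain much more finely than its own thermal jitter would be measuring noise, not personality.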
Surely both (1) and (2) are true, each to a certain extent.
Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?
It depends. We know from experience how meat brains change over time. We have no idea how software brains change over time; it surely depends on the details of the technology used. The changes might be comparable, but they might be bizarre. The longer you run the program, the more extreme the changes are likely to be.
I can’t rule it out either. Nor can I rule it in. It’s conceivable, but there are enough issues that I’m highly skeptical.
I might be misunderstanding your point. My opinion is that software brains are extremely difficult (possibly impossibly difficult) because brains are complicated. Your position, as I understand it, is that they are extremely difficult (possibly impossibly difficult) because brains are chaotic.
If it’s the former (complexity), then there exists a sufficiently advanced model of the human brain that can work (where “sufficiently advanced” here means “probably always science fiction”). If brains are assumed to be chaotic, then a lot of what people think and do is random, and the simulated brain will necessarily end up with a different random seed due to measurement errors. This would matter in some brain-simulating contexts; for example, it would make predicting someone’s future behaviour based on a simulation of their brain impossible. (Omega from Newcomb’s paradox would struggle to predict whether people would two-box or not.) However, from the point of view of chasing immortality for yourself or a loved one, the chaos doesn’t seem to be an immediate problem. If my decision to one-box was fundamentally random (down to thermal fluctuations) and trivial changes on the day could have changed my mind, then it couldn’t have been part of my personality. My point was that, from the immortality point of view, we only really care about preserving the signal, and can accept different noise.
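The signal-vs.-noise point can be made concrete with an entirely made-up toy model: treat a binary decision as a fixed “personality” bias plus thermal noise. When the bias dominates, which random seed you happen to get barely matters; when the bias is zero, the seed decides everything, so there was never a personality fact to preserve.

```python
import random

def decide(bias, seed, trials=10_000):
    # fraction of noisy re-runs in which the agent "one-boxes"
    rng = random.Random(seed)
    return sum(bias + rng.gauss(0.0, 1.0) > 0 for _ in range(trials)) / trials

# strong personality signal: the noise seed barely matters
print(decide(3.0, seed=1), decide(3.0, seed=2))  # both close to 1.0
# no signal: each re-run is a coin flip, so any single decision is pure noise
print(decide(0.0, seed=1), decide(0.0, seed=2))  # both close to 0.5
```

An upload that got the bias right but the noise wrong would still one-box, even though no individual noisy run matches the original.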
I certainly agree that brains are complicated.
I think part of the difference is that I’m considering the uploading process; it seems to me that you’re skipping past it, which amounts to assuming it works perfectly.
Consider the upload of Bob the volunteer. The idea that software = Bob is based on the idea that Bob’s connectome of roughly 100 trillion synapses is accurately captured by the upload process. It seems fairly obvious to me that this process will not capture every single synapse with no errors (at least in early versions). It will miss a percentage and probably also invent some that meat-Bob doesn’t have.
This raises the question of how good a copy is good enough. If brains are chaotic, and I would expect them to be, even small error rates would have large consequences for the output of the simulation. In short, I would expect that for semi-realistic upload accuracy (whatever that means in this context), simulated Bob wouldn’t think or behave much like actual Bob.
Well, now I’m wondering—is neural network training chaotic?
Sometimes!
https://sohl-dickstein.github.io/2024/02/12/fractal.html
Huh, interesting! So the way I’m thinking about this is, your loss landscape determines the attractor/repellor structure of your phase space (= network parameter space). For a (reasonable) optimization algorithm to have chaotic behavior on that landscape, it seems like the landscape would either have to have 1) a positive-measure flat region, on which the dynamics were ergodic, or 2) a strange attractor, which seems more plausible.
I’m not sure how that relates to the above link; it mentions the parameters “diverging”, but it’s not clear to me how neural network weights can diverge; aren’t they bounded?
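For intuition on the “aren’t they bounded?” question: nothing in plain gradient descent bounds the weights. Even on a one-dimensional quadratic loss, a too-large learning rate makes the iterates grow without bound. (This is just a toy sketch of one divergence mechanism, not a claim about what the linked post is doing.)

```python
def gd(w, lr, steps=20):
    # gradient descent on f(w) = w**2, whose gradient is 2*w
    for _ in range(steps):
        w -= lr * 2 * w
    return w

print(gd(1.0, 0.1))  # shrinks toward 0: update factor (1 - 2*lr) = 0.8
print(gd(1.0, 1.1))  # blows up: update factor is -1.2, so |w| grows each step
```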
The claim that uploaded brains don’t work because of chaos turns out not to hold up well, because it’s usually easier to control divergence than to predict it: strategies like fast-feedback control can prevent the system from ever entering the chaotic region. More generally, a lot of misapplication of chaos theory starts by incorrectly assuming, absent other assumptions, that hardness of prediction equals hardness of control:
See more below:
https://www.lesswrong.com/posts/epgCXiv3Yy3qgcsys/you-can-t-predict-a-game-of-pinball#wjLFhiWWacByqyu6a
I also like tailcalled’s comment on the situation, too.
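A toy version of the bouncing-ball point: the logistic map at r = 4 is fully chaotic, yet a tiny feedback nudge, applied only when the orbit happens to pass near the unstable fixed point, pins it there indefinitely. Long-term prediction is hopeless; control is easy. (This is an OGY-flavored sketch with the logistic map standing in for the mechanical systems in the link, not the setup from the post.)

```python
def logistic(x, u=0.0):
    # fully chaotic logistic map (r = 4) plus a small control input u
    return 4.0 * x * (1.0 - x) + u

XSTAR = 0.75  # the map's unstable fixed point

def run(x, steps=50, control=False):
    worst = 0.0
    for _ in range(steps):
        delta = x - XSTAR
        worst = max(worst, abs(delta))
        # nudge only when the orbit wanders near the fixed point;
        # u = 2*delta cancels the local stretching factor f'(0.75) = -2
        u = 2.0 * delta if control and abs(delta) < 0.02 else 0.0
        x = logistic(x, u)
    return x, worst

print(run(0.76, control=True))   # ends pinned at 0.75
print(run(0.76, control=False))  # wanders over most of [0, 1]
```

The control inputs are bounded by 0.04, vanishingly small next to the map’s own dynamics, which is the general pattern: chaos amplifies small perturbations, and a controller simply supplies the right small perturbations.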
To call something an “uploaded brain” is to make two claims. First, that it is a (stable) mind. Second, that it is in some important sense equivalent to a particular meat brain (e.g., that its output is the same as the meat brain, or that its experiences are the same as the meat brain’s). The sorts of methods you’re talking about to stabilize the mind help with the first claim, but not with the second.
I’ve always struggled to make sense of the idea of brain uploading because it seems to rely on some sort of dualism. As a materialist, it seems obvious to me that a brain is a brain, a program that replicates the brain’s output is a program (and will perform its task more or less well but probably not perfectly), and the two are not the same.
I think the crux of it is here:
I think that basically everything in the universe can be considered a program/computation, but I also think the notion of a program/computation is quite trivial.
More substantively, I think it might be possible to replicate at least some parts of the physical world with future computers that have what is called physical universality, where they can manipulate the physical world essentially arbitrarily.
So I don’t view brains and computer programs as being of 2 different types, but rather as the same type as a program/computation.
See below for some intuition as to why.
http://www.amirrorclear.net/academic/ideas/simulation/index.html
The ancients considered everything to be the work of spirits. The medievals considered the cosmos to be a kingdom. Early moderns likened the universe to a machine. Every age has its dominant metaphors. All of them are oversimplifications of a more complex truth.
There are no properties of a brain that define that brain as “you”, except for the program it runs.
Suppose you had an identical twin with identical genes and, until very recently, an identical history. From the perspective of anyone else, you’re similar enough to be interchangeable with each other. But from your perspective, the twin would be a different person.
The brain is you, full stop. It isn’t running a computer program; its hardware and software are inseparable and developed together over the course of your life. In other words, the hardware/software distinction doesn’t apply to brains.
This seems like a very underpowered sentence that doesn’t actually need chaos theory. How do you know you’re in a system that is chaotic, as opposed to having shitty sensors or a terrible model? What do you get from the theory, as opposed to the empirical result that your predictions only stay accurate for so long?
[For everyone else: Hastings is addressing these questions more directly. But I’m still interested in what Jay or anyone else has to say.]
Let’s try again. Chaotic systems usually don’t do exactly what you want them to, and they almost never do the right thing 1000 times in a row. If you model a system using ordinary modeling techniques, chaos theory can tell you whether the system is going to be finicky and unreliable (in a specific way). This saves you the trouble of actually building a system that won’t work reliably. Basically, it marks off certain areas of solution space as not viable.
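That “specific way” can be made quantitative via the largest Lyapunov exponent: positive means small errors compound exponentially (finicky), negative means they die out (reliable). A sketch, using the logistic map as a stand-in for whatever system you’re screening:

```python
import math

def lyapunov(r, x=0.1234, n=100_000, burn=1_000):
    # average log-stretch factor along an orbit of the logistic map
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r - 2 * r * x))  # |f'(x)| = |r - 2*r*x|
        x = r * x * (1 - x)
    return total / n

print(lyapunov(4.0))  # positive (analytically ln 2): chaotic, not viable
print(lyapunov(3.2))  # negative: settles into a stable 2-cycle, predictable
```

Running this over a range of parameters is exactly the “marking off solution space” move: you reject the parameter regions with positive exponents before building anything.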
Also, there’s Lavarand. It turns out that lava lamps are chaotic.
For what it’s worth, I think you’re getting downvoted in part because what you write seems to indicate that you didn’t read the post.
Hastings clearly has more experience with chaos theory than I do (I upvoted his comment). I’m hoping that my rather simplistic grasp of the field might result in a simpler-to-understand answer (that’s still basically correct).
Chaos theory is a branch of math; it characterizes models (equations). If your model is terrible, it can’t help you. What the theory tells you is how wildly your model will react to small perturbations.
AFAIK the only invention made possible by chaos theory is random number generators. If a system you’re modeling is extremely chaotic, you can use its output for random numbers with confidence that nobody will ever be able to model or replicate your system with sufficient precision to reproduce its output.
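A deliberately simplified toy of that idea (real designs like Lavarand digitize a physical chaotic source precisely because a software-only map like this can be replicated by anyone who learns the seed):

```python
def chaotic_bits(seed, n):
    # iterate the chaotic logistic map and emit one bit per step
    x = seed
    bits = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

# seeds differing by one part in a billion soon give unrelated bitstreams
print(chaotic_bits(0.123456789, 32))
print(chaotic_bits(0.123456790, 32))
```

The security of the physical version rests on exactly this sensitivity: an attacker would need to know the lamp’s state to impossible precision to reproduce the output.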
That wasn’t well phrased. Oops.