Thanks, this is very useful. I have many follow-up questions.

How possible is it to genuinely rule out a solution? How much does chaos beat a rock that says “most new things fail”? Is there any cache of details I could cite?

The good news is that chaos theory can rule out solutions with extreme prejudice, and because it’s a formal theory, it lets you be very clear about whether it’s ruling out a solution absolutely (FluttershAI and Clippy combined aren’t going to be able to predict the weather a decade in advance) vs. ruling out a solution in all practicality, but teeeechnically (i.e., predicting 4-5 swings of a double pendulum). Here are the concrete examples that come to mind:
I wrote a CPU N-body simulator, and then ported it to CUDA. I can’t test that the port is correct by comparing long trajectories of the CPU simulator to the CUDA simulator, and because I know this is a chaos problem, I won’t try to fix this by adding epsilons to the test. Instead, I will fix this by running the test simulation for less than roughly one Lyapunov time (see the sketch below).
I wrote a genetic algorithm for designing trebuchets. The final machine, while very efficient, visibly relies on precise timing after several cycles of a double pendulum. Therefore, I know it can’t be built in real life.

I see a viral gif of black and white balls being dumped in a pile, so that when they come to rest they form a smiley face. I know it’s fake, because that’s not something that can be achieved even with careful orchestration.

I prove a differential equation is chaotic, so I don’t need to try to find a closed-form solution.
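Here is that sketch for the N-body test, as a minimal illustration of the idea (the simulator functions and the Lyapunov-time figure are placeholders for whatever your own code provides, not a real API):

import numpy as np

def test_cuda_port_matches_cpu():
    # 64 bodies with fixed random initial positions, zero velocity.
    x0 = np.random.RandomState(0).randn(64, 3)
    v0 = np.zeros((64, 3))
    lyapunov_time = 10.0   # placeholder: your system's estimated value
    dt = 1e-3

    # Key point: only compare trajectories for a time well under one
    # Lyapunov time, so CPU-vs-GPU floating-point differences haven't
    # yet been amplified exponentially by the chaos.
    n_steps = int(0.5 * lyapunov_time / dt)
    traj_cpu = simulate_cpu(x0, v0, dt, n_steps)    # hypothetical function
    traj_gpu = simulate_cuda(x0, v0, dt, n_steps)   # hypothetical function

    # A modest fixed tolerance suffices; no epsilon-tuning arms race.
    assert np.allclose(traj_cpu, traj_gpu, rtol=1e-5, atol=1e-8)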
One thing that jumps out, writing this out explicitly, is that chaos theory conceivably could be replaced with the intuition “well, obviously that won’t work,” and so I don’t know to what extent chaos theory just formalized wisdom that existed pre-1950s, vs. generated wisdom that got incorporated into modern common sense. Either way, formal is nice: in particular, the “can’t test end-to-end simulations by direct comparison” and “can’t find a closed-form solution” cases saved me a lot of time.

Yeah, this is top of my mind as well. To get a sense of the cultural shift, I’m trying to interview people who were around for the change, or at least knew people who were. If anyone knows any boomer scientists or mathematicians with thoughts on this, I’d love to talk to them.

I haven’t nailed this down yet, but there’s an interesting nugget along the lines of chaos being used to get scientists comfortable with existential uncertainty, which (some) mathematicians already were. My claim of the opposite triggered a great discussion on twitter[1], and @Jeffrey Heninger gave me a great concept: the chaos paradigm shift was moving from “what is the solution?” to “what can I know, even if the solution is unknowable?” That kind of cultural shift seems even more important than the math, but harder to study.

BTW, thank you for your time and all your great answers. This is really helpful, and I plan on featuring “ruling things out” if I do a follow-up post.
I’ve now had two ex-mathematicians, one ex-astrophysicist, and one of the foremost theoretical ecologists in the world convincingly disagree with me.
Very interesting. Could you give more details on one of these concrete examples?

E.g., calculating a Lyapunov exponent, or proving a differential equation is chaotic, for a DE that came up in practice for you.
Differential equation example: I wanted a closed-form solution for the range of the simplest possible trebuchet, which is just a seesaw. This is perfectly achievable; see for example http://ffden-2.phys.uaf.edu/211.fall2000.web.projects/J%20Mann%20and%20J%20James/see_saw.html. I then wanted a closed-form solution for the second simplest trebuchet, a seesaw with a sling. This is impossible: even though the motion of the trebuchet with sling isn’t chaotic during the throw, it can be made chaotic just by varying the initial conditions, which rules out a simple closed-form solution even for the non-chaotic initial conditions.

Lyapunov exponent example: for the bouncing balls, if each ball travels 1 diameter between bounces, then a change in velocity angle of 1 degree pre-bounce becomes a change in angle of 4 degrees post-bounce (this number may be 6; I am bad at geometry). So errors grow by a factor of about 4 per bounce, corresponding to a Lyapunov exponent of ln 4 ≈ 1.4 if time is measured in bounces.
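As a sanity check on that estimate, a few lines of arithmetic (taking the 4x-per-bounce figure at face value; the numbers are illustrative):

import math

growth = 4.0                      # angle-error multiplier per bounce
lyapunov = math.log(growth)       # ~1.39 per bounce

# Bounces until a 1-degree aiming error becomes a 90-degree one:
n = math.log(90.0 / 1.0) / math.log(growth)
print(lyapunov, n)                # ~1.39, ~3.2 bounces of predictability

which lines up with the “4-5 swings of a double pendulum” flavor of bound mentioned above.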
I’m curious about this part:

even though the motion of the trebuchet with sling isn’t chaotic during the throw, it can be made chaotic just by varying the initial conditions, which rules out a simple closed-form solution even for the non-chaotic initial conditions

Do you know what theorems/whatever this is from? It seems to me that if you know that “throws” constitute a subset of phase space that isn’t chaotic, then you should be able to have a closed-form solution for those trajectories.
So this turns out to be a doozy, but it’s really fascinating. I don’t have an answer. An answer would look like either “normal chaotic differential equations don’t have general exact solutions” or “there is no relationship between being chaotic and not having an exact solution,” but deciding which is which won’t just require a proof; it would also require good definitions of “normal differential equation” and “exact solution.” (The good definition of “general” is: the initial conditions with exact solutions have nonzero measure.) I do have some partial work.

A chaotic differential equation has to be nonlinear and at least third order, and almost all nonlinear third-order differential equations don’t admit general exact solutions. So the statement “as a heuristic, chaotic differential equations don’t have general exact solutions” seems pretty unimpressive. However, I wrongly believed the strong version of this heuristic, and that belief was useful: I wanted to model trebuchet arm-sling dynamics, recognized that the true form could not be solved, and switched to a simplified model chosen for which simplifications would prevent chaos (no gravity; the sling is wrapped around a drum instead of fixed to the tip of an arm), and then was able to find an exact solution. (Note that this solvable system starts as nonlinear 4th order, but can be reduced using conservation-of-angular-momentum hacks.)
Now, it is known that a chaotic difference equation can have an exact solution: the equation x(n+1) = 2x(n) mod 1 is formally chaotic and has the exact solution x(n) = 2^n x(0) mod 1. A differential equation exhibiting chaotic behaviour can likewise have an exact solution if it has discontinuous derivatives, because this difference equation can be embedded in a flow constructed as follows:
The equation is in three variables x, y, z, with dz/dt = 1 always:

if 0 < z < 1:
    if x > 0: dx/dt = 0, dy/dt = 1
    if x < 0: dx/dt = 0, dy/dt = -1
if 1 < z < 2:
    if y > 0: dx/dt = -0.5, dy/dt = 0
    if y < 0: dx/dt = 0.5, dy/dt = 0
if 2 < z < 3:
    dx/dt = x ln(2)
    dy/dt = -y/(3 - z)
and then make it periodic by gluing z = 0 to z = 3 in phase space. (This is pretty similar to the structure of the Lorenz attractor, except that in the Lorenz system, the sheets of solutions get close together but don’t actually meet.) This is an awful, weird ODE: the derivative is discontinuous, and not even bounded near the point where the sheets of solutions merge.
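As an aside, the doubling-map half of this is easy to check numerically; a minimal sketch (plain Python, nothing assumed beyond the map itself):

# The doubling map: formally chaotic, yet with a closed-form solution.
def step(x):
    return (2.0 * x) % 1.0

x0 = 0.123456789
x = x0
for n in range(1, 11):
    x = step(x)
    assert abs(x - (2**n * x0) % 1.0) < 1e-9   # matches x(n) = 2^n x(0) mod 1

# Sensitive dependence: starts 1e-9 apart decorrelate in ~30 steps.
a, b = 0.3, 0.3 + 1e-9
for n in range(40):
    a, b = step(a), step(b)
print(abs(a - b))   # no longer small: the trajectories have fully decorrelated
# (Caveat: binary floating point runs out of bits after ~50 doublings,
# so keep demos of this particular map short.)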
Plenty of prototypical chaotic differential equations have a sprinkling of exact solutions: e.g., three bodies orbiting in an equilateral triangle, hence the requirement for a “general” exact solution.
The three body problem “has” an “exact” “series” “solution” but it appears to be quite effed: for one thing, no one will tell me the coefficient of the first term. I suspect that in fact the first term is calculated by solving the motion for all time, and then finding increasingly good series approximations to that motion.
I strongly suspect that the correct answer to this question can be found in one of these Stack Exchange posts, but I have yet to fully understand them:
https://physics.stackexchange.com/questions/340795/why-are-we-sure-that-integrals-of-motion-dont-exist-in-a-chaotic-system?rq=1
https://physics.stackexchange.com/questions/201547/chaos-and-integrability-in-classical-mechanics
There are certainly billiards with chaotic and exactly solvable components: if nothing else, place a circular billiard next to an oval one. So, for the original claim to be true in any meaningful way, it may have to exclude all differential equations with case statements, which sounds increasingly unlike a true, fundamental theorem.

If this isn’t an open problem, then somewhere on the internet there is a chaotic, normal-looking system of ODEs (with aesthetics like x'''' = sin(x''') - x'y''', y' = 1 - y/x', etc.) posted next to a general exact solution, perhaps only valid for non-chaotic initial conditions, or else a proof that no such system exists. The solvable system is probably out there, and related to billiards.
Final edit: the series solution to the three-body problem is mathematically legit; see page 64 here:
https://ntrs.nasa.gov/citations/19670005590
So “can’t find general exact solution to chaotic differential equation” is just uncomplicatedly false
I am suddenly unsure whether it is true! It would certainly have to be stated more carefully than I phrased it, as it is trivially false if the differential equation is allowed to be discontinuous between closed-form regions and chaotic regions.
I am afraid I don’t quite understand the bouncing balls example. Could you give a little more detail? Thank you in advance!
Not a substantive response, just wanted to say that I really really like your comment for having so many detailed real-world examples.
The seminal result for chaos theory came from weather modeling. An atmospheric model was migrated to a more powerful computer, but it didn’t give the same results as it had on the old computer. It turned out that, in the process of migration, the initial condition data had been rounded to the eighth decimal place. The tiny errors compounded into larger errors, and over the course of an in-model month the predictions completely diverged. An error in the eighth decimal place is roughly comparable to the flap of a butterfly’s wing, which led to the cliche about butterflies and hurricanes.
If you’re trying to model a system, and the results of your model are extremely sensitive to minuscule data errors (i.e., the system is chaotic), and there is no practical way to obtain extremely accurate data, then chaos theory limits the usefulness of the model. The model may still have some value; using standard models and available data it’s possible to predict the weather rather accurately for a few days and semi-accurately for a few days more, but it may not be able to predict what you need.
This is one reason I’ve always been skeptical of the “uploaded brain” idea. My intuition is that inevitable minor errors in the model of the brain would cause the model to diverge from the source in a fairly short time.
This is true, but minor environmental perturbations (e.g., seeing something at a slightly different time) would also cause one to diverge from what one otherwise would have been in a fairly short time, so it seems like any notion of personal identity just has to be robust to exponential divergence.

Consider: a typical human brain has ~100 trillion synapses. Any attempt to map it would have some error rate. Is it still “you” if the error rate is 0.1%? 1%? 10%? Do positive vs. negative errors make a difference (i.e., missing connections vs. spurious connections)?
Is this a way to get new and exciting psychiatric disorders?
I don’t know the answers, or even how we’d try to figure out the answers, but I don’t want to spend eternity as this guy.
Trial and error? E.g., first you upload animal subjects and see what fidelity seems to preserve all the animal traits you can find. At some point you then start with human volunteers (perhaps preferentially dying people?), and see whether the error rates that seem to work for nonhuman animals also work for humans.
Also I guess once you have a mostly-working human upload, you can test perturbations to this upload to see what factors they are most sensitive to.
My position is that either (1) my brain is computationally stable, in the sense that what I think, how I think it, and what I decide to do after thinking are fundamentally about my algorithm (personality/mind), and tiny changes in the conditions (a random thermal fluctuation) are usually not important; or (2) my brain is not a reliable/robust machine, and my behaviour is very sensitive to the random thermal fluctuations of atoms in my brain.

In the first case, we wouldn’t expect small errors (for some value of small) in the uploaded brain to result in significant divergence from the real person (stability). In the second case, I am left wondering why I would particularly care. Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?

So, I don’t think uploaded brains can be ruled out a priori on precision grounds. There exists a finite amount of precision that suffices; the necessary precision is upper-bounded by the thermal randomness in a body-temperature brain.
Surely both (1) and (2) are true, each to a certain extent.
Are the random thermal fluctuations pushing me around somehow better than the equally random measurement errors pushing my soft-copy around?
It depends. We know from experience how meat brains change over time. We have no idea how software brains change over time; it surely depends on the details of the technology used. The changes might be comparable, but they might be bizarre. The longer you run the program, the more extreme the changes are likely to be.
I can’t rule it out either. Nor can I rule it in. It’s conceivable, but there are enough issues that I’m highly skeptical.
I might be misunderstanding your point. My opinion is that software brains are extremely difficult (possibly impossibly difficult) because brains are complicated. Your position, as I understand it, is that they are extremely difficult (possibly impossibly difficult) because brains are chaotic.
If it’s the former (complexity), then there exists a sufficiently advanced model of the human brain that can work (where “sufficiently advanced” here means “probably always science fiction”). If brains are assumed to be chaotic, then a lot of what people think and do is random, and the simulated brains will necessarily end up with a different random seed due to measurement errors. This would matter in some brain-simulating contexts; for example, it would make predicting someone’s future behaviour based on a simulation of their brain impossible. (Omega from Newcomb’s paradox would struggle to predict whether people would two-box or not.) However, from the point of view of chasing immortality for yourself or a loved one, the chaos doesn’t seem to be an immediate problem. If my decision to one-box was fundamentally random (down to thermal fluctuations) and trivial changes on the day could have changed my mind, then it couldn’t have been part of my personality. My point is that, from the immortality point of view, we only really care about preserving the signal, and can accept different noise.
I certainly agree that brains are complicated.
I think part of the difference is that I’m considering the uploading process; it seems to me that you’re skipping past it, which amounts to assuming it works perfectly.
Consider the upload of Bob the volunteer. The idea that software = Bob is based on the idea that Bob’s connectome of roughly 100 trillion synapses is accurately captured by the upload process. It seems fairly obvious to me that this process will not capture every single synapse with no errors (at least in early versions). It will miss a percentage and probably also invent some that meat-Bob doesn’t have.
This raises the question of how good a copy is good enough. If brains are chaotic, and I would expect them to be, even small error rates would have large consequences for the output of the simulation. In short, I would expect that for semi-realistic upload accuracy (whatever that means in this context), simulated Bob wouldn’t think or behave much like actual Bob.
Well, now I’m wondering—is neural network training chaotic?
Sometimes!
https://sohl-dickstein.github.io/2024/02/12/fractal.html
Huh, interesting! So the way I’m thinking about this is, your loss landscape determines the attractor/repellor structure of your phase space (= network parameter space). For a (reasonable) optimization algorithm to have chaotic behavior on that landscape, it seems like the landscape would either have to have 1) a positive-measure flat region, on which the dynamics were ergodic, or 2) a strange attractor, which seems more plausible.
I’m not sure how that relates to the above link; it mentions the parameters “diverging”, but it’s not clear to me how neural network weights can diverge; aren’t they bounded?
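For what it’s worth, nothing inherently bounds the weights: even plain gradient descent on a quadratic diverges once the learning rate exceeds 2/curvature, which is a toy version of the convergence/divergence boundary that post is tracing. A minimal sketch (illustrative numbers only):

# Gradient descent on f(w) = 0.5 * c * w^2 multiplies w by (1 - lr*c)
# each step; with |1 - lr*c| > 1 the weight grows without bound.
c, lr, w = 10.0, 0.25, 0.1        # 2/c = 0.2, so lr is past the threshold
for _ in range(50):
    w -= lr * (c * w)             # gradient of f is c*w
print(w)                          # ~6e7: the parameters have diverged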
The claim that uploaded brains don’t work because of chaos turns out not to hold up, because it’s usually easier to control divergence than to predict it: you can use strategies like fast-feedback control to prevent the system from ever entering the chaotic region. More generally, a lot of misapplication of chaos theory starts by incorrectly assuming that hardness of prediction equals hardness of control, without further assumptions.
See more below:
https://www.lesswrong.com/posts/epgCXiv3Yy3qgcsys/you-can-t-predict-a-game-of-pinball#wjLFhiWWacByqyu6a
I also like tailcalled’s comment on the situation, too.

I think I might also have once seen this exact repeated-bouncing-balls example done with robot control, demonstrating how even with apparently stationary plates (maybe using electromagnets instead of tilting?), a tiny bit of high-speed computer-vision control could beat the chaos and make the ball bounce accurately many more times than the naive calculation says is possible, but I can’t immediately re-find it.
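To illustrate the control-beats-prediction point with a toy example: here is the chaotic logistic map held at its unstable fixed point by tiny feedback nudges, in the spirit of OGY (Ott-Grebogi-Yorke) control; the gain and threshold are just illustrative:

# Logistic map at r = 3.9 is chaotic; the fixed point x* = 1 - 1/r is
# unstable since |f'(x*)| = r - 2 = 1.9 > 1. Small feedback tames it.
r = 3.9
x_star = 1.0 - 1.0 / r
K = 0.6                     # chosen so |(1 - K) * f'(x*)| = 0.76 < 1
x = 0.5
for n in range(1000):
    x = r * x * (1.0 - x)
    if abs(x - x_star) < 0.05:          # only act when already close
        x -= K * (x - x_star)           # nudge of size at most 0.03
print(abs(x - x_star))                  # tiny: captured and held

No prediction of the long-run trajectory was needed, only cheap measurements and small corrections, which is the general shape of the argument in the linked post.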
To call something an “uploaded brain” is to make two claims. First, that it is a (stable) mind. Second, that it is in some important sense equivalent to a particular meat brain (e.g., that its output is the same as the meat brain, or that its experiences are the same as the meat brain’s). The sorts of methods you’re talking about to stabilize the mind help with the first claim, but not with the second.
I’ve always struggled to make sense of the idea of brain uploading because it seems to rely on some sort of dualism. As a materialist, it seems obvious to me that a brain is a brain, a program that replicates the brain’s output is a program (and will perform its task more or less well but probably not perfectly), and the two are not the same.
I think the crux of it is here:
I think that basically everything in the universe can be considered a program/computation, but I also think the notion of a program/computation is quite trivial.
More substantively, I think it might be possible to replicate at least some parts of the physical world with future computers that have what is called physical universality, where they can manipulate the physical world essentially arbitrarily.
So I don’t view brains and computer programs as being of 2 different types, but rather as the same type: a program/computation.
See below for some intuition as to why.
http://www.amirrorclear.net/academic/ideas/simulation/index.html
The ancients considered everything to be the work of spirits. The medievals considered the cosmos to be a kingdom. Early moderns likened the universe to a machine. Every age has its dominant metaphors. All of them are oversimplifications of a more complex truth.
There are no properties of the brain which define that brain as “you,” except for the program that it runs.
Suppose you had an identical twin with identical genes and, until very recently, an identical history. From the perspective of anyone else, you’re similar enough to be interchangeable with each other. But from your perspective, the twin would be a different person.
The brain is you, full stop. It isn’t running a computer program; its hardware and software are inseparable and developed together over the course of your life. In other words, the hardware/software distinction doesn’t apply to brains.
If you’re trying to model a system, and the results of your model are extremely sensitive to minuscule data errors (i.e., the system is chaotic), and there is no practical way to obtain extremely accurate data, then chaos theory limits the usefulness of the model.

This seems like a very underpowered sentence that doesn’t actually need chaos theory. How do you know you’re in a system that is chaotic, as opposed to having shitty sensors or a terrible model? What do you get from the theory, as opposed to the empirical result that your predictions only stay accurate for so long?

[For everyone else: Hastings is addressing these questions more directly. But I’m still interested in what Jay or anyone else has to say.]
Let’s try again. Chaotic systems usually don’t do exactly what you want them to, and they almost never do the right thing 1000 times in a row. If you model a system using ordinary modeling techniques, chaos theory can tell you whether the system is going to be finicky and unreliable (in a specific way). This saves you the trouble of actually building a system that won’t work reliably. Basically, it marks off certain areas of solution space as not viable.
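To gesture at how the theory “tells you” this in practice: the usual diagnostic is the largest Lyapunov exponent of your model; if it’s positive, the system is finicky in exactly this sense. A minimal numerical sketch, using the logistic map as a stand-in for “your model” (its exponent is known to be ln 2, so we can check the method):

import math

# For a 1-D map, the Lyapunov exponent is the orbit-average of ln|f'(x)|.
def f(x):
    return 4.0 * x * (1.0 - x)    # logistic map at r = 4
def df(x):
    return abs(4.0 - 8.0 * x)     # |f'(x)|

x, total, n = 0.2, 0.0, 5000
for _ in range(n):
    total += math.log(df(x))
    x = f(x)
print(total / n)                  # ~0.693 = ln 2 > 0: chaotic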
Also, there’s Lavarand. It turns out that lava lamps are chaotic.
For what it’s worth, I think you’re getting downvoted in part because what you write seems to indicate that you didn’t read the post.
Hastings clearly has more experience with chaos theory than I do (I upvoted his comment). I’m hoping that my rather simplistic grasp of the field might result in a simpler-to-understand answer (that’s still basically correct).
Chaos theory is a branch of math; it characterizes models (equations). If your model is terrible, it can’t help you. What the theory tells you is how wildly your model will react to small perturbations.
AFAIK the only invention made possible by chaos theory is random number generators. If a system you’re modeling is extremely chaotic, you can use its output for random numbers with confidence that nobody will ever be able to model or replicate your system with sufficient precision to reproduce its output.
That wasn’t well phrased. Oops.
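Returning to the random-number point: a toy sketch of harvesting bits from a chaotic map (real designs like Lavarand hash camera images of the lamps, which is a sturdier construction; this is only the idea in miniature):

# Each output bit costs an attacker roughly one more bit of precision
# in their estimate of the seed, thanks to the exponential divergence.
def chaotic_bits(seed, n):
    x, out = seed, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)   # logistic map at r = 4
        out.append(1 if x > 0.5 else 0)
    return out

print(chaotic_bits(0.123456, 32))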