Under the premise of spacetime being a static and eternal thing, doesn’t any line of thought trying to answer this question necessarily render any intuitive notions of identity and the passing of time illusory?
Illusory in what sense? The underlying laws of our universe are time symmetric[1], but the second law of thermodynamics means that entropy increases as you move away in time from a set low-entropy point (the big bang). This means that predicting bits at t+1 from bits at t tends to be a much more difficult exercise than predicting bits at t−1 from bits at t. Large amounts of detailed information (entropy) about t−1 can, with some tricks, often be read off from t with little computational effort. “Memory” is one way to do this.
There are fewer ways to read off lots of information about t+1 cheaply from t. It is only possible in some very specific situations. You could, for example, look at a couch in a windowless room at time t, and commit to look at that exact couch from that exact position at that exact angle again one week later. This would let you pretty reliably infer a large batch of future visual bits. But such techniques do not tend to generalise well: you can’t do this for arbitrary future visual information the way you can use “memory” to do so for a wide class of past visual information. Thermodynamics means the trick only works one way, for bits that are closer in time to the big bang. To do the same for future bits is theoretically possible, but it typically requires different techniques and a far, far larger compute investment.
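The asymmetry can be seen in a toy model. This is just an illustrative sketch (the random walk and all its parameters are my own choices, not anything from the discussion above): particles share a low-entropy starting point, and long afterwards a cheap statistic of the present state still pins down that past condition, while individual future positions stay genuinely uncertain.

```python
import random

random.seed(0)

# Toy model (illustrative only): 1000 particles all start at x = 0, a
# low-entropy initial condition standing in for the big bang, then take
# T unbiased random-walk steps.
T, n = 200, 1000
walkers = [sum(random.choice((-1, 1)) for _ in range(T)) for _ in range(n)]

# Cheap inference about the past: the sample mean still pins down the
# shared starting point (close to 0), long after the microscopic
# details of each trajectory are gone.
mean_now = sum(walkers) / n

# Inference about the future: each particle's later positions are spread
# over a region of width ~sqrt(T); there is no comparably cheap statistic
# that recovers them, only expensive simulation of the dynamics.
mean_square = sum(x * x for x in walkers) / n  # grows like T
```

The point of the sketch is only the one-sidedness: retrodiction of the shared origin is a one-line aggregate, while prediction of any particular walker's future cannot be done better than quoting the growing spread.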
For an observer operating under such physics, it is useful to conceptualise the world as consisting of a “past” that has “already happened”, and is thus amenable to inference through techniques like memory; a “future” that has “yet to happen” and is more uncertain; and a kind of “present” where these regimes meet, close to which inference techniques for both regimes tend to be most efficient, and where information can be processed directly, because physics is local in time as well as space. Thus memory from t−ϵ with ϵ≪1 tends to be easier to protect from degradation than memory from t−1, computing bit predictions for t+ϵ tends to cost far less than for t+1, and so on.
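The claim about memory degradation can be made concrete with a toy noise model. A minimal sketch, assuming a bit-flip channel with per-tick flip probability p (a made-up parameter, not anything from the text): the chance that a stored bit still matches the original decays towards pure chance as the storage interval grows, so records of t−ϵ are cheaper to keep faithful than records of t−1.

```python
# Toy memory channel (assumed model): a stored bit flips with probability
# p on every tick. After k ticks it still matches the original with
# probability (1 + (1 - 2p)^k) / 2, which decays towards 0.5 (chance).
p = 0.05

def recall_accuracy(ticks):
    return 0.5 * (1 + (1 - 2 * p) ** ticks)

recent = recall_accuracy(1)     # memory of t - epsilon: 0.95
distant = recall_accuracy(50)   # memory of long ago: barely above chance
```

Keeping the distant record faithful would require active error correction, i.e. extra computational investment, which is the asymmetry the paragraph above describes.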
If you look at an intelligence inside such local physics, you will see that its internals at any point in time t tend to be busy computing stuff about t, the “present moment” which the computation can locally operate on and affect. That computation can often take in information from the “past”, especially the “recent past”, fairly easily, but it has a harder time taking in information from the “future”, to the point that doing so usually involves totally different algorithms which feel totally different. So it feels to the computation at t that “it exists”, “now”.
I wouldn’t really call this an illusion, except in the sense that “trees” are an illusion. A tree is fundamentally just some quantum field[2] excitations from a particular class of excited states inside a finite 4D volume. But its medium-scale, medium-range interactions with baryonic matter are often pretty well described by the human concept of “tree”.
Likewise, dividing time into a “future”, which has “yet to happen”, a “past”, which “has happened”, and a “present”, which “is happening”, is a leaky abstraction of the underlying laws about performing inference and decision computations in a physics with locality, the second law of thermodynamics, and a low-entropy state at some t_0 (big bang). It’s not precise, but a good approximation under many circumstances.
Imagine someone in the desert thinks they see an oasis. If it is actually a mirage, I’d say it makes sense to call the oasis an illusion. If it is an actual oasis, I don’t think the moniker “illusion” is apt just because oases are really an imperfect abstraction of particular quantum field configurations.
[1] Well, actually CPT symmetric, but the distinction doesn’t matter for the intuition here.

[2] Or, if you don’t believe in asymptotically safe quantum gravity, it might not really be quantum fields either. Substitute whatever your favoured guess for the true fundamental physical theory is.
The underlying laws of our universe are time symmetric[1], but the second law of thermodynamics means that entropy increases as you move away in time from a set low-entropy point (the big bang).
There is no one theory of time in physics. All that gives you is an asymmetry, a distinction between the past and future, within a static block universe. It doesn’t get you away from stasis to give you a dynamic “moving cursor” kind of present moment.
Likewise, dividing time into a “future”, which has “yet to happen”, a “past”, which “has happened”, and a “present”, which “is happening”, is a leaky abstraction of the underlying laws about performing inference and decision computations in a physics with locality, the second law of thermodynamics, and a low-entropy state at some t_0 (big bang). It’s not precise, but a good approximation under many circumstances.
So, where does the “present” come from specifically?
There are many popular hypotheses with all kinds of different implications related to time in some way, but those aren’t part of standard textbook physics. They’re proposed extensions of our current models. I’m talking about plain old general relativity + Standard Model QFT here. Spacetime is a four-dimensional manifold, fields in the SM Lagrangian have support on that manifold, and all of those fields have CPT symmetry. Don’t go asking for quantum gravity or other matters related to UV-completion.[1]
All that gives you is an asymmetry, a distinction between the past and future, within a static block universe. It doesn’t get you away from stasis to give you a dynamic “moving cursor” kind of present moment.
Combined with locality, the rule that things in spacetime can only affect things immediately adjacent to them, yeah, it does. Computations can only act on bits that are next to them in spacetime. To act on bits that are not adjacent, “channels” in spacetime have to connect those bits to the computation, carrying the information. So processing, at t, bits far removed from t is usually hard, due to thermodynamics, and takes place by proxy, using inference on bits near t that have mutual information with the past or future bits of interest. Thus computations at t effectively operate primarily on information near t, with everything else grasped from that local information. From the perspective of such a computation, that’s a “moving cursor”.
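The “channels” picture can be illustrated with a toy cellular automaton. A sketch under an assumed XOR update rule (my choice of dynamics, not anything from this discussion): flipping a single cell can only influence cells inside a light cone that widens by one site per tick, so information about distant bits has to be carried to a computation step by step through adjacent sites.

```python
# Toy local dynamics (illustrative): a 1D cellular automaton on a ring,
# where each cell becomes the XOR of its two neighbours. Any influence
# can only propagate to adjacent cells, one site per tick.
N = 101

def step(s):
    return [s[(i - 1) % N] ^ s[(i + 1) % N] for i in range(N)]

a = [0] * N
b = [0] * N
b[50] = 1  # identical worlds, except one bit flipped in the middle

for _ in range(10):
    a, b = step(a), step(b)

# After 10 ticks, every cell where the two worlds differ lies within
# 10 sites of the original flip: the effect is confined to a light cone.
affected = [i for i in range(N) if a[i] != b[i]]
```

Any computation sitting outside that cone simply cannot have registered the flip yet; it can only learn about it via a chain of adjacent interactions, which is the “channel” in the paragraph above.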
(I’d note though that asymmetry due to thermodynamics on its own could presumably already serve fine for distinguishing a “present”, even if there was no locality. In that case, the “cursor” would be a boundary to one side of which the computation loses a lot of its ability to act on bits. From the inside perspective, computations at t would be distinguishable from computations at t+1 and t−1 in such a universe, by what algorithms are used to calculate on specific bits, with algorithms that act on bits “after” t being more expensive at t≤t1. I don’t think self-aware algorithms in that world would have quite the same experience of “present” we do, but I’d guess they would have some “cursor-y” concept/sensation.
I’m not sure how hard it would be to construct a universe without even approximate locality, but with thermodynamics-like behaviour and the possibility of Turing-complete computation, though. Not sure if it is actually a coherent set-up. Maybe coupling to non-local points that hard just inevitably makes everything max-entropic everywhere and always.)
[1] I mean, do ask, by all means, but the answer probably won’t be relevant for this discussion, because you can get planet earth and the human brains on it thinking and perceiving a present moment from a plain old SM lattice QFT simulation. Everyone in that simulation quickly dies because the planet has no gravity and spins itself apart, but they sure are experiencing a present until then.[2]
[2] Except there also might not be a Born rule in the simulation, but let’s also ignore that, and just say we read off what’s happening in the high-amplitude parts of the simulated earth wave-function without caring that the amplitude is pretty much a superfluous pre-factor that doesn’t do anything in the computation.
Combined with locality, the rule that things in spacetime can only affect things immediately adjacent to them, yeah, it does.
Along a worldline, you have a bunch of activity at time T0 that is locally affecting stuff, a bunch of stuff at time T1 that is locally affecting stuff, and so on. They’re all present moments. None is distinguished as the present moment, even from the perspective of a single worldline.
In that case, the “cursor” would be a boundary to one side of which the computation loses a lot of its ability to act on bits.
There could be any number of such approximate “boundaries” along a worldline.
Except there also might not be a Born rule in the simulation,
Assuming you mean collapse—the Born rule is just a timeless relationship between probability and amplitude—there could be one in reality as well. That’s one of the reasons there isn’t a single model of time in physics. Collapse actually is a moving cursor.
Thank you, this has clarified the issue a lot for me regarding the time aspect of my problem, but the identity part still remains very elusive.