I’m not sure I understand, but are you saying there’s a reason to view a progression of configurations in one direction over another? I’d always (or at least for a long time) essentially considered time a series of states (I believe I once defined the passage of time as a measurement of change), basically a more complicated version of, say, the graph of y=ln(x). Inverting the x-axis (taking the mirror image of the graph) would give you the same series of points in reverse, but all the basic rules would be maintained: the height above the x-axis would always be the natural log of the x-value. Similarly, inverting the progression of configurations would maintain all physical laws. This seems to me to fit all your posts on time up until this one.
This one, though, differs. Are you claiming in this post that one could invert the t-axis (or invert the progression of configurations in the timeless view) and obtain different physical laws, or at least violations of the ones in our given progression? If so, I see a reason to consider a certain order to things. Otherwise, it seems that when we say y=ln(x) is “increasing,” or describe a derivative at a point, we’re merely describing how the points relate to each other if we order them by increasing x-value, rather than claiming that the value of ln(5) somehow depends on the value of ln(4.98); both merely depend on the definition of the function. We can use derivatives to determine temporally local configurations just as we can use derivatives to approximate x-local function values. But as far as I can tell, in the end it is a configuration A that happens to define brains containing some information on another configuration B, which defined brains containing information on some configuration C, so we say C happened, then B, then A. Likewise, in the analogy, we have a set of points with no inherent order, and we read them in order of increasing x-value (which we generally place left-to-right), but the set isn’t inherently ordered like that: it’s just a set of y-values that depend on their respective x-values.
Short version: Are you saying there’s a physical reason to order the configurations C->B->A other than that A contains memories of B containing memories of C?
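The ln(x) analogy can be sketched concretely (a minimal illustration; the numbers and variable names here are mine, not anything from the posts): reading the same set of points in either direction leaves the rule each point satisfies untouched.

```python
import math

# Sample the graph of y = ln(x) as an ordered series of "states".
xs = [0.5 + 0.1 * i for i in range(50)]
points = [(x, math.log(x)) for x in xs]

# "Invert the progression": read the same set of points in reverse.
reversed_points = list(reversed(points))

# The local rule -- height above the x-axis equals ln(x) -- holds
# for every point, regardless of the order in which we traverse them.
assert all(abs(y - math.log(x)) < 1e-12 for x, y in points)
assert all(abs(y - math.log(x)) < 1e-12 for x, y in reversed_points)

# The two traversals contain exactly the same states.
assert set(points) == set(reversed_points)
```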
I’ve read this again (along with the rest of the Sequence up to it) and I think I have a better understanding of what it’s claiming. Inverting the axis of causality would require inverting the probabilities, such that an egg reforming is more likely than an egg breaking. It would also imply that our brains contain information on the ‘future’ and none on the ‘past’, meaning all our anticipations are about what led to the current state, not where the current state will lead.
All of this is internally consistent, but I see no reason to believe it gives us a “real” direction of causality. As far as I can tell, it just tells us that the direction in which we calculate our probabilities is the direction we don’t know.
Going from a low-entropy universe to a high-entropy universe seems more natural, but only because we calculate our probabilities in the low-to-high-entropy direction. If we based our probabilities on the same evidence perceived in the opposite direction, it would be low-to-high that seemed to need universes discarded and high-to-low that seemed natural.
...right?
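A toy sketch of the entropy point (entirely my own construction, not anything from the post): a two-chamber box that starts with all particles on one side. Run forward, the trajectory drifts toward the high-entropy macrostate; read backward, the very same list of states drifts the other way. Nothing about the states themselves picks out a direction.

```python
import math
import random

random.seed(0)

N = 50  # particles in a two-chamber box, all starting in the left chamber
left = N
trajectory = [left]

# Forward evolution: at each step a uniformly random particle hops to
# the other chamber, which drives the left-chamber count toward N/2.
for _ in range(500):
    if random.random() < left / N:
        left -= 1
    else:
        left += 1
    trajectory.append(left)

def entropy(k):
    # Log of the number of microstates with k particles on the left.
    return math.log(math.comb(N, k))

# Read low-to-high, entropy rises from its minimum at the start;
# read high-to-low, the same states show entropy falling.  Same
# physics either way -- only the reading direction differs.
assert max(entropy(k) for k in trajectory) > entropy(trajectory[0])
backward = trajectory[::-1]
assert sorted(backward) == sorted(trajectory)  # identical set of states
```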
Inverting the axis of causality would require inverting the probabilities, such that an egg reforming is more likely than an egg breaking.
I don’t think this is a coherent notion. If we “invert the probabilities” in some literal sense, then yes, the egg reforming is more likely than the egg breaking, but still more likely is the egg turning into an elephant.
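The objection can be made concrete with made-up numbers (the probabilities below are illustrative only): if “inverting the probabilities” means reversing the probability ranking, the least likely transition becomes the most likely one.

```python
# Hypothetical transition probabilities for a dropped egg (illustrative only).
forward = {
    "egg breaks": 0.90,
    "egg survives intact": 0.0999999,
    "egg turns into an elephant": 1e-7,
}

# One literal reading of "invert the probabilities": reverse the ranking,
# so the least probable outcome receives the largest probability.
ranked = sorted(forward, key=forward.get)        # least -> most likely
probs = sorted(forward.values(), reverse=True)   # most -> least likely
inverted = dict(zip(ranked, probs))

# Under this inversion the egg-to-elephant transition dominates, which is
# why reversed causality can't simply be "inverted" forward probabilities.
assert max(forward, key=forward.get) == "egg breaks"
assert max(inverted, key=inverted.get) == "egg turns into an elephant"
```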
Hm. This is true.
Perhaps it would be better to say “Perceiving states in opposite-to-conventional order would give us reason to assume probabilities entirely consistent with considering a causality in opposite-to-conventional order.”
Unless I’m missing something, the only reason to believe causality goes in the order that places our memory direction before our non-memory direction is that we base our probabilities on our memory.
What do you want out of a “real” direction of causality, other than the above?
Well, Eliezer seems to be claiming in this article that low-to-high is more valid than high-to-low, but I don’t see how they’re anything but both internally consistent.