A finite amount of mass contains a finite amount of information; this is physics, not to be overcome by Fun Theory. I may be mistaken about the amount of mass the Universe contains, in which case my upper bound on utility would be wrong; but unless you are asserting that there is infinite mass, or else that there are an infinite number of ways to arrange a finite number of quarks in a bounded space, there must exist some upper bound. My understanding of Fun Theory is that it is intended to be deployed against people who consider 1000-year lifespans and say “But wouldn’t you get bored?”, rather than an assertion that there is actually infinite Fun to be had. But when dealing with Omega, your thought experiment had better take the physical limits into account!
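To put a rough number on “finite information”, here is a back-of-the-envelope sketch using the Bekenstein bound S ≤ 2π k_B R E / (ħ c); the mass and radius fed in are only illustrative round figures for the observable universe, not claims about its actual contents.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K

def bekenstein_bound_bits(mass_kg, radius_m):
    """Upper bound on the information (bits) storable in a sphere of radius
    radius_m containing mass-energy mass_kg*c^2, from the Bekenstein bound
    S <= 2*pi*k_B*R*E / (hbar*c)."""
    energy = mass_kg * c ** 2
    entropy = 2 * math.pi * k_B * radius_m * energy / (hbar * c)
    return entropy / (k_B * math.log(2))  # convert to bits

# Illustrative round numbers only: ~1e53 kg of matter inside a ~4.4e26 m radius.
print(f"{bekenstein_bound_bits(1e53, 4.4e26):.2e} bits")
```

The result is enormous, but it is a finite number of bits.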
As for the self-modification, I gave my thoughts on this in my exchange with Wei_Dai; briefly, try doing Rationalist Taboo on “self-modify” and see what happens to your argument.
The assertion is necessary if you are reflectively consistent and you don’t take Omega up on offer n. If a future copy of you is likely to regret a decision not to take Omega up again, then the decision was the very definition of reflective inconsistency.
So your scenario is that I stopped at some arbitrary point in the garden path; my future self has now reached the end of his vastly extended lifespan; and he wishes he’d taken Omega up on just one more offer? Ok, that’s a regret, right enough. But I invite you to consider the other scenario where I did accept Omega’s next offer, the randomness did not go my way, and I have an hour left to live, and regret not stopping one offer earlier. These scenarios have to be given some sort of weighting in my decision; the one that treats the numbers as plain arithmetic isn’t necessarily any better than the one that accepts immediacy bias. They are both points in decision-algorithm space. The inconsistency that turns you into a money pump lies in trying to apply both.
The fact that Omega is offering unbounded lifespans implies that the universe isn’t going to crunch or rip in any finite time. Excluding those scenarios leaves you with a universe where the Hubble radius tends to infinity, which makes negentropy (information) unbounded above.
Self-modification is just an optimisation process over the design space of agents, run by some agent, with the constraint that only one agent can be instantiated at any time.
But I invite you to consider the other scenario where I did accept Omega’s next offer, the randomness did not go my way, and I have an hour left to live, and regret not stopping one offer earlier.
And regardless of what n is, only a 10^-6 portion of the (n-1)-survivors regret taking decision n. If you’re in the block that’s killed off by decision 1, then decisions 2,3,4,… are all irrelevant to you. Clearly, attempting to apply both, and thus applying neither consistently, leads to money pumping.
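A minimal sketch of that arithmetic, assuming the 10^-6 figure above is the per-offer death risk:

```python
def surviving_fraction(n, per_offer_risk=1e-6):
    """Fraction of original selves still alive after accepting n offers,
    assuming each offer independently kills per_offer_risk of the survivors."""
    return (1 - per_offer_risk) ** n

for n in (1, 10, 1_000, 1_000_000):
    alive = surviving_fraction(n)
    # Of those alive after offer n-1, only per_offer_risk regret taking offer n.
    print(f"after {n:>9,} offers: {alive:.6f} of the original copies survive")
```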
Omega’s offers are not unbounded, they are merely very large. Further, even an infinite time would not imply an infinite amount of information, because information is a property of mass; adding more time just means that some configurations of quarks are going to be repeated. You are free to argue that it’s still Fun to ‘discover’ the same theorem for the second time, with no memory of the first time you did so, of course; but it looks to me as though that way Orgasmium lies. On the plus side, Orgasmium has no regrets.
Clearly, attempting to apply both, and thus applying neither consistently, leads to money pumping.
Yes, that’s what I said; my prescription is to choose an arbitrary cutoff point, say where your survival probability drops to 75% - the difference between 80% and 75% seems ‘feelable’. You can treat this as all one decision, and consider that 1 in 20 future yous are going to strongly regret starting down the path at all; these are numbers that our brains can work with.
Failing an arbitrary cutoff, what is your alternative? Do you in fact accept the microscopic chance of a fantastically huge lifetime?
Omega’s offers are unbounded; for any finite bound there is a finite n for which 10^^n exceeds it. If the Hubble distance (edge of the observable universe) recedes, then even with a fixed quantity of mass-energy the quantity of storable data increases. You have more potential configurations.
Yes, in the hypothetical situation given; I can’t consistently assert anything else. In any “real” analogue there are many issues I’d take with the premises, and I would likely take Omega up only a few times, with the intent of gaining Omega-style abilities.
I believe you are confused about what ‘bounded’ means. Possibly you are thinking of the Busy Beaver function, which is not bounded by any computable function; this does not mean it is not bounded, merely that we cannot compute the bound on a Turing machine.
Further, ‘unbounded’ does not mean ‘infinite’; it means ‘can be made arbitrarily large’. Omega, however, has not taken this procedure to infinity; he has made a finite number of offers, hence the final lifespan is finite. Don’t take the limit at infinity where it is not required!
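To make the distinction concrete, a small sketch: 10^^n is a power tower of n tens; every such tower is finite, yet no single finite bound covers them all.

```python
def tetration(base, height):
    """base^^height: a power tower of `height` copies of `base`.
    Every value is finite, but the sequence exceeds any fixed bound."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(tetration(10, 1))   # 10
print(tetration(10, 2))   # 10000000000
# tetration(10, 3) is 10**(10**10): still finite, but with ten billion
# decimal digits it is far too large to print here.
```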
Finally, you are mistaken about the effects of increasing the available space: Even in a globally flat spacetime, it requires energy to move particles apart; consequently there is a maximum volume available for information storage which depends on the total energy, not on the ‘size’ of the spacetime. Consider the case of two gravitationally-attracted particles with fixed energy. There is only one piece of information in this universe: You may express it as the distance between the particles, the kinetic energy of one particle, or the potential energy of the system; but the size of the universe does not matter.
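A minimal numerical sketch of that toy universe, treating the pair with Newtonian gravity and purely illustrative numbers; with the total energy fixed, picking the separation fixes the kinetic energy, and separations beyond the energy budget are simply not available states:

```python
G = 6.674e-11  # Newtonian gravitational constant, SI units

def kinetic_energy(total_energy, m1, m2, r):
    """Kinetic energy of a two-particle system with fixed total energy:
    total_energy = KE - G*m1*m2/r, so choosing the separation r fixes KE.
    Returns None for separations the fixed energy cannot reach."""
    ke = total_energy + G * m1 * m2 / r
    return ke if ke >= 0 else None

# Arbitrary illustrative values: two 1 kg masses, total energy -1e-11 J.
for r in (1.0, 2.0, 5.0, 10.0):
    print(r, kinetic_energy(-1e-11, 1.0, 1.0, r))
```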
No, I mean quite simply that there is no finite bound that holds for all n; if the universe were to collapse/rip in a finite time t, then Omega could only offer you the deal some fixed number of times. We seem to disagree about how many times Omega would offer this deal—I read the OP as Omega being willing to offer it as many times as desired.
AFAIK (I’m only a mathematician), your example only holds if the total energy of the system is negative. In a more complicated universe, having a subset of the universe with positive total energy is not unreasonable, at which point it could be distributed arbitrarily over any flat spacetime. Consider a photon moving away from a black hole; if the universe gets larger the set of possible distances increases.
I think we are both confused on what “increasing the size of the Universe” means. Consider first a flat spacetime; there is no spatial limit—space coordinates may take any value. If you know the distance of the photon from the black hole (and the other masses influencing it), you know its energy, and vice-versa. Consequently the distance is not an independent variable. Knowing the initial energy of the system tells you how many states are available; all you can do is redistribute the energy between kinetic and potential. In this universe “increasing the size” is meaningless; you can already travel to infinity.
Now consider a closed spacetime (and your “only a mathematician” seems unnecessarily modest to me; this is an area of physics where I wish to tread carefully and consult with a mathematician whenever possible). Here the distance between photon and black hole is limited, because the universe “wraps around”; travel far enough and you come back to your starting point. It follows that some of the high-distance, low-energy states available in the flat case are not available here, and you can indeed increase the information by decreasing the curvature.
Now, a closed spacetime will collapse, the time to collapse depending on the curvature, so every time Omega makes you an offer, he’s giving you information about the shape of the Universe: It becomes flatter. This increases the number of states available at a given energy. But it cannot increase above the bound imposed by a completely flat spacetime! (I’m not sure what happens in an open Universe, but since it’ll rip apart in finite time I do not think we need to care.) So, yes, whenever Omega gives you a new offer he increases your estimate of the total information in the Universe (at fixed energy), but he cannot increase it without bound—your estimate should go asymptotically towards the flat-Universe limit.
With that said, I suppose Omega could offer, instead or additionally, to increase the information available by pumping in energy from outside the Universe, on some similarly increasing scale—in effect this tells me that the energy in the Universe, which I needed fixed to bound my utility function, is not in fact fixed. In that case I don’t know what to do. :-) But on the plus side, at least now Omega is breaking conservation of energy rather than merely giving me new information within known physics, so perhaps I’m entitled to consider the offers a bit less plausible?
I think we’re talking in slightly different terms. I was thinking of the Hubble radius, which in the limit tracks the open/flat/closed classification iff there is no cosmological constant (dark energy). This does not seem to be the case. With a cosmological constant, the Hubble radius is relevant because of results on black hole entropy, which would limit the entropy content of a patch of the universe with a finitely bounded Hubble radius. I was referring to the recession of the boundary as the “expansion of the universe”. The two work roughly similarly in cases where there is a cosmological constant.
I have no formal training in cosmology. In a flat spacetime as you suggest, the number of potential states seems infinite; you have an infinite maximum distance and can have any multiple of the Planck length as a separation. In a flat universe, your causal boundary recedes at a constant c, and thus peak entropy in the patch containing your past light cone goes as t^2. It is not clear that there is a finite bound on the whole of a flat spacetime. I agree entirely on your closed/open comments.
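A rough sketch of that scaling, assuming a causal radius R = c·t and the black-hole area bound S/k_B = A/(4·l_P²) referred to above; the epochs fed in are arbitrary:

```python
import math

c = 2.99792458e8          # speed of light, m/s
l_planck = 1.616255e-35   # Planck length, m

def horizon_entropy_bound_bits(t_seconds):
    """Area-law bound on the entropy inside a causal patch of radius R = c*t:
    S / k_B = A / (4 * l_P^2), converted to bits.  Scales as t^2, so it is
    finite at every moment but grows without bound as t increases."""
    radius = c * t_seconds
    area = 4 * math.pi * radius ** 2
    return area / (4 * l_planck ** 2) / math.log(2)

seconds_per_year = 3.156e7
for years in (1e9, 1e10, 1e11):  # illustrative epochs
    print(f"{years:.0e} yr: {horizon_entropy_bound_bits(years * seconds_per_year):.2e} bits")
```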
Omega could alternatively assert that the majority of the universe is open with a negative cosmological constant, which would be stable and would leave the energy within your cosmological horizon unbounded by any constant.

As to attacking the premises, I entirely agree.
In a flat spacetime as you suggest, the number of potential states seems infinite; you have an infinite maximum distance and can have any multiple of the Planck length as a separation.
No; the energy is quantized and finite, which disallows some distance-basis states.
But in any case, it does seem that the physical constraint on maximum fun does not apply to Omega, so I must concede that this doesn’t repair the paradox.
You said “information is a property of mass”.

Is this obvious? Consider two pebbles floating in space—do they indicate a distance? Could they indicate more information if they were floating further apart?
Is it possible that discoveries in physics could cause you to revise the claim “information is a property of mass”?
Two particles floating in space, with a given energy, have a given amount of entropy and therefore information. The entropy is the logarithm of the number of states available to them at that energy; if they move further apart, that is a conversion of kinetic to potential energy (I’m assuming they interact gravitationally, but other forces do not change the argument) which is already accounted for in the entropy. Therefore, no, the distance is not an additional piece of information, it has been counted in the number of possible states. You can only change the entropy by adding energy—this is equivalent to adding mass; I’ve been simplifying by saying ‘mass’ throughout.
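A toy illustration of the counting, with the energy quantised into whole units split between kinetic and potential; the separation is fixed by the potential share, so making more room available adds no states:

```python
def microstates(total_quanta):
    """Toy model: total energy comes in whole quanta, split between kinetic (k)
    and potential (p) with k + p = total_quanta.  Each split is one microstate;
    the separation is determined by p, so it is not an extra degree of freedom."""
    return [(k, total_quanta - k) for k in range(total_quanta + 1)]

states = microstates(5)
print(len(states), states)  # 6 states; giving the particles "more room" adds none
```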
As for discoveries in physics: I do not wish to say that this is impossible. But it would require new understandings in statistical mechanics and thermodynamics, which are by this point really well understood. You’re talking about something rather more unlikely than overthrowing general relativity, here; we know GR doesn’t work at all scales. In any case, I can only update on information I already have; if you bring in New Physics, you can justify anything.