I’m interested in how you would like to apply ADT in Big World cases, where e.g. there are infinitely many civilizations of observers.
I don’t know how to deal with infinite ethics, and I haven’t looked into that in detail. ADT was not designed with that in mind, and I think we must find specific ways of extending these types of theories to infinite situations. And once they are found, we can apply them to ADT (or to other theories).
Though on a personal note, I have to say that non-standard reals are cool.
I’m partial to the surreals myself. Every ordered field is a subfield of the surreals, though this is slightly cheating: the elements of a field form a set by definition, but there are also Fields, whose elements form a proper class. The surreals themselves are usually a Field, depending on your preferred flavour of uselessly abstract set theory axioms. We know that we want utilities to form an ordered field (or maybe a Field?), but Dedekind completeness for utilities seems to violate our intuitions about infinite ethics.
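To spell out that last step (my gloss, not a claim from the parent comment): any Dedekind-complete ordered field is isomorphic to the reals, and the reals are Archimedean, so a Dedekind-complete utility scale has no room for infinite utilities.

```latex
% Sketch: Dedekind completeness rules out infinite utilities.
% Any Dedekind-complete ordered field is isomorphic to \mathbb{R},
% and \mathbb{R} satisfies the Archimedean property:
\[
  \forall x \in \mathbb{R} \;\; \exists n \in \mathbb{N} :\; n > x ,
\]
% so no utility exceeds every finite utility, which is exactly the
% sort of thing infinite ethics seems to ask for.
```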
I haven’t studied the hyperreals, though. Is there any reason that you think they might be useful (the transfer principle?) or do you just find them cool as a mathematical structure?
They allow us to extend real-valued utilities, getting tractable “infinities” in at least some cases.
I think that hyperreals (or non-standard reals) would model a universe of finite, but non-standard, size. In some cases that would be a reasonable stand-in for an infinite universe. But in this case, I don’t think they help: the difficulty is still with SIA (or with decision theories/utility functions that have the same effect as SIA).
SIA will always shift weight (betting weight, decision weight) towards the biggest universe models available in a range of hypotheses. So if we have a range containing universe models of both standard finite and non-standard finite (hyperreal) sizes, SIA will always cause agents to bet on the non-standard ones and ignore the standard ones. And it will further cause betting on the “biggest” non-standard sizes allowed in the range. If our models have different orders of non-standard size (R, R^2, R^3, etc., where R is bigger than every standard real), then it will shift weight up to the highest-order models allowed. If there is no highest order in the range, then we get an improper probability distribution which can’t be normalised, not even with a non-standard normalisation constant. Finally, if our range contains any models of truly infinite size (an infinite cardinality of galaxies, say), then since these are bigger than all the non-standard finite sizes, the betting weight shifts entirely to those. So non-standard analysis may not help much.
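A toy sketch of that collapse, with invented priors and with sizes represented as polynomials in a formal infinite unit R (degree 0 is standard finite, higher degrees are higher non-standard orders); the numbers and the representation are mine, not anything from the thread:

```python
from fractions import Fraction

def leading(size):
    """Highest-order term (degree, coefficient) of a size, where a
    size is a dict {degree: coefficient} of a polynomial in R."""
    d = max(size)
    return d, size[d]

def sia_posterior(models):
    """models: {name: (prior, size)}; SIA weight = prior * size.
    Normalising by a constant of the top degree sends every
    lower-degree weight to 0 (it is infinitesimal by comparison)."""
    top = max(leading(size)[0] for _, size in models.values())
    weights = {name: (prior * leading(size)[1]
                      if leading(size)[0] == top else Fraction(0))
               for name, (prior, size) in models.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

models = {
    "standard, 10^12 galaxies": (Fraction(90, 100), {0: Fraction(10**12)}),
    "order R":                  (Fraction(9, 100),  {1: Fraction(1)}),
    "order R^2":                (Fraction(1, 100),  {2: Fraction(1)}),
}
print(sia_posterior(models))
# All posterior mass lands on "order R^2", despite its 1% prior.
```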
Generally, this is my biggest bugbear with SIA: it forces us to deal explicitly with the infinite case, but then ducks out when we get to that case (“can’t compare infinities in a meaningful way; sorry”).
Stuart, thanks for your comments on the infinite case by the way. I agree it is not obvious how to treat it, and one strategy is to avoid it completely. We could model infinite worlds by suitably big finite worlds (say with 3^^^3 galaxies) and assign zero prior probability to anything bigger. SIA will now shove all the weight up to those big finite worlds, but at least all utilities and populations are still finite, and all probability distributions normalise.
Roughly, it would go like this: we then take a limit construction, making the 3^^^3 a variable N and looking at the limiting decision as N goes towards infinity. If we’re still making the same decisions asymptotically, then we declare those the correct decisions in the strictly infinite case. Sounds promising.
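A minimal sketch of that limit construction, with one small world and one big world of size N, and made-up priors:

```python
def sia_posterior_big(N, prior_big=0.01, small_size=10**6):
    """SIA posterior for the big world when the 'infinite' size is
    capped at N (the role played by 3^^^3 above): weight = prior * size."""
    w_big = prior_big * N
    w_small = (1 - prior_big) * small_size
    return w_big / (w_big + w_small)

for N in (10**9, 10**12, 10**15):
    print(N, sia_posterior_big(N))
# The posterior climbs towards 1, so the limiting (asymptotic)
# decision is to bet on the big world, whatever its prior.
```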
There are a couple of problems though. One is that this version of SIA will favour models where most star systems develop civilizations of observers, and then most of those civilizations go Doom. The reason is that such models maximize the number of observers who observe a universe like what we are observing right now, and hence become favoured by SIA. We still get a Doomsday argument.
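To make the counting explicit, here is a toy version with invented numbers and equal priors on the two models; the point is just that the SIA weight for our evidence tracks the count of early-stage observers, and late-stage observers don’t contribute:

```python
STARS = 1e11  # star systems per model (a made-up figure)

def observers_like_us(frac_civ, early_per_civ):
    """Observers whose evidence matches ours. Only early-stage
    observers count: late-stage ones see an older, different
    universe, so they add nothing to the SIA weight for our data."""
    return STARS * frac_civ * early_per_civ

# Model A: civilizations rare (1 in 10^4 systems) but long-lived.
# Model B: civilizations common (1 in 2 systems) but Dooming early.
w_a = observers_like_us(1e-4, 1e10)
w_b = observers_like_us(0.5, 1e10)

print(w_b / (w_a + w_b))  # ~0.9998: SIA bets on the Doom-heavy model.
```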
A second problem is that this version will favour universe models which are even weirder, and packed very densely with observers (inside computer simulations, where the computers fill the whole of space and time). In those dense models, there are many more agents with experiences like ours (they are part of simulations of sparsely-populated universes) than there are agents in truly sparse universes. So SIA now implies a form of the simulation argument, and an external universe outside the simulation which looks very different from the one inside.
And now, perhaps even worse, among these dense worlds, some will use their simulation resources simulating mostly long-lived civilizations, while others will use the same resources simulating mostly short-lived civilizations (every time a simulation Dooms, they start another one). So dense worlds which simulate short-lived civilizations spend more of their resources simulating people like us, and generally contain more agents with experiences like ours, than dense worlds which simulate long-lived civilizations. So we STILL get a Doomsday Argument, on top of the simulation argument.
Hyper-reals ugly. No good model. Use Levi-Civita field instead. Hahn series also ok.
(me know math, but me no know grammar)
Surreals also allow this and they are more general, as the hyperreals and the Levi-Civita field are subfields of the surreals.
OK, the “surreals” contain the transfinite ordinals, hence they contain the infinite cardinals as well. So surreals can indeed model universes of strictly infinite size, i.e. not just non-standard finite size.
I think the SIA problem of weighting towards the “largest possible” models still applies though. Suppose we have identified two models of an infinite universe; one says there are aleph0 galaxies; the other says there are aleph1 galaxies. Under SIA, the aleph1 model gets all the probability weight (or decision weight).
If we have a range of models with infinities of different cardinalities, and no largest cardinal (as in Zermelo-Fraenkel set theory), then the SIA probability function becomes wild, and in a certain sense vanishes completely. (Given any cardinal X, models of size X or smaller have zero probability.)
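One way to write down both points, treating the SIA weight formally as prior times population (the notation is mine, and dividing cardinals like this is only a formal shorthand):

```latex
% Two-model case: all the weight goes to the bigger cardinal,
% since p_0 \aleph_0 is negligible next to p_1 \aleph_1:
\[
  P(\aleph_1\text{-model} \mid \text{our evidence})
  \;=\; \frac{p_1 \aleph_1}{p_0 \aleph_0 + p_1 \aleph_1}
  \;=\; 1 .
\]
% With no largest cardinal: every model of size X is dominated by
% some model of size > X, so it gets weight 0, and no proper
% distribution remains.
```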
Yes, this doesn’t solve the problem of divergent expected utility; it just lets us say that our infinite expected utilities fail to converge, rather than only having arbitrarily large real utilities fail to converge.
I was under the impression that you could represent all hyperreals as taking limits (though not the other way around); is that wrong? They could still be useful if they simplify the analysis a good deal, though.