in most[1] kinds of infinite worlds, values which are quantitative[2] become fanatical in a way, because they are constrained to:
making something valued occur with >0% frequency, or:
making something disvalued occur with exactly 0% frequency
“how is either possible?”—as a simple case, if there are infinite copies of one small world, then making either true in that small world snaps the overall quantity between 0 and infinity. then generalize this possibility to more-diverse worlds. (we can abstract away ‘infinity’ and write about presence-at-all in a diverse set)
(neither is true of the ‘set of everything’, only of ‘constrained’ infinite sets; i wrote about this in fn. 2)
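to spell out the arithmetic behind that ‘snap’ (my own formalization, not something from the original text): if a (dis)valued thing occurs with frequency \(f\) across the infinite copies, its total quantity is

\[ Q(f) = f \cdot \infty = \begin{cases} 0, & f = 0 \\ \infty, & f > 0, \end{cases} \]

so the only change that affects \(Q\) at all is crossing between \(f = 0\) and \(f > 0\); any change within \(f > 0\) leaves \(Q\) infinite.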
---
that was just an observation, pointing out that possibility and its difference from portional decreases. below is how i value this / some implications / how this could (weakly) be done in a very-diverse infinite world.
if i have option A: decrease x from 0.01% to 0%, and option B: decrease x from 50% to 1%, and if x is some extreme kind of suffering only caused by superintelligence or Boltzmann-brain events (i’ll call this hypersuffering), then i prefer option A.
that’s contingent on the quantity being unaffected by option B. (i.e. on infinity of something being the same amount as half of infinity of that something, in reality).
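in the notation above (again my own restatement, not from the original):

\[ \text{A: } 0.0001 \cdot \infty = \infty \;\to\; 0 \cdot \infty = 0, \qquad \text{B: } 0.5 \cdot \infty = \infty \;\to\; 0.01 \cdot \infty = \infty, \]

so B changes the portion but, on the assumption above, leaves the quantity untouched.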
also, i might prefer B to some sufficiently low probability of A; i’m not sure how low. to me, ‘there being zero instead of infinite hypersuffering’ does need to be very improbable before it becomes outweighed by values about the isolated {‘shape’ of the universe / distribution of events}, but it’s plausible that it is that improbable in a very diverse world.
a superintelligent version of me would probably check: is this logically a thing i can cause? i.e., is there some clever trick i can use to make all superintelligent things who would cause hypersuffering instead not do it, despite some having robust decision theories, and despite the contradiction where such a trick could also be used to prevent me from using it? if so, do it; if not, pursue ‘portional’ values. that is to say, how much one values quantity vs portion-of-infinity probably does not imply different action in practice, apart from the initial action of making sure ASI is aligned to neither just quantitative nor just portional values (assuming the designer cares to some extent about both).
(also, even if there is such a clever trick to prevent it from being intentionally caused, it also has to not occur randomly (Boltzmann-brain-like), or the universe has to be able to be acausally influenced into making it not occur randomly (mentioned in this; a better explanation is below))
‘how to acausally influence non-agentic areas of physics?’: your choices are downstream of ‘the specification of reality from the beginning’. so you have at least a chance to influence that specification, if you (or an ASI) do the following (a rough code sketch of this is included below):
don’t compute that specification immediately, because that is itself an action (so correlated to it) and ‘locks it in’ from your frame.
instead, compute some space of what it would be, conditional on your future behavior being any from a wide space.
you’re hoping that you find some logical-worlds where the ‘specification’ is upstream of both that behavior from you and <other things in the universe that you care about, such as whether hypersuffering is ever present in non-agentic areas of physics>.
it could be that you won’t find any, though, e.g. if your future actions have close to no correlative influence. as such i’m not saying anything about whether this is logically likely to work, just that it’s possible.
if possible, a kind of this which prevents hypersuffering-causing ASIs from existing could remove the need to cleverly affect their choices.
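here is a very loose sketch of that procedure in code, only to make the structure explicit; every function and argument name in it is a hypothetical placeholder i’m introducing for illustration, since the original says nothing about how any of this would actually be computed:

```python
# hypothetical sketch (mine, not from the original): acausally influencing the
# 'specification of reality' by not computing it outright, and instead scoring
# one's own possible future behaviors by the specification-spaces they are
# correlated with. every function/argument name here is a placeholder.

def choose_policy(candidate_policies, compute_specification_space, value):
    """pick the future behavior whose correlated specifications look best.

    candidate_policies: a wide space of possible future behaviors.
    compute_specification_space(policy): the space of ways the initial
        specification of reality could be, conditional on the agent
        eventually behaving according to `policy` (the second step above).
    value(specification): how much the agent cares about what that
        specification implies, e.g. whether hypersuffering is ever present
        in non-agentic areas of physics.
    """
    best_policy, best_score = None, float("-inf")
    for policy in candidate_policies:
        # never compute 'the' specification directly (first step above);
        # only a space of candidates conditional on this policy.
        specs = list(compute_specification_space(policy))
        if not specs:
            # no logical-worlds found where the specification is upstream of
            # this behavior: no correlative influence via this policy.
            continue
        # score by the worst specification in the space (one possible choice).
        score = min(value(spec) for spec in specs)
        if score > best_score:
            best_policy, best_score = policy, score
    return best_policy
```

(whether to aggregate by worst case or something else is itself a values question; the point of the sketch is only the ordering: enumerate possible future behaviors first, and compute specification-spaces conditional on them, never the single specification outright.)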
---
[1] it is possible for an infinite set to have a finite amount of something, like the set of one 1 and infinite 0s, but i don’t mean this kind.

[2] a ‘quantitative value’ is one about quantities of things rather than ‘portions of infinity’/the thing that determines probability of observations in a quantitatively infinite world.

longer explanation copied from https://forum.effectivealtruism.org/posts/jGoExJpGgLnsNPKD8/does-ultimate-neartermism-via-eternal-inflation-dominate#zAp9JJnABYruJyhhD:

possible values respond differently to infinite quantities.

some, which care about quantity, will always be maxed out along all dimensions due to infinite quantity. (at least, unless something they (dis)value occurs with exactly 0% frequency, implying a quantity of 0 - which could, i think, be influenced by portional acausal influence in certain logically-possible circumstances. (i.e. maybe not the case in ‘actual reality’ if it’s infinite, but possible at least in some mathematically-definable infinite universes; as a trivial case, a set of infinite 1s contains no 0s. more fundamentally, an infinite set of universes can be a finitely diverse set occurring infinite times, or an infinitely diverse set where the diversity is constrained.))

other values might care about portion—that is, portion of / percentage-frequency within the infinite amount of worlds—the thing that determines the probability of an observation in an infinitely large world—rather than quantity. (e.g., i think my altruism still cares about this, though it’s really tragic that there’s infinite suffering.)

note this difference is separate from whether the agent conceptualizes the world as finite-increasing or infinite (or something else).
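one concrete illustration of the ‘infinitely diverse but constrained’ case (my own example, not from the original): the set of all even numbers is infinite and contains infinitely many distinct elements, yet

\[ f_{\text{odd}}\big(\{2, 4, 6, \dots\}\big) = 0, \qquad f_{\text{even}}\big(\{2, 4, 6, \dots\}\big) = 1, \]

where \(f\) denotes frequency within the set; ‘odd’ has exactly 0% frequency there, and so a quantity of 0, despite the set being infinite.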