It seems like the only reasonable way to compute expected utility is to compute SSA or pseudo-SSA in Bigworld and Smallworld, thus computing the average utility in each infinite world, with an implied factor of omega.
Be careful about using an infinity that is not the limit of an infinite sequence; it might not be well defined.
An infinite, causally connected chain?
It depends on the specifics. This is a very underdefined structure.
A series of larger and larger worlds, with no single average utility?
A divergent expected utility would always be preferable to a convergent one. How to compare two divergent possible universes depends on the specifics of the divergence.
I will formalize my intuitions, in accordance with your first point, and thereby clarify what I’m talking about in the third point.
Suppose agents exist on the real line, and their utilities are real numbers. Intuitively, going from u(x)=1 to u(x)=2 is good, and going from u(x)=1 to u(x)=1+sin(x) is neutral.
The obvious way to formalize this is with the limiting process:
limit as M goes to infinity of ( the integral from -M to M of u(x)dx, divided by 2M )
This gives well-defined and nice answers to some situations but not others.
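The two intuitive cases can be checked numerically under this definition. A minimal sketch (the midpoint Riemann sum and the `running_average` helper are my own illustration, not part of the discussion):

```python
import math

def running_average(u, M, n=100_000):
    """Approximate (1/(2M)) * integral of u(x) dx over [-M, M] with a midpoint Riemann sum."""
    dx = 2 * M / n
    return sum(u(-M + (i + 0.5) * dx) for i in range(n)) * dx / (2 * M)

# u(x) = 2 averages to 2 for every window; u(x) = 1 + sin(x) averages to 1,
# because the sin part cancels over the symmetric window [-M, M].
for M in (10, 100, 1000):
    print(M, running_average(lambda x: 2.0, M), running_average(lambda x: 1 + math.sin(x), M))
```

Both averages stabilize as M grows, matching the intuition that a sin-shaped perturbation is neutral while raising everyone's utility is good.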
However, you can construct functions u(x) for which ( the integral from -M to M of u(x)dx, divided by 2M ) is an arbitrary differentiable function of M, and in particular one that has no limit as M goes to infinity. Such a function need not diverge; it may, for instance, oscillate between 0 and 1 forever.
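One concrete instance of such a function: pick a target average f(M) = (1 + sin(ln M))/2, which cycles through [0, 1] forever, and set u(M) = f(M) + M·f'(M), so that the integral of u from 0 to M is exactly M·f(M). The specific f here is my own choice for illustration:

```python
import math

def u(x):
    # Even function engineered so that (1/(2M)) * integral of u over [-M, M]
    # equals (1 + sin(ln M)) / 2 exactly, via u(M) = f(M) + M * f'(M).
    if x == 0:
        return 0.5
    a = math.log(abs(x))
    return (1 + math.sin(a) + math.cos(a)) / 2

def running_average(M, n=200_000):
    """Midpoint Riemann sum for (1/(2M)) * integral of u over [-M, M]."""
    dx = 2 * M / n
    return sum(u(-M + (i + 0.5) * dx) for i in range(n)) * dx / (2 * M)

# The running average tracks (1 + sin(ln M)) / 2, which oscillates between
# 0 and 1 forever, so it has no limit as M goes to infinity.
for k in (1, 3, 5, 7):
    print(k, round(running_average(math.e ** k), 3), round((1 + math.sin(k)) / 2, 3))
```

The utility function is bounded, yet its running average never settles down, which is exactly the failure mode described above.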
I’m fairly certain that if I have a description of a single universe, and a description of another universe, I can produce a description in the same language of a universe consisting of the two, next to each other, with no causal connection. Depending on the description language, for some universes, I may or may not be able to tell that they cannot be written as the limit of a sum of finite universes.
For any decision-making process you’re using, I can probably tell you what an infinite causal chain looks like in it.
Suppose agents exist on the real line, and their utilities are real numbers. Intuitively, going from u(x)=1 to u(x)=2 is good, and going from u(x)=1 to u(x)=1+sin(x) is neutral.
Why must there be a universe that corresponds to this situation? The number of agents has cardinality beth-1. A suitable generalization of Pascal’s wager would require that we bet on the amount of utility having a larger cardinality, if that even makes sense. Of course, there is no maximum cardinality, but there is a maximum cardinality expressible by humans with a finite lifespan.
The obvious way to formalize this is with the limiting process:
limit as M goes to infinity of ( the integral from -M to M of u(x)dx, divided by 2M )
That is intuitively appealing, but it is arbitrary. Consider the step function that is 1 for positive agents and −1 for negative agents. Agent 0 can have a utility of 0 for symmetry, but we should not care about the utility of one agent out of infinity unless that agent is able to experience an infinity of utility. The limit of the integral from -M to M of u(x)dx/2M is 0, but the limit of the integral from 1-M to 1+M of u(x)dx/2M is 2 and the limit of the integral from -M to 2M of u(x)dx/3M is +infinity. While your case has some appealing symmetry, it is arbitrary to privilege it over these other integrals. This can also work with a sigmoid function, if you like continuity and differentiability.
I’m fairly certain that if I have a description of a single universe, and a description of another universe, I can produce a description in the same language of a universe consisting of the two, next to each other, with no causal connection.
Wouldn’t you just add the two functions, if you are talking about just the utilities, or run the (possibly hyper)computations in parallel, if you are talking about the whole universes?
Depending on the description language, for some universes, I may or may not be able to tell that they cannot be written as the limit of a sum of finite universes.
Yes, how to handle certain cases of infinite utility looks extremely non-obvious. It is also necessary.
Why must there be a universe that corresponds to this situation?
So that the math can be as simple as possible. Solving simple cases is advisable. beth-1 is easier to deal with in mathematical notation than beth-0, and anything bigger is so complicated that I have no idea.
The limit of the integral from -M to M of u(x)dx/2M is 0, but the limit of the integral from 1-M to 1+M of u(x)dx/2M is 2 and the limit of the integral from -M to 2M of u(x)dx/3M is +infinity. While your case has some appealing symmetry, it is arbitrary to privilege it over these other integrals. This can also work with a sigmoid function, if you like continuity and differentiability.
Actually, those mostly go to 0.
1-M to 1+M gets you 2/2M=1/M, which goes to 0. -M to 2M gets you M/3M=1/3.
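For the step function u(x) = sign(x), these window averages can be computed exactly rather than by limits: over any window [a, b] with a ≤ 0 ≤ b, the integral of sign(x) is just a + b, since the negative part contributes a and the positive part contributes b. A sketch of the corrected arithmetic (the helper name is my own):

```python
def integral_sign(a, b):
    """Exact integral of sign(x) over [a, b], assuming a <= 0 <= b."""
    assert a <= 0 <= b
    return a + b  # negative part contributes a, positive part contributes b

for M in (10, 100, 1000):
    shifted = integral_sign(1 - M, 1 + M) / (2 * M)  # = 2/(2M) = 1/M, which goes to 0
    lopsided = integral_sign(-M, 2 * M) / (3 * M)    # = M/(3M) = 1/3 for every M
    print(M, shifted, lopsided)
```

So the shifted symmetric window still converges to 0, and only the lopsided window [-M, 2M] picks out a different value, 1/3.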
This doesn’t matter, as even this method, the most appealing and simple, fails in some cases, and there do not appear to be other, better ones.
Wouldn’t you just add the two functions, if you are talking about just the utilities, or run the (possibly hyper)computations in parallel, if you are talking about the whole universes?
Yes, indeed. I would run the computations in parallel, stick the Bayes nets next to each other, add the functions from policies to utilities, etc. In the first two cases, I would be able to tell how many separate universes seem to exist. In the third, I would not.
Yes, how to handle certain cases of infinite utility looks extremely non-obvious. It is also necessary.
I agree. I have no idea how to do it. We have two options:
1. Find some valid argument why infinities are logically impossible, and worry only about the finite case.
2. Find some method for dealing with infinities.
Most people seem to assume 1, but I’m not sure why.
Oh, and I think I forgot to say earlier that I have the pdf but not your email address.
My email address is endoself (at) yahoo (dot) com.
1-M to 1+M gets you 2/2M=1/M, which goes to 0. -M to 2M gets you M/3M=1/3.
I seem to have forgotten to divide by M.
Why must there be a universe that corresponds to this situation?
So that the math can be as simple as possible. Solving simple cases is advisable.
I didn’t mean to ask why you chose this case; I was asking why you thought it corresponded to any possible world. I doubt any universe could be described by this model, because it is impossible to make predictions about. If you are an agent in this universe, what is the probability that you are found to the right of the y-axis? Unless agents have unequal measure (for instance, measure proportional to the complexity of locating them in the universe, as Wei Dai proposed), this probability is undefined, by the same argument that shows the utility is undefined.
This could be the first step in proving that infinities are logically impossible, or it could be the first step in ruling out impossible infinities until we are left only with ones whose utilities are easy to calculate. Some infinities do seem possible: consider an infinite number of identical agents. This situation is indistinguishable from a single agent, yet has infinitely more moral value. This could be impossible, however, if identical agents have no more reality-fluid than single agents or, more generally, if a theory of mind or of physics (or, more likely, one of each) is developed that allows you to calculate the amount of reality-fluid from first principles.
In general, an infinity only seems to make sense for describing conscious observers if it can be given a probability measure. I know of two possible sets of axioms for a probability space. Cox’s theorem looks good, but it is unable to handle any infinite sums, even reasonable ones like those used in the Solomonoff prior or in finite, well-defined integrals. There are also Kolmogorov’s axioms, but they are not self-evident, so it is not certain that they can handle every possible situation.
Once you assign a probability measure to each observer-moment, it seems likely that the right way to calculate utility is to integrate the utility function over the probability space, times some overall possibly infinite constant representing the amount of reality fluid. Of course this can’t be a normal integral, since utilities, probabilities, and the reality-fluid coefficient could all take infinite/infinitesimal values. That pdf might be a start on the utility side; the probability side seems harder, but that may just be because I haven’t read the paper on Cox’s theorem; and the reality-fluid problem is pretty close to the hard problem of consciousness, so that could take a while. This seems like it will take a lot of axiomatization, but I feel closer to solving this than when I started writing/researching this comment. Of course, if there is no need for a probability measure, much of this is negated.
So, of course, the infinities for which probabilities are ill-defined are just those nasty infinities I was talking about where the expected utility is incalculable.
What we actually want to produce is a probability measure on the set of individual experiences that are copied, or whatever thing has moral value, not on single instantiations of those experiences. We can do so with a limiting sequence of probability measures of the whole thing, but probably not a single measure.
This will probably lead to a situation where SIA turns into SSA.
What bothers me about this line of argument is that, according to UDT, there’s nothing fundamental about probabilities. So why should undefined probabilities be more convincing than undefined expected utilities?
We still need something very much like a probability measure to compute our expected utility function.
Kolmogorov should be what you want. A Kolmogorov probability measure is just a measure where the measure of the whole space is 1. Is there something non-self-evident or non-robust about that? It’s just real analysis.
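On a finite space those axioms are indeed elementary to state and check directly. A toy sketch (the three-point space and the masses are invented for illustration):

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

omega = {"a", "b", "c"}
masses = {"a": 0.5, "b": 0.3, "c": 0.2}
mu = {A: sum(masses[x] for x in A) for A in powerset(omega)}

# Kolmogorov's axioms, specialized to a finite space:
assert all(v >= 0 for v in mu.values())          # non-negativity
assert abs(mu[frozenset(omega)] - 1) < 1e-12     # measure of the whole space is 1
for A in powerset(omega):                        # additivity on disjoint sets
    for B in powerset(omega):
        if not (A & B):
            assert abs(mu[A | B] - (mu[A] + mu[B])) < 1e-12
```

The hard part being circled here is not this finite picture, but whether the same real-valued setup can accommodate infinitesimal measures and infinite utilities.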
I think the whole integral can probably be contained within real-analytic conceptions. For example, you can use an alternate definition of measurable sets.
I disagree with your interpretation of UDT. UDT says that, when making choices, you should evaluate all consequences of your choices, not just those that are causally connected to whatever object is instantiating your algorithm. However, while probabilities of different experiences are part of our optimization criteria, they do not need to play a role in the theory of optimization in general. I think we should determine more concretely whether these probabilities exist, but their absence from UDT is not very strong evidence against them.
The difference between SIA and SSA is essentially an overall factor for each universe describing its total reality-fluid. Under certain infinite models, there could be real-valued ratios.
The thing that worries me second-most about standard measure theory is infinitesimals. A Kolmogorov measure simply cannot handle a case with a finite measure of agents with finite utility and an infinitesimal measure of agents with infinite utility.
The thing that worries me most about standard measure theory is my own uncertainty. Until I have time to read more deeply about it, I cannot be sure whether a surprise even bigger than infinitesimals is waiting for me.