My email address is endoself (at) yahoo (dot) com.
1−M to 1+M gets you 2/(2M) = 1/M, which goes to 0. −M to 2M gets you M/(3M) = 1/3.
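Read as averages of a ±1 utility density (positive to the right of the origin, negative to the left) over expanding intervals, both limits can be checked exactly. This is only a sketch of that reading; the helper name and the sign-density interpretation are my assumptions:

```python
from fractions import Fraction

def avg_utility(a, b):
    """Average of the density sign(x) over [a, b], assuming a < 0 < b:
    (length right of 0 minus length left of 0) divided by total length,
    which simplifies to (a + b) / (b - a)."""
    return Fraction(a + b, b - a)

for M in (10, 100, 1000):
    sym = avg_utility(1 - M, 1 + M)   # intervals centred near the origin: 1/M -> 0
    skew = avg_utility(-M, 2 * M)     # intervals twice as long on the right: 1/3
    print(M, sym, skew)
```

The same integrand, exhausted by two different families of intervals, converges to two different values, which is the sense in which the total is undefined.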
I seem to have forgotten to divide by M.
Why must there be a universe that corresponds to this situation?
So that the math can be as simple as possible. Solving simple cases is advisable.
I didn’t mean to ask why you chose this case; I was asking why you thought it corresponded to any possible world. I doubt any universe could be described by this model, because it is impossible to make predictions about. If you are an agent in this universe, what is the probability that you are found to the right of the y-axis? Unless the agents have unequal measure (for example, measure proportional to the complexity of locating them in the universe, as Wei Dai proposed), this probability is undefined, by the same argument that shows the utility is undefined.
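The same exhaustion-dependence shows up for the probability itself. A minimal sketch, assuming equal-measure agents spread uniformly along the line, so that "probability of being right of the y-axis" is just the fraction of the interval right of the origin:

```python
from fractions import Fraction

def p_right(a, b):
    """Probability that a uniformly chosen point of [a, b] lies right of 0,
    assuming a < 0 < b."""
    return Fraction(b, b - a)

for M in (10, 100, 1000):
    # symmetric exhaustion gives 1/2; right-skewed exhaustion gives 2/3,
    # at every scale, so the limiting probability depends on the exhaustion
    print(p_right(-M, M), p_right(-M, 2 * M))
```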
This could be the first step in proving that infinities are logically impossible, or it could be the first step in ruling out impossible infinities, until we are only left with ones that are easy to calculate utilities for. There are some infinities that seem possible: consider an infinite number of identical agents. This situation is indistinguishable from a single agent, yet has infinitely more moral value. This could be impossible, however, if identical agents have no more reality-fluid than single agents or, more generally, if a theory of mind or of physics, or, more likely, one of each, is developed that allows you to calculate the amount of reality-fluid from first principles.
In general, an infinity only seems to make sense for describing conscious observers if it can be given a probability measure. I know of two possible sets of axioms for a probability space. Cox’s theorem looks good, but it is unable to handle any infinite sums, even reasonable ones like those used in the Solomonoff prior or finite, well-defined integrals. There’s also Kolmogorov’s axioms, but they are not self-evident, so it is not certain that they can handle any possible situation.
Once you assign a probability measure to each observer-moment, it seems likely that the right way to calculate utility is to integrate the utility function over the probability space, times some overall, possibly infinite, constant representing the amount of reality-fluid. Of course this can’t be a normal integral, since utilities, probabilities, and the reality-fluid coefficient could all take infinite/infinitesimal values. That pdf might be a start on the utility side; the probability side seems harder, but that may just be because I haven’t read the paper on Cox’s theorem; and the reality-fluid problem is pretty close to the hard problem of consciousness, so that could take a while. This seems like it will take a lot of axiomatization, but I feel closer to solving this than when I started writing/researching this comment. Of course, if there is no need for a probability measure, much of this is negated.
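In the finite case, "integrate utility over the probability space, times a reality-fluid constant" reduces to an ordinary weighted sum. A toy sketch of that finite case only; the observer-moment names, measures, utilities, and constant are all made up for illustration:

```python
# Finite toy version of expected utility as an integral over observer-moments:
# EU = (reality-fluid constant) * sum over observer-moments of P(om) * U(om).
measure = {"om1": 0.5, "om2": 0.3, "om3": 0.2}    # probability of each observer-moment
utility = {"om1": 1.0, "om2": -2.0, "om3": 5.0}   # utility of each observer-moment
reality_fluid = 1.0  # possibly infinite in the general case; finite here

eu = reality_fluid * sum(measure[om] * utility[om] for om in measure)
print(eu)
```

The hard part the comment points at is exactly what this sketch evades: letting the measure, the utilities, or the constant take infinite or infinitesimal values.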
So, of course, the infinities for which probabilities are ill-defined are just those nasty infinities I was talking about where the expected utility is incalculable.
What we actually want to produce is a probability measure on the set of individual experiences that are copied, or whatever thing has moral value, not on single instantiations of those experiences. We can do so with a limiting sequence of probability measures of the whole thing, but probably not a single measure.
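A sketch of what such a limiting sequence might look like, under an assumed (made-up) pattern of instantiations: no single uniform measure exists on infinitely many instantiations, but truncated uniform measures can still assign a convergent probability to an experience-type:

```python
from fractions import Fraction

def p_A(n):
    """Probability of experience-type A under the uniform measure on the
    first n instantiations, assuming (hypothetically) that copies of A
    occupy every third slot."""
    return Fraction(n // 3, n)

for n in (10, 100, 1000):
    print(n, p_A(n))   # converges toward 1/3 as the truncation grows
```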
This will probably lead to a situation where SIA turns into SSA.
What bothers me about this line of argument is that, according to UDT, there’s nothing fundamental about probabilities. So why should undefined probabilities be more convincing than undefined expected utilities?
We still need something very much like a probability measure to compute our expected utility function.
Kolmogorov should be what you want. A Kolmogorov probability measure is just a measure where the measure of the whole space is 1. Is there something non-self-evident or non-robust about that? It’s just real analysis.
I think the whole integral can probably be contained within real-analytic conceptions. For example, you can use an alternate definition of measurable sets.
I disagree with your interpretation of UDT. UDT says that, when making choices, you should evaluate all consequences of your choices, not just those that are causally connected to whatever object is instantiating your algorithm. However, while probabilities of different experiences are part of our optimization criteria, they do not need to play a role in the theory of optimization in general. I think we should determine more concretely whether these probabilities exist, but their absence from UDT is not very strong evidence against them.
The difference between SIA and SSA is essentially an overall factor for each universe describing its total reality-fluid. Under certain infinite models, there could be real-valued ratios.
The thing that worries me second-most about standard measure theory is infinitesimals. A Kolmogorov measure simply cannot handle a case with a finite measure of agents with finite utility and an infinitesimal measure of agents with an infinite utility.
The thing that worries me most about standard measure theory is my own uncertainty. Until I have time to read more deeply about it, I cannot be sure whether a surprise even bigger than infinitesimals is waiting for me.