I know because of anthropics. It is a logical impossibility for more than a 1/3^^^3 fraction of individuals to have that power. You and I cannot both have power over the same thing, so the total amount of power is bounded, hopefully by the same population count we use to calculate anthropics.
Not in the least convenient possible world. What if someone told you that 3^^^3 copies of you were made before you must make your decision, and that their behaviour was highly correlated in the way that matters for UDT? What if the beings who would suffer had no consciousness, but would have moral worth as judged by you(r extrapolated self)? What if there was one being who was able to experience 3^^^3 times as much eudaimonia as everyone else? What if the self-indication assumption is right?
If you’re going to engage in motivated cognition at least consider the least convenient possible world.
Am I talking to Omega now, or just some random guy? I don’t understand what is being discussed. Please elaborate?
Then my expected utility would not be defined. There would be relatively simple worlds with arbitrarily many of them. I honestly don’t know what to do.
Then my expected utility would not be defined. There would be relatively simple agents with arbitrarily sensitive utilities.
Then I would certainly live in a world with infinitely many agents (or I would not live in any worlds with any probability), and the SIA would be meaningless.
My cognition is motivated by something else—by the desire to avoid infinities.
1) Sorry, I confused this with another problem; I meant some random guy.
2⁄3) Isn’t how your decision process handles infinities rather important? Is there any theorem corresponding to the von Neumann–Morgenstern utility theorem but without either version of axiom 3 (the axiom in question is sketched below)? I have been meaning to look into this, and depending on what I find I may do a top-level post about it. Have you heard of one?
edit: I found Fishburn, 1971, A Study of Lexicographic Expected Utility, Management Science. It’s behind a paywall at http://www.jstor.org/pss/2629309. Can anyone find a non-paywall version or email it to me?
4) Yeah, my fourth one doesn’t work. I really should have known better.
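For reference, here is a minimal sketch, in my own phrasing, of the axiom the comment above presumably means: in the usual numbering, the third von Neumann–Morgenstern axiom is continuity (equivalently, the Archimedean axiom), and dropping it is what opens the door to lexicographic representations like Fishburn's.

```latex
% Continuity / Archimedean axiom (the usual "axiom 3") for preferences over lotteries:
% if L is no better than M and M is no better than N, then some mixture of the
% extremes is exactly as good as M.
\[
L \preceq M \preceq N
\;\Longrightarrow\;
\exists\, p \in [0,1] \ \text{such that}\ pL + (1-p)N \sim M .
\]
% Without this axiom, preferences can be lexicographic (no finite trade-off
% between tiers of value), and a single real-valued expected utility no longer
% suffices; Fishburn (1971) studies vector-valued, lexicographic representations.
```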
Sometimes, infinities must be made rigorous rather than eliminated. I feel that, in this case, it’s worth a shot.
What worries me about infinities is, I suppose, the infinite Pascal’s mugging—whenever there’s a single infinite broken symmetry, nothing that happens in any finite world matters to determine the outcome.
This implies that all our thought should be devoted to infinite rather than finite worlds. And if all worlds are infinite, it looks like we need to do some form of SSA dealing with utility again.
This is all very convenient and not very rigorous, I agree. I cannot see a better way, but I agree that we should look. I will use university library powers to read that article and send it to you, but not right now.
I don’t see any way to avoid the infinite Pascal’s mugging conclusion. I think it is probably discouraged due to a history of association with bad arguments, and that the actual way to maximize the chance of infinite benefit will seem more acceptable.
I will use university library powers to read that article and send it to you, but not right now.
Thank you.
Consider an infinite universe consisting of infinitely many copies of Smallworld, and another consisting of infinitely many copies of Bigworld.
It seems like the only reasonable way to compute expected utility is to compute SSA or pseudo-SSA in Bigworld and Smallworld, thus computing the average utility in each infinite world, with an implied factor of omega.
Reasoning about infinite worlds that are made of several different, causally independent, finite components may produce an intuitively reasonable measure on finite worlds. But what about infinite worlds that are not composed in this manner? An infinite, causally connected chain? A series of larger and larger worlds, with no single average utility? How can we consider them?
It seems like the only reasonable way to compute expected utility is to compute SSA or pseudo-SSA in Bigworld and Smallworld, thus computing the average utility in each infinite world, with an implied factor of omega.
Be careful about using an infinity that is not the limit of an infinite sequence; it might not be well defined.
An infinite, causally connected chain?
It depends on the specifics. This is a very underdefined structure.
A series of larger and larger worlds, with no single average utility?
A divergent expected utility would always be preferable to a convergent one. How to compare two divergent possible universes depends on the specifics of the divergence.
I will formalize my intuitions, in accordance with your first point, and thereby clarify what I’m talking about in the third point.
Suppose agents exist on the real line, and their utilities are real numbers. Intuitively, going from u(x)=1 to u(x)=2 is good, and going from u(x)=1 to u(x)=1+sin(x) is neutral.
The obvious way to formalize this is with the limiting process:
limit as M goes to infinity of ( the integral from -M to M of u(x)dx, divided by 2M )
This gives well-defined and nice answers to some situations but not others.
However, you can construct functions u(x) where ( the integral from -M to M of u(x)dx, divided by 2M ) is an arbitrary differentiable function of M, in particular, one that has no limit as M goes to infinity. Such a function need not diverge to infinity; it may oscillate between 0 and 1, for instance.
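A quick numerical sketch of this limiting process (my addition, not part of the original comment; the three u(x) below are illustrative stand-ins): a constant converges to itself, a bounded oscillation like sin(x) averages out, and a slowly oscillating example has no limiting average at all.

```python
import numpy as np

def window_average(u, M, n=200001):
    """Approximate (1 / 2M) * integral of u(x) dx over [-M, M] via a uniform-grid mean."""
    x = np.linspace(-M, M, n)
    return np.mean(u(x))

u_const = lambda x: np.full_like(x, 2.0)                         # average -> 2
u_sin   = lambda x: 1.0 + np.sin(x)                              # average -> 1
u_slow  = lambda x: 0.5 + 0.5 * np.sin(np.log(1.0 + np.abs(x)))  # average wanders, no limit

for M in [10.0, 100.0, 1000.0, 10000.0]:
    print(M, window_average(u_const, M), window_average(u_sin, M), window_average(u_slow, M))
```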
I’m fairly certain that if I have a description of a single universe, and a description of another universe, I can produce a description in the same language of a universe consisting of the two, next to each other, with no causal connection. Depending on the description language, for some universes, I may or may not be able to tell that they cannot be written as the limit of a sum of finite universes.
For any decision-making process you’re using, I can probably tell you what an infinite causal chain looks like in it.
Suppose agents exist on the real line, and their utilities are real numbers. Intuitively, going from u(x)=1 to u(x)=2 is good, and going from u(x)=1 to u(x)=1+sin(x) is neutral.
Why must there be a universe that corresponds to this situation? The number of agents has cardinality beth-1. A suitable generalization of Pascal’s wager would require that we bet on the amount of utility having a larger cardinality, if that even makes sense. Of course, there is no maximum cardinality, but there is a maximum cardinality expressible by humans with a finite lifespan.
The obvious way to formalize this is with the limiting process:
limit as M goes to infinity of ( the integral from -M to M of u(x)dx, divided by 2M )
That is intuitively appealing, but it is arbitrary. Consider the step function that is 1 for positive agents and −1 for negative agents. Agent 0 can have a utility of 0 for symmetry, but we should not care about the utility of one agent out of infinity unless that agent is able to experience an infinity of utility. The limit of the integral from -M to M of u(x)dx/2M is 0, but the limit of the integral from 1-M to 1+M of u(x)dx/2M is 2 and the limit of the integral from -M to 2M of u(x)dx/3M is +infinity. While your case has some appealing symmetry, it is arbitrary to privilege it over these other integrals. This can also work with a sigmoid function, if you like continuity and differentiability.
I’m fairly certain that if I have a description of a single universe, and a description of another universe, I can produce a description in the same language of a universe consisting of the two, next to each other, with no causal connection.
Wouldn’t you just add the two functions, if you are talking about just the utilities, or run the (possibly hyper)computations in parallel, if you are talking about the whole universes?
Depending on the description language, for some universes, I may or may not be able to tell that they cannot be written as the limit of a sum of finite universes.
Yes, how to handle certain cases of infinite utility looks extremely non-obvious. It is also necessary.
Why must there be a universe that corresponds to this situation?
So that the math can be as simple as possible. Solving simple cases is advisable. beth-1 is easier to deal with in mathematical notation than beth-0, and anything bigger is so complicated that I have no idea.
The limit of the integral from -M to M of u(x)dx/2M is 0, but the limit of the integral from 1-M to 1+M of u(x)dx/2M is 2 and the limit of the integral from -M to 2M of u(x)dx/3M is +infinity. While your case has some appealing symmetry, it is arbitrary to privilege it over these other integrals. This can also work with a sigmoid function, if you like continuity and differentiability.
Actually, those mostly go to 0.
1-M to 1+M gets you 2/2M=1/M, which goes to 0. -M to 2M gets you M/3M=1/3.
This doesn’t matter, as even this method, the most appealing and simple, fails in some cases, and there do not appear to be other, better ones.
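A small numerical check of that correction (my own sketch; the step function is the one from the grandparent comment): the symmetric and shifted windows both average to 0, while the lopsided window averages to 1/3.

```python
import numpy as np

def windowed_average(u, a, b, norm, n=400001):
    """Approximate (integral of u over [a, b]) divided by norm."""
    x = np.linspace(a, b, n)
    return np.mean(u(x)) * (b - a) / norm

step = lambda x: np.sign(x)  # -1 for negative agents, +1 for positive agents, 0 at agent 0

for M in [10.0, 100.0, 1000.0]:
    symmetric = windowed_average(step, -M, M, 2 * M)          # -> 0
    shifted   = windowed_average(step, 1 - M, 1 + M, 2 * M)   # integral is 2, so 2/2M -> 0
    lopsided  = windowed_average(step, -M, 2 * M, 3 * M)      # integral is M, so M/3M -> 1/3
    print(M, symmetric, shifted, lopsided)
```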
Wouldn’t you just add the two functions, if you are talking about just the utilities, or run the (possibly hyper)computations in parallel, if you are talking about the whole universes?
Yes, indeed. I would run the computations in parallel, stick the Bayes nets next to each other, add the functions from policies to utilities, etc. In the first two cases, I would be able to tell how many separate universes seem to exist. In the third, I would not.
Yes, how to handle certain cases of infinite utility looks extremely non-obvious. It is also necessary.
I agree. I have no idea how to do it. We have two options:
1) Find some valid argument why infinities are logically impossible, and worry only about the finite case.
2) Find some method for dealing with infinities.
Most people seem to assume 1, but I’m not sure why.
Oh, and I think I forgot to say earlier that I have the pdf but not your email address.
My email address is endoself (at) yahoo (dot) com.
1-M to 1+M gets you 2/2M=1/M, which goes to 0. -M to 2M gets you M/3M=1/3.
I seem to have forgotten to divide by M.
Why must there be a universe that corresponds to this situation?
So that the math can be as simple as possible. Solving simple cases is advisable.
I didn’t mean to ask why you chose this case; I was asking why you thought it corresponded to any possible world. I doubt any universe could be described by this model, because it is impossible to make predictions about. If you are an agent in this universe, what is the probability that you are found to the right of the y-axis? Unless the agents do not have equal measure, such as if agents have measure proportional to the complexity of locating them in the universe, as Wei Dai proposed, this probability is undefined, due to the same argument that shows the utility is undefined.
This could be the first step in proving that infinities are logically impossible, or it could be the first step in ruling out impossible infinities, until we are only left with ones that are easy to calculate utilities for. There are some infinities that seem possible: consider an infinite number of identical agents. This situation is indistinguishable from a single agent, yet has infinitely more moral value. This could be impossible however, if identical agents have no more reality-fluid than single agents or, more generally, if a theory of mind or of physics, or, more likely, one of each, is developed that allows you to calculate the amount of reality-fluid from first principles.
In general, an infinity only seems to make sense for describing conscious observers if it can be given a probability measure. I know of two possible sets of axioms for a probability space. Cox’s theorem looks good, but it is unable to handle any infinite sums, even reasonable ones like those used in the Solomonoff prior or finite, well-defined integrals. There are also Kolmogorov’s axioms, but they are not self-evident, so it is not certain that they can handle any possible situation.
Once you assign a probability measure to each observer-moment, it seems likely that the right way to calculate utility is to integrate the utility function over the probability space, times some overall, possibly infinite, constant representing the amount of reality-fluid. Of course this can’t be a normal integral, since utilities, probabilities, and the reality-fluid coefficient could all take infinite/infinitesimal values. That pdf might be a start on the utility side; the probability side seems harder, but that may just be because I haven’t read the paper on Cox’s theorem; and the reality-fluid problem is pretty close to the hard problem of consciousness, so that could take a while. This seems like it will take a lot of axiomatization, but I feel closer to solving this than when I started writing/researching this comment. Of course, if there is no need for a probability measure, much of this is negated.
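To make the shape of that proposal concrete, here is a minimal discrete sketch (entirely my own illustration; the observer-moments, their measures, and the reality-fluid coefficient are made-up placeholders): weight each observer-moment's utility by its measure, sum, and scale by the reality-fluid constant.

```python
# Discrete stand-in for "integrate the utility function over the probability
# space, times a constant representing the amount of reality-fluid".
# All names and numbers here are hypothetical placeholders.
observer_moments = {
    # observer-moment id: (probability measure, utility)
    "om_1": (0.5, 1.0),
    "om_2": (0.3, 4.0),
    "om_3": (0.2, -2.0),
}
reality_fluid = 1.0  # in the proposal above this coefficient could be infinite or infinitesimal

total_measure = sum(p for p, _ in observer_moments.values())
assert abs(total_measure - 1.0) < 1e-9  # Kolmogorov-style normalization

expected_utility = reality_fluid * sum(p * u for p, u in observer_moments.values())
print(expected_utility)  # 0.5*1 + 0.3*4 + 0.2*(-2) = 1.3
```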
So, of course, the infinities for which probabilities are ill-defined are just those nasty infinities I was talking about where the expected utility is incalculable.
What we actually want to produce is a probability measure on the set of individual experiences that are copied, or whatever thing has moral value, not on single instantiations of those experiences. We can do so with a limiting sequence of probability measures of the whole thing, but probably not a single measure.
This will probably lead to a situation where SIA turns into SSA.
What bothers me about this line of argument is that, according to UDT, there’s nothing fundamental about probabilities. So why should undefined probabilities be more convincing than undefined expected utilities?
We still need something very much like a probability measure to compute our expected utility function.
Kolmogorov should be what you want. A Kolmogorov probability measure is just a measure where the measure of the whole space is 1. Is there something non-self-evident or non-robust about that? It’s just real analysis.
I think the whole integral can probably be contained within real-analytic conceptions. For example, you can use an alternate definition of measurable sets.
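For concreteness, the standard statement being referred to (a textbook formulation, not anything specific to this thread): a probability measure P on a sigma-algebra F over a sample space Omega satisfies

```latex
\begin{align*}
&P(A) \ge 0 \quad \text{for all } A \in \mathcal{F}, \\
&P(\Omega) = 1, \\
&P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{for pairwise disjoint } A_1, A_2, \ldots \in \mathcal{F}.
\end{align*}
```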
I disagree with your interpretation of UDT. UDT says that, when making choices, you should evaluate all consequences of your choices, not just those that are causally connected to whatever object is instantiating your algorithm. However, while probabilities of different experiences are part of our optimization criteria, they do not need to play a role in the theory of optimization in general. I think we should determine more concretely whether these probabilities exist, but their absence from UDT is not very strong evidence against them.
The difference between SIA and SSA is essentially an overall factor for each universe describing its total reality-fluid. Under certain infinite models, there could be real-valued ratios.
The thing that worries me second-most about standard measure theory is infinitesimals. A Kolmogorov measure simply cannot handle a case with a finite measure of agents with finite utility and an infinitesimal measure of agents with an infinite utility.
The thing that worries me most about standard measure theory is my own uncertainty. Until I have time to read more deeply about it, I cannot be sure whether a surprise even bigger than infinitesimals is waiting for me.
I’ve been thinking about Pascal’s Mugging with regard to decision making and Friendly AI design, and wanted to sum up my current thoughts below.
1a: Being Pascal Mugged once greatly increases the chance of being Pascal Mugged again.
1b: If the first mugger threatens 3^^^3 people, the next mugger can simply threaten 3^^^^3 people. The mugger after that can simply threaten 3^^^^^3 people. (A short sketch of this up-arrow escalation follows this comment.)
1c: It seems like you would have to take that into account as well. You could simply say to the mugger, “I’m sorry, but I must keep my money because the chance of there being a second Mugger who threatens one Knuth up-arrow more people than you is sufficiently high that I have to keep my money to protect those people against that threat, which is much more probable now that you have shown up.”
1d: Even if the Pascal Mugger threatens an infinite number of people with death, a second Pascal Mugger might threaten an infinite number of people with a slow, painful death. I still have what appears to be a plausible reason to not give the money.
1e: Assume the Pascal Mugger attempts to simply skip that and says that he will threaten me with infinite disutility. The second Pascalian Mugger could simply threaten me with an infinite disutility of a greater cardinality.
1f: Assume the Pascalian Mugger attempts to threaten me with an infinite disutility of the greatest possible infinite cardinality. A subsequent Pascalian Mugger could simply say “You have made a mathematical error in processing the previous threats, and you are going to make a mathematical error in processing future threats. The amount of any other past or future Pascal’s mugger threat is essentially 0 disutility compared to the amount of disutility I am threatening you with, which will be infinitely greater.”
I think this gets into the Berry Paradox when considering threats. “A threat infinitely worse than the greatest possible threat statable in one minute” can be stated in less than one minute, so it seems as if it is possible for a Pascal’s mugger to make a threat which is infinite and incalculable.
I am still working through the implications of this but I wanted to put down what I had so far to make sure I could avoid errors.
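As a side note on the escalation in 1b, here is a minimal sketch of Knuth's up-arrow notation (my addition; the actual quantities like 3^^^3 are far too large to ever compute, so only the smallest cases are evaluated):

```python
def up_arrow(a, n, b):
    """Knuth up-arrow: a followed by n arrows and then b. One arrow is exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = 3^^(3^^3) is a power tower of 7,625,597,484,987 threes; each extra
# arrow (each successive mugger's threat) dwarfs everything that came before.
```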
Surely this will not work in the least convenient world?
That is a good point, but my reading of that topic is that it was the least convenient possible world. I honestly do not see how it is possible to word a greatest threat.
Once someone actually says out loud what any particular threat is, you always seem to be vulnerable to someone coming along and generating a threat which, when taken in the context of threats you have heard, seems greater than any previous threat.
I mean, I suppose to make it more inconvenient for me, the Pascal Mugger could add “Oh, by the way, I’m going to KILL you afterward, regardless of your choice. You will find it impossible to consider another Pascal’s Mugger coming along and asking you for your money.”
“But what if the second Pascal’s Mugger resurrects me? I mean sure, it seems oddly improbable that he would do that just to demand 5 dollars (which I wouldn’t have if I gave them to you) when I was already dead, and frankly it seems odd to even consider resurrection at all, but it could happen with a non-zero chance!”
I mean yes, the idea of someone resurrecting you to mug you does seem completely, totally ridiculous. But the entire idea behind Pascal’s Mugging appears to be that we can’t throw out those tiny, tiny, out-of-the-way chances if there is a large enough threat backing them up.
So let’s think of another possible least convenient world: The Mugger is Omega or Nomega. He knows exactly what to say to convince me that despite the fact that right now it seems logical that a greater threat could be made later, somehow this is the greatest threat I will ever face in my entire life, and the concept of a greater threat than this is literally inconceivable.
Except now the scenario requires me to believe that I can make a choice to give the Mugger $5, but NOT make a choice to retain my belief that a larger threat exists later.
That doesn’t quite sound like a good formulation of an inconvenient world either. (I can make choices except when I can’t?) I will keep trying to think of a more inconvenient world once I get home and will post it here if I think of one.
Here’s another version:
You may be wrong about such threats. In thinking about this question, you reduce your chance of being wrong. This has a massive expected utility gain.
Conclusion: You should spend all your time thinking about this question.
Another version:
There’s a tiny probability of 3^^3 deaths. A tinier one of 3^^^3. A tinier one of 3^^^^3… Oops, looks like my expected utility is a divergent sum! I can’t use expected utility theory to figure out what to do any more!
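A toy sketch of that divergence (my own illustrative numbers: probabilities decaying like 2^-k against stakes growing like a power tower of height k; everything is kept in logarithms because the terms overflow immediately):

```python
import math

def log_tower(k, base=3.0):
    """Natural log of base^^k (a power tower of height k), computed in log space."""
    if k == 1:
        return math.log(base)
    return math.exp(log_tower(k - 1, base)) * math.log(base)

# k-th scenario: 3^^k deaths with probability 2^-k.  The log of the k-th term
# of the expected-death sum grows without bound, so the sum diverges.
for k in range(1, 5):
    log_term = log_tower(k) - k * math.log(2)
    print(k, log_term)
```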
Number one is a very good point, but I don’t think the conclusion would necessarily follow:
1: You may always need outside information to solve the problem. For instance, if I am looking for a key to Room 3 under the assumption that it is in Room 1, because I saw someone drop it in Room 1, I cannot search only Room 1 and never search Room 2 and still find the key in all cases, because there may be a way for the key to have moved to Room 2 without my knowledge.
For instance, as an example of something I might expect, a mouse could have grabbed it and quietly gone back to its nest in Room 2. Now, that’s something I would expect, so while searching for the key I should also note any mice I see. They might have moved it.
But I also have to have a method for handling situations I would not expect. Maybe the key activated a small device which moved it to Room 2 through a hidden passage in the wall and which then quietly self-destructed, leaving no trace of the device that is within my ability to detect in Room 1. (Plenty of traces were left in Room 2, but I can’t see Room 2 from Room 1.) That is an outside possibility, but it doesn’t break the laws of physics or require incomprehensible technology, so it could have happened.
2: There are also a large number of alternative thought experiments which have massive expected utility gain. Because of the halting problem, I can’t necessarily determine how long it is going to take to figure these problems out, if they can be figured out at all. If I allow myself to get stuck on any one problem, I may have picked an unsolvable one, while the NEXT problem with a massive expected utility gain is actually solvable. Under that logic, it’s still bad to spend all my time thinking about one particular question.
3: Thanks to parallelism, it is entirely possible for a program to run multiple different problems all at the same time. Even I can do this to a lesser extent. I can think about a philosophy problem and also eat at the same time. An FAI running into a Pascal’s Mugger could begin weighing the utility of giving in to the mugging, ignoring the mugging, attempting to knock out the mugger, or simply saying “Let me think about that. I will let you know when I have decided to give you the money or not and will get back to you,” all at the same time.
Having reviewed this discussion, I realize that I may just be restating the problem here. A lot of the proposed situations I’m discussing seem to have a “But what if this OTHER situation exists and the utilities indicate you pick the counterintuitive solution? But what if this OTHER situation exists and the utilities indicate you pick the intuitive solution?” structure.
To approach the problem more directly, maybe a better approach would be to consider Gödel’s incompleteness theorems. Quoting from Wikipedia:
“The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an “effective procedure” (essentially, a computer program) is capable of proving all facts about the natural numbers. For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system.”
If the FAI in question is considering utility in terms of natural numbers, it seems to make sense that there are things it should do to maximize utility that it would not be able to prove inside its system. To take that into account, we would have to design it to call for help in situations which had the appearance of being likely to be unprovable.
Based on Alan Turing’s proof that the halting problem is undecidable, if the FAI can only be treated as a Turing machine, it can’t establish whether or not some situations are provable. That seems to mean it would have to have some kind of hard cutoff at some point, to do something like “Call for help and do nothing but call for help if you have been running for one hour and can’t figure this out,” or alternatively “Take an action based on your current guess of the probabilities if you can’t figure this out after one hour, and if at least one of the two probabilities is still incalculable, choose randomly.”
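A minimal sketch of that second kind of hard cutoff (purely illustrative; the option list, the utility-estimation routine, and the one-hour limit are all hypothetical placeholders, not a real FAI design):

```python
import random
import time

TIME_LIMIT_SECONDS = 3600  # the "one hour" cutoff described above

def decide(options, estimate_utility, call_for_help=None):
    """Deliberate until the deadline, then fall back roughly as described above.

    estimate_utility(option, timeout) is a hypothetical routine that returns an
    expected-utility guess, or None if it could not produce one in time.
    """
    deadline = time.monotonic() + TIME_LIMIT_SECONDS
    guesses = {}
    for option in options:
        remaining = max(0.0, deadline - time.monotonic())
        guesses[option] = estimate_utility(option, timeout=remaining)

    if any(g is None for g in guesses.values()):
        if call_for_help is not None:
            call_for_help()               # the "call for help" variant
        return random.choice(options)     # still incalculable: choose randomly
    return max(guesses, key=guesses.get)  # otherwise act on the current best guess
```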
This is again getting a bit long, so I’ll stop writing for a bit to double check that this seems reasonable and that I didn’t miss something.
You seem to be going far afield. The technical conclusion of the first argument is that one should spend all one’s resources dealing with cases with infinite or very high utility, even if they are massively improbable. The way I said it earlier was imprecise.
When humans deal with a problem they can’t solve, they guess. It should not be difficult to build an AI that can solve everything humans can solve. I think the “solution” to Gödelization is a mathematical intuition module that finds rough guesses, not asking another agent. What special powers does the other agent have? Why can’t the AI just duplicate them?
Thinking about it more, I agree with you that I should have phrased the part about asking for help better.
Using humans as the other agents, just duplicating all powers available to humans seems like it would cause a noteworthy problem. Assume an AI researcher named Maria follows my understanding of your idea. She creates a Friendly AI and includes a critical block of code:
If UNFRIENDLY=TRUE then HALT;
(Un)friendliness isn’t binary, but it makes for a simpler example.
The AI (since it has duplicated the special powers of human agents) overwrites that block of code and replaces it with a CONTINUE command. Certainly its creator Maria could do that.
Well, clearly we can’t let the AI duplicate that PARTICULAR power. Even if it would never use it under any circumstances of normal processing (something which, given the halting problem, I don’t think it can actually guarantee), it’s very insecure for that power to be available to the AI if anyone were to try to hack the AI.
When you think about it, something like the Pascal’s Mugging formulation is itself a hack, at least in the sense that I can describe both as “Here is a string of letters and numbers from an untrusted source. By giving it to you for processing, I am attempting to get you to do something that harms you for my benefit.”
So if I attempt to give our Friendly AI security measures to protect it from hacks turning it into an Unfriendly AI, these security measures seem like they would require it to lose some powers that it would have if the code were more open.
I think it makes more sense to design an AI that is robust to hacks by virtue of its fundamental logic than to try to patch over the issues. I would not like to discuss this in detail, though; it doesn’t interest me.