One very critical factor you forgot is goal uncertainty! Your argument is actually even better than you think it is. If you assign an extremely low but non-zero probability that your utility function is unbounded, then you must still multiply it by infinity. And 1 is not a probability… There is no possible state that represents sufficient certainty that your utility function is bounded to justify not giving all your money to the mugger.
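Spelled out in symbols (my notation, a sketch of the argument above): if $p > 0$ is your credence that your utility function is unbounded and $q > 0$ your credence that the mugger delivers a payoff worth $U(N)$, then

$$\mathbb{E}[U(\text{pay})] \;\ge\; p \cdot q \cdot U(N) \;\longrightarrow\; \infty \quad \text{as } N \to \infty,$$

so no finite cost of paying can outweigh the gamble, and no credence short of exactly 0, i.e. absolute certainty, escapes it.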
I WOULD send you my money, except the SIAI is many orders of magnitude more likely than you to be a god (you didn’t specify it’d be instant or direct) and they have a similar offer, so I’m mugged into maximizing the amount of help given to the SIAI instead. But I DO bite the bullet of small probabilities of extremely large utilities, however repugnant and counter-intuitive it seems.
I suspect that calling your utility function itself into question like that isn’t valid in terms of expected utility calculations.
I think what you’re suggesting is that on top of our utility function we have some sort of meta-utility function that just says “maximize your utility function, whatever it is.” That would fall into your uncertainty trap, but I don’t think that is the case: I don’t think we have a meta-function like that, just our utility function.
If you were allowed to cast your entire utility function into doubt you would be completely paralyzed. How do you know you don’t have an unbounded utility function for paperclips? How do you know you don’t have an unbounded utility function for, and assign infinite utility to, the universe being exactly the way it would be if you never made a fully rational decision again and just went around your life on autopilot? The end result is that there are a number of possible courses of action that would all generate infinite utility, and no way to choose between them because infinity=infinity. The only reason your argument sounds logical is that you are allowing the questioning of the boundedness of the utility function, but not its contents.
I think that knowledge of your utility function is probably a basic, prerational thing, like deciding to use expected utility maximization and Bayesian updating in the first place. Attempting to insert your utility function itself into your calculations seems like a basic logical error.
You are, in this very post, guessing, saying that your utility function is PROBABLY this, and that you don’t think there’s uncertainty about it… That is, you display uncertainty about your utility function. Checkmate.
Also, “infinity=infinity” is not the case. Infinity is not a number, and the problem goes away if you use limits. Otherwise, yes, I probably even have unbounded but very slow-growing factors for a bunch of things like that.
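To illustrate the limits point (made-up stand-in functions, not anyone’s actual utility function): two unbounded utilities both “equal infinity” in the limit, but their difference can still have a definite sign for all large n, which is enough to rank the options.

```python
import math

# Two unbounded but slow-growing utility components (illustrative stand-ins).
u_log = lambda n: math.log(1 + n)   # unbounded, grows very slowly
u_sqrt = lambda n: math.sqrt(n)     # also unbounded, grows faster

# Naively, lim u_log = lim u_sqrt = "infinity", so the options look tied.
# But the difference u_sqrt(n) - u_log(n) grows without bound, so for every
# sufficiently large n the sqrt option wins -- the tie was an artifact of
# treating infinity as a number instead of taking limits.
for n in (10, 10**3, 10**6, 10**9):
    print(n, u_sqrt(n) - u_log(n))
```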
Even if I were uncertain about my utility function, you’re still wrong. The factor you are forgetting about is uncertainty. Under a bounded utility function, a promise of infinite utility scores no more than the bound, the same as some finite amount of utility. So you should always assume a bounded utility function, because unbounded utility functions don’t offer any more utility than bounded ones, and bounded ones outperform unbounded ones in situations like Pascal’s Mugging. There’s really no point in believing you have an unbounded function.
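A toy version of that comparison (every number here is invented for illustration): a saturating utility caps what the mugger’s promise can be worth, while a linear one lets the promise swamp any cost.

```python
import math

def bounded_u(payoff, bound=100.0, scale=1000.0):
    # Saturating utility: approaches `bound`, never exceeds it.
    return bound * (1.0 - math.exp(-payoff / scale))

def unbounded_u(payoff):
    # Linear utility: no cap.
    return payoff

p_deliver = 1e-50   # made-up probability the mugger pays out
promised = 1e100    # made-up astronomically large promised payoff
cost = 5.0          # made-up utility cost of handing over the money

for name, u in (("bounded", bounded_u), ("unbounded", unbounded_u)):
    expected_gain = p_deliver * u(promised) - cost
    print(name, "-> pay" if expected_gain > 0 else "-> refuse", expected_gain)
# bounded refuses (about 1e-48 - 5 < 0); unbounded pays (about 1e50 > 0).
```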
I just used the same logic you did. But the difference is that I assumed a bounded utility function was the default standard for comparison, whereas you assumed, for no good reason, that the unbounded one was.
I don’t know the proper way to calculate utility when you are uncertain about your utility function. But I know darn well that doing an expected-utility calculation about what utility each function will yield, and using one of the two functions that are currently in dispute to calculate that utility, is a crime against logic. If you do that you’re effectively assigning “having an unbounded function” a probability of 1. And 1 isn’t a probability.
Your formulation of “unbounded utility function always scores infinity so it always wins” is not the correct way to compare two utility functions under uncertainty. You could just as easily say “unbounded and bounded both score the same, except in Pascal’s mugging where bounded scores higher, so bounded always wins.”
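A sketch of that circularity (toy numbers again): whichever disputed function you appoint as the judge, its verdict on the mugging gamble simply restates its own assumption.

```python
# Each candidate function, used as the judge, evaluates the mugging gamble
# by its own lights -- so the "comparison" is decided before it starts.
candidates = {
    "bounded": lambda payoff: min(payoff, 100.0),  # hard cap at 100
    "unbounded": lambda payoff: payoff,            # linear, no cap
}
p_deliver, promised = 1e-50, 1e100  # made-up mugging odds and payoff

for judge_name, judge in candidates.items():
    gamble_value = p_deliver * judge(promised)
    print(f"judged by {judge_name}: gamble is worth {gamble_value}")
# judged by bounded:   1e-48 (negligible -> refuse the mugger)
# judged by unbounded: 1e+50 (enormous  -> pay the mugger)
# Each verdict is just the judging function's own assumption echoed back.
```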
I think that using expected utility calculations might be valid for things like deciding whether you assign any utility at all to an object or consequence. But for big meta-level questions about what your utility function even is, attempting to use them is a huge violation of logic.
If I am a god, then it will be instant and direct; also, I’ll break the laws of physics/the Matrix/the meta-Matrix/etc. to reach states the SIAI can’t reach. If I am a god and you do not give me any money, then I’ll change the universe into the most similar universe where SIAI’s probability of success is divided by 2.
Can I get money?
The probability of the AI doing all of that (hey, time travel) is still much much larger.