Counterargument #1:
If you are god, then the universe allows for “gods” which can arbitrarily alter the state of the universe. Therefore, any utility gains I make have an unknown duration—it’s entirely possible that an instant after you grant my utility, you’ll take it away. Furthermore, if you are god, you’re (a) flipping a coin and (b) requiring a donation, so I strongly suspect you are neither friendly nor omni-benevolent. Therefore, I have no reason to favour “god will help me for $1” over “god will hurt me for $1”—you could just as easily be trying to trap me, and punish anyone who irrationally sends you $1.
1b) I have no reason to select you as a likely god candidate, compared to the ~infinite number of people who exist across all of space-time and all Everett branches.
Counterargument #2:
There are finitely many states of “N”.
2a) Eventually the universe will succumb to heat death. Entropy means that we can’t gain information from the coin flip without approaching this state.
2b) Even if you flip coins incredibly fast and in parallel, I will still eventually die, so we can only count the number of coin flips that happen before then.
Counterargument #3:
Assume a utility function which is finite but unbounded. It cannot handle infinity, and thus your mugging relies on an invalid input (infinite utility), and is discarded as malformed.
3b) Assume that my utility function fails in a universe as arbitrary as the one implied by you being god, since I would have witnessed a proof that state(t+1) does not naturally follow from state(t).
Counterargument #4:
Carefully assign p(you are god) = 1/N, where N approaches infinity in such a way as to cancel out the infinite sum you are working with. This seems contrived, but my mind assigns p(you are god) = “bullshit, prove it”, and this is about the closest I can come to expressing that mathematically ;)
Counterargument #5:
Assign probabilities by frequency of occurrence. There have been no instances of god yet, so p(god) = 0. Once god has been demonstrated, I can update off of this 0, unlike with Bayesian statistics. My utility function may very well be poorly designed, and I believe this can still allow for FAI research, etc.: social standing, an interest in writing code, peer pressure, etc. all provide motivations even if p(FAI) = 0. One could also assume that even where p(x) = 0, a different function rewards utility for investigating and trying to update even zero-probability events (in which case I’d get some utility from mailing you $1 to satisfy my curiosity, although I suspect not enough to overcome the cost of setting up a PayPal account and losing $1).
Counterargument 3(b) is the most convincing of these to me.
If my decision theory is predicated on some kind of continuity in states of the universe, and my decision is based on some discontinuity in the state of the universe, my decision theory can’t handle this.
This is troubling, but to try to make it more formal: if I believe something like “all mathematically possible universes exist” then promising to “change universes to UN(N)” is a meaningless statement. Perhaps the wager should be rephrased as “increase the measure of universes of higher utility”?
Counterargument #1 is similar to the argument against Pascal’s Wager that weights Christianity and anti-Christianity equally. Carl’s comment addresses this sort of thing pretty well. That TimFreeman has asserted that you should suspect he is a god is (very small) positive evidence that he is one; that he has the requisite power and intelligence to write a LessWrong post is also very small but positive evidence, &c.
Counterargument #2 implies the nonexistence of gods. I agree that gods are implausible given what we know, but on the other hand, the necessity of entropy and heat-death need not apply to the entire range of UN(X).
I don’t understand Counterargument #3. Could you elaborate a little?
Counterargument #4 seems similar to Robin Hanson’s argument against the 3^^^3 dust specks variant of Pascal’s Mugging, where if I recall correctly he said that you have to discount by the improbability of a single entity exercising such power over N distinct persons, a discount that monotonically scales positively with N. If the discount scales up fast enough, it may not be possible to construct a Pascal’s Mugging of infinite expected value. You could maybe justify a similar principle for an otherwise unsupported claim that you can provide N utilons.
Counterargument #5 raises an interesting point: the post implicitly assumes a consistent utility function that recognizes the standard laws of probability, an assumption that is not satisfied by the ability to update from 0.
I don’t understand Counterargument #3. Could you elaborate a little?
It’s playing on the mathematical difference between infinite and unbounded.
In plain but debatably-accurate terms, infinity isn’t a number. If my utility function only works on numbers, you can no more give it “infinity” than you can give it an apple.
As a couple of examples: any given polygon has ‘n’ sides, and there are thus infinitely many polygons, but no polygon has ‘infinity’ sides. Conversely, there are infinitely many real numbers such that 0 < x < 1, but x is bounded (it has finite limits).
So I’m asserting that while I cannot have “infinity” utility, there isn’t any finite bound on my utility: it can be 1, a million, 3^^^3, but not “infinity” because “infinity” isn’t a valid input.
Utility doesn’t have to take infinity as an argument in order to be infinite. It just has to have a finite output that can be summed over possible outcomes. In other words, if p × U(a) + (1−p) × U(¬a) is a valid expression of expected utility, then by induction, Sum(i=1 to n) p_i × U(i) should also be a valid expression for any finite n. When you take the limit as n→infinity you run into the problem of no finite expectation, but an arbitrarily large finite sum (which you can get with a stopping rule) ought to be able to establish the same point.
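To make the stopping-rule point concrete, here is a minimal sketch using a made-up St. Petersburg-style gamble (an assumption for illustration, not anything proposed in this thread): every truncated version of the game has a finite expected utility, yet that expectation grows without bound as the cap is raised.

```python
# A made-up St. Petersburg-style gamble (an assumption for illustration, not part of
# the original wager): keep flipping a fair coin, and pay 2^k utilons if the first
# tail lands on flip k+1. Each term of the expected-utility sum is p * payoff = 1/2,
# so a game cut off by a stopping rule always has a finite expectation, yet that
# expectation can be made arbitrarily large by raising the cap.

def truncated_expected_utility(max_flips: int) -> float:
    """Expected utility when the game is forcibly stopped after max_flips flips."""
    total = 0.0
    for k in range(max_flips):
        p = 0.5 ** (k + 1)   # probability the first tail lands on flip k+1
        payoff = 2.0 ** k    # finite payoff for that outcome
        total += p * payoff  # each term contributes exactly 0.5
    return total

for cap in (10, 100, 1000):
    print(cap, truncated_expected_utility(cap))  # 5.0, 50.0, 500.0: finite but unbounded
```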
I still don’t understand 3b. TimFreeman wasn’t postulating an acausal universe, just one in which there are things we weren’t expecting.
magfrump seems to have nailed it. I find it interesting how controversial that one has been :)
For infinite sums, basically, if the sum is infinite, then any finite probability gives it infinite expected utility (infinity × (1/N) = infinity). If both the sum and probability are finite, then one can argue the details (N × (1/N^2) < 1). The math is different between an arbitrarily large finite and an infinite. Or, at least, I’ve always assumed Pascal’s Wager relied on that, because otherwise I don’t see how it produces an infinite expected utility regardless of scepticism.
If the utility can be arbitrarily large depending on N, then an arbitrarily large finite skepticism discount can be overcome by considering a sufficiently large N.
Of course a skepticism discount factor that scales with N might be enough to obviate Pascal’s Wager.
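To put rough numbers on that (the payoff sizes and discount rates below are assumptions for illustration, not anyone’s actual prior): if the claimed payoff is N utilons and the skepticism discount falls off as 1/N, the expected utility is pinned at 1 no matter how large N gets, while a 1/N^2 discount makes inflating N actively counterproductive for the mugger.

```python
# Assumed numbers, purely for illustration (not anyone's actual prior): expected
# utility of a claim of N utilons under two skepticism discounts. A 1/N discount
# holds the expected utility at 1 regardless of N; a 1/N**2 discount makes larger
# claims worth less, not more.

for N in (10, 10**6, 10**12):
    eu_linear_discount = N * (1.0 / N)        # = 1.0 for every N
    eu_quadratic_discount = N * (1.0 / N**2)  # = 1/N, shrinking as N grows
    print(N, eu_linear_discount, eu_quadratic_discount)
```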
I have no reason to select you as a likely god candidate, compared to the ~infinite number of people who exist across all of space-time and all Everett branches.
Agreed. However, you also have no reason to carry on your business dealing with ordinary things rather than focusing exclusively on the various unlikely gods that might be trying to jerk you around. I don’t win, but you lose.
2b) Even if you flip coins incredibly fast and in parallel, I will still eventually die, so we can only count the number of coin flips that happen before then.
Yes, I forgot to mention that if I’m a god I can stop time while I’m flipping coins.
Assume a utility function which is finite but unbounded. It cannot handle infinity, and thus your mugging relies on an invalid input (infinite utility), and is discarded as malformed.
If you play by those rules, you can’t assign a utility to the infinite gamble, so you can’t make decisions about it. If the infinite gamble is possible, your utility function is failing to do its job, which is to help you make decisions. Tell me how you want to fix that without bounded utility.
my mind assigns p(you are god) = “bullshit, prove it”, and this is about the closest I can come to expressing that mathematically
p(I am god) = 0 is simpler and gets the job done. That appears to be more restrictive than the Universal Prior—I think the universal prior would give positive probability to me being god. There might be a general solution here to specifying a prior that doesn’t fall into these pits, but I don’t know what it is. Do you?
Assign probabilities by frequency of occurrence. There have been no instances of god yet, so p(god) = 0. Once god has been demonstrated, I can update off of this 0, unlike with Bayesian statistics.
How would this work in general? How could you plan for landing on the moon if it hasn’t been done before? You need to distinguish “failure is certain because we put a large bomb in the rocket that will blow up before it gets anywhere” from “failure is certain because it hasn’t been done before and thus p(success) = 0”.
you also have no reason to carry on your business dealing with ordinary things
Yes I do. Dealing with ordinary things has a positive expected utility. Analysing anything that looks like a Pascal’s Mugging has ~zero expected utility as far as the wager itself goes, plus that derived from curiosity and a desire to study logical problems. I believe that Counterargument #5 can be tuned and expanded to apply to all such muggings, so I’ll be writing that up in a bit :)
p(I am god) = 0 is simpler and gets the job done
Assuming Bayesian probability, p=0 means “I refuse to consider new evidence”, which is contrary to the goal of “bullshit, prove it” (I suspect that p=1/infinity might have practically the same issue unless dealing with a god who can provide infinite bits of evidence; fortunately in this case you are making exactly that claim :))
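As a minimal sketch of why an exactly-zero prior is stuck (the likelihoods below are made-up numbers, not a claim about what evidence a god could actually provide): Bayes’ rule multiplies the prior by the evidence, so a prior of 0 never moves, while even a tiny nonzero prior does.

```python
# Made-up likelihoods, purely for illustration: Bayes' rule multiplies the prior by
# the evidence, so a prior of exactly 0 can never move, however strong the evidence,
# while even a tiny nonzero prior responds to it.

def bayes_update(prior: float, p_evidence_if_god: float, p_evidence_if_not: float) -> float:
    """Posterior probability after observing the evidence once."""
    numerator = p_evidence_if_god * prior
    return numerator / (numerator + p_evidence_if_not * (1.0 - prior))

print(bayes_update(0.0, 0.999999, 0.000001))    # stays 0.0, no matter the evidence
print(bayes_update(1e-12, 0.999999, 0.000001))  # ~1e-6: a nonzero prior can climb
```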
Yes, I forgot to mention that if I’m a god I can stop time while I’m flipping coins.
This falls back to 3b, then: My utility function isn’t calibrated to a universe where you can ignore physics. Furthermore, it also falls back to 1b: Once we assume physics doesn’t apply, we get an infinite number of theories to choose from, all with equal likelihood, so once again why select your theory out of that chaos?
How would this work in general? How could you plan for landing on the moon if it hasn’t been done before?
p(moon landing) = 0.
p(I will enjoy trying despite the inevitable failure) > 0.
p(I will feel bad if I ignore the math saying this IS possible) > 0.
p(People who did the moon landing had different priors) > 0.
etc.
It’s not elegant, but it occurred to me as a seed of a thought, and I should have a more robust version in a little bit :)
Dealing with ordinary things has a positive expected utility. Analysing anything that looks like a Pascal’s Mugging has ~zero expected utility as far as the wager itself goes, plus that derived from curiosity and a desire to study logical problems.
I agree with your conclusion, but don’t follow the reasoning. Can you say more about how you identify something that looks like a Pascal’s Mugging?
If something looks like a Pascal’s Mugging when it involves ridiculously large utilities, then maybe you agree with me that you should have bounded utilities.
This falls back to 3b, then: My utility function isn’t calibrated to a universe where you can ignore physics.
The laws of physics are discovered, not known a priori, so you can’t really use that as a way to make decisions.
Furthermore, it also falls back to 1b: Once we assume physics doesn’t apply, we get an infinite number of theories to choose from, all with equal likelihood
Not equal likelihood. Universal Prior, Solomonoff induction.
so once again why select your theory out of that chaos?
Once you have chaos, you have a problem. Selecting my theory over the others is only an issue for me if I want to collect money, but the chaos is a problem for you even if you don’t select my theory. You’ll end up being jerked around by some other unlikely god.
It’s not elegant, but it occurred to me as a seed of a thought, and I should have a more robust version in a little bit
I’ll be interested to read about it. Good luck. I hope there’s something there for you to find.
If something looks like a Pascal’s Mugging when it involves ridiculously large utilities, then maybe you agree with me that you should have bounded utilities.
“Pascal’s Mugging” seems to be any scam that involves ridiculously large utilities, and probably specifically those that try to exploit the payoff vs likelihood ratio in that way. A scam is approximately “an assertion that you should give me something, despite a lack of strong evidence supporting my assertion”. So if you offered me $1,000, it’d be just a scam. If you offer me eternal salvation, it’s Pascal’s Mugging.