Making the choice infinitely often doesn’t really cure anything. Sure, you can say that you are risk averse and that risk aversion is more warranted in runs that are known to be short. But forgoing a finite chance of an infinite payoff for a finite chance of a finite payoff requires, and expresses, infinite risk aversion.
You really want to be aware of what makes you okay with the exception. One possibility is that you have a hidden assumption that the Pascal chance plays by different rules than ordinary small finite chances. You could formalise this as the chance being infinitesimal, i.e. infinitely small, and then an infinitesimal chance times an infinite payoff would be comparable in expectation to a finite chance of a finite payoff. This means looking hard at an assumption like “negligible chances can always be adequately expressed by a (finite precision) real number”. “Infinitesimals play by different rules” is less arbitrary, but then you have got a new field of interest.
It can also be doubted whether infinite payoffs make sense at all. You could imagine that if you are a book character, your actions might ordinarily have appreciable chances of affecting your fictional story world. But if there were some actions that could affect the reader of the story and shape the “real world”, it could make sense to say that any positive or negative impact on the real world cannot be outweighed by fictional outcomes. That would make real-world impact effectively infinite in value relative to fictional outcomes. Then a choice involving a single infinite-value option would be, say, a 99% chance of defeating the big bad versus a 0.0001% chance of saving the reader from alcoholism (or whatever impact on real people). It is less clear here that certainty of victory is the morally attractive good.
Thanks for your comment.

In some sense I would agree that foregoing a finite chance of an infinite payoff for a finite chance of a finite payoff requires infinite risk aversion. Nevertheless, I think that even such extreme risk aversion could be justified in some specific cases. When I consider this issue, I usually do it in terms of a thought experiment, similar to the one with Sue, which I presented in my post.

Imagine you are the only being in the entire universe. You know with certainty that you have just one decision to make and after that you will magically disappear. You are faced with a choice between option A, which gives you a 0,001 probability of creating a really bad outcome and a 0,999 probability of creating a moderately good outcome. You also have option B, and if you choose it nothing happens and the universe just remains empty. I think that A is the right choice in this case. In my view, when faced with such a single decision it makes sense to go with whichever option gives you above a 0,5 probability of the best possible outcome.

However, this has very counterintuitive implications of its own. On this account, when faced with such an only-one-case scenario as described above, it would be rational to choose option A, which gives you a 0,49 probability of infinite negative utility and a 0,51 probability of some tiny positive outcome, over option B, on which just nothing happens.

This is an extremely counterintuitive result, but ,,pure” expected utility theory (EU) can also generate extremely counterintuitive results, such as choosing option B (nothing happens) instead of option A with a 0,0000000000000000000000000001 probability of creating infinite negative value and a 0,999999999999999999999999999 probability of creating an enormously good, but finite outcome. In a reply to a different comment by Wolajacy above I described how I think about this issue, so I don’t want to repeat it here.
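The contrast between these two decision rules can be made concrete in a small sketch. This is a hypothetical formalisation: `threshold_choice` is my rendering of the ,,above 0,5 probability” idea, and the specific numbers are stand-ins:

```python
import math

def expected_utility(option):
    """Probability-weighted sum of utilities over an option's outcomes."""
    return sum(p * u for p, u in option)

def threshold_choice(options):
    """Among options with >= 0.5 probability of a positive outcome,
    pick the one with the largest best-case utility."""
    viable = [opt for opt in options
              if sum(p for p, u in opt if u > 0) >= 0.5]
    return max(viable, key=lambda opt: max(u for _, u in opt), default=None)

# Option A: a 1e-28 chance of infinite negative value, otherwise an enormously
# good but finite outcome; option B: nothing happens.
A = [(1e-28, -math.inf), (1 - 1e-28, 1e12)]
B = [(1.0, 0.0)]

# Pure EU rejects A, since any nonzero chance of -inf makes its expectation -inf;
# the threshold rule selects A, since its chance of a positive outcome exceeds 0.5.
print(expected_utility(A), expected_utility(B))  # -inf 0.0
print(threshold_choice([A, B]) is A)             # True
```

Which of the two verdicts looks right in this case is exactly the disagreement being discussed.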
Also, in my view we should not trust our intuitions in such cases, since they evolved to help us spread our genes in a familiar environment, not to tackle infinity paradoxes. Therefore I’m ready to accept even counterintuitive results based on explicit reasoning.
You mentioned that maybe it would be worth revising the assumption that ,,negligible chances can always be adequately expressed by a (finite precision) real number”. That would be a way out of the paradox. However, I don’t think that this is a very promising approach. Surely, in some (maybe most) cases it is hard to speak about the precise probabilities that we attach to different beliefs. Nevertheless, I doubt that infinitesimals would be an adequate representation of such probabilities, especially in the case of belief in God, where I think we can do better.
I agree that whether infinite payoffs make sense may be problematic. On the most basic, standard formulation of EU it seems that we are facing a problem, since if there are multiple options which can lead to infinite payoffs then we have no standard by which to choose between them. However, I think that this could be fixed by a relatively uncontroversial addition, stating that when we are faced with multiple options of infinite value, we just go for the one with the highest probability. There may be other issues connected with the comparability of different outcomes, as you mentioned that kind of problem in your example with being a fictional character, but it seems that discussing those issues would lead us even further away from the original topic.
If you haven’t yet, you can also check my reply to the other comment under this post, where I’ve tried to express myself more clearly. Of course, if you have any objections to my reasoning outlined here, feel free to criticize it; I really appreciate well-thought-out feedback.
In standard utility theory you really need the numbers to answer which one is better for “really bad outcome” and “moderate good outcome”. The scheme you are proposing is more of “maximising the value of the expected outcome” rather than maximising the expected utility. This is a significant difference and not a mere technicality. For example, under that scheme buying a lottery ticket could never be worth it if the odds are fixed, no matter how much (finitely) the payout increases or the ticket price drops. The torture vs. dust specks discussion is probably relevant for that stuff.
The Pascal argument makes material use of the determination that there is a nonzero positive chance. If you can imagine only real (as in non-imaginary, non-infinitesimal) odds, that leaves very few options. Can you describe how or why infinitesimals describe the chance badly?
Just because two values that might represent the payoffs are infinite doesn’t mean they are equal. Transfinite quantities can have different magnitudes while each being infinite relative to finite values.
You’ve written that ,,In standard utility theory you really need the numbers to answer which one is better for “really bad outcome” and “moderate good outcome””. I agree, probably I should have put some numbers there to be more precise.

,,The scheme you are proposing is more of “maximising the value of the expected outcome” rather than maximising the expected utility.”

If I understood correctly what you mean by that, then I would say that I agree with that as well, but with one important remark. I regard a decision as rational if, from the set of all possible acts, it first selects those acts which have a probability equal to or higher than 0,5 of achieving a net positive result, and then, from those acts, the act which has the highest upside.
I call it the ,,first order” approach. However, using such an approach in every single decision would lead to disaster. I think that the correct way to think about it is to look holistically and adopt a strategy which, based on the approach outlined here, will lead to the desirable results. I think that EU is such an approach, since adopting EU over the long run will lead, with probability higher than 0,5, to achieving a net positive result with the highest upside. At least for me, this is the rationale behind EU which justifies it. Although probably some people would disagree with that and claim that EU is just self-evident in itself (e.g. https://reducing-suffering.org/why-maximize-expected-value/). Accepting this first order approach as the rationale behind EU also suggests that adopting EU with one exception for a very influential, low-probability (below 0,5) case may be an even better strategy overall.
As for the issue of infinitesimals, maybe it depends on the interpretation of what the nature of probability is. I used to think about probability in terms of a subjective degree of belief, or level of confidence, maybe also with some frequentist element attached to it. I’m sceptical about the usage of infinitesimals, since it seems problematic to believe something with an infinitely small level of confidence. Although I have to admit that maybe it would make sense in some cases, e.g. when you are confronted with a lottery with infinitely many options, and you know that one of those infinitely many options will be randomly selected, but there is no way to determine which one. Then it may seem plausible that you should assign to each option an infinitely small probability of being selected. But at least in the case I’m interested in here (the case of belief in God), I don’t think that assigning an infinitely small probability to it would be right. My own very rough estimate is that the probability of the existence of some kind of God is about 0,3 (it shouldn’t be treated literally; I use this number only to roughly express my level of confidence that this is the case).
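The infinite-lottery intuition can be backed by a line of arithmetic: any equal positive real chance per option is already exhausted by finitely many options, which is what pushes one toward probability 0 or an infinitesimal. A sketch, where the particular value of `p` is arbitrary:

```python
from fractions import Fraction

def options_needed(p):
    """Smallest number n of equally likely options with total probability n*p > 1."""
    return 1 // p + 1  # exact for a rational p

p = Fraction(1, 1000)  # a hypothetical per-option chance
n = options_needed(p)
print(n, n * p)        # 1001 options already give total probability 1001/1000 > 1
```

So with infinitely many options, no positive real number works as the shared per-option probability, while 0 for each option makes the total 0 rather than 1; infinitesimals are one candidate way out.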
You’ve also mentioned at the end of your comment the issue of different magnitudes of infinities. That deserves a discussion of its own. I’m not sure, for example, how to make a decision if we have to choose between two possible Gods, when one God is more probable to exist but offers you ,,only” an infinite amount of value, while the second God is less probable to exist but offers you an infinity of a bigger size. This is an interesting topic, but I’m not sure what to say about it at this moment.
It is fine to use many levels of accuracy, but one needs to be consistent about which accuracy level gets applied. If it is the case that you “need to want to believe” in the proposition to proceed to step 2, then it is a form of motivated reasoning. And in the case of counterexamples it means providing reasons why a step 1 level analysis is sufficient to prove it absurd without taking the step 2 analysis into account.
Standard EU has the property that if some option is worth taking, then when tasked with making multiple such choices, the same option is chosen each time. With the 0.5 requirement (or indeed any requirement on the central outcome), there is the weird property that what you should choose depends on how many choices you are expecting to make / how long you think you are going to live.
Say you have 3 scenarios of possibly participating in a lottery: A) 1 time, B) 10 times, C) 100 times. Say you have a 1⁄10 chance to win 1 $, a 1⁄50 chance to win 100 $, and the ticket costs 10 $. In scenario A you have under a 1⁄5 chance of any positive outcome, and even in scenario C, without the big-win chance, you would expect to break even. Isn’t it weird to say that you should participate in C but not in A? The chances don’t need to be that extreme for it to start getting weird. Or, as in the real world lottery offices are constantly open, it would be weird to recommend against playing if you are going to play fewer than 100 times, but to recommend it if you play over 100 times, when the odds stay the same. If the lottery is worth it, it is already worth it at the first ticket.
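The dependence of a 0,5-style threshold rule on the number of plays can be checked with an exact binomial computation. The numbers below are hypothetical (a 1 $ ticket with a 1⁄10 chance of a 20 $ prize, chosen so the lottery has positive expected value per ticket), not the ones from the paragraph above:

```python
from math import comb

def p_net_positive(n, p_win=0.1, payout=20.0, ticket=1.0):
    """Probability that total winnings strictly exceed total ticket cost
    after n independent plays of a single-prize lottery."""
    min_wins = int(ticket * n / payout) + 1  # smallest win count giving a net gain
    return sum(comb(n, k) * p_win**k * (1 - p_win)**(n - k)
               for k in range(min_wins, n + 1))

print(p_net_positive(1))    # 0.1: the threshold rule says skip a single play
print(p_net_positive(100))  # > 0.5: the same rule says play the 100-ticket run
```

Under standard EU the verdict is the same at every n (here: play, since 0.1 · 20 $ > 1 $ per ticket), which is the consistency property described above.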
For example, when thinking of a coin as a frequentist, you ask “how many times would it come up heads if thrown infinitely often?” Then you would be comparing heads counts to tails counts, and typically both will be infinite (and, sneakily, amounts representing 25⁄75 odds are different from those representing 50⁄50, despite both being infinite amounts). A frequentist could understand an infinitesimal probability as the number of outcomes given infinite trials being a finite number. For example, a coin that would come up infinitely many times on its side but 3 times on heads and 7 times on tails. Note that we talk as if tails and heads encompass all the relevant alternatives while still saying that it is possible for a coin to land on its side. Making this exact revolves around probability 0, or what the distinction is between “impossible” and “possible but happens only a finite slice of the time”. And because people are allergic to infinities, they often express their ideas otherwise if they can. But when the topic is infinities, they become relevant again.
If you apply the “rare event, big impact” correction, you start to approach EU without any probability thresholds to meet. Taking the idea of EU seriously requires taking this extremization seriously. Otherwise you will end up with a stance like “you should do absolutely nothing about asteroids, as they are part of the negligibly rare noise whose magnitude doesn’t need to be taken into account”.
Couldn’t you, for example, think that there could be variations of god that differ in nothing but the height of the human avatar, should they choose to appear to people? And don’t you express height in real numbers, and aren’t the real numbers uncountably infinite? And there are multiple different attributes, such as the severity of jail or hell sentences levied, etc. If one could argue that the set of relevant gods is a finite set, then it could easily be argued that if one of them were to be true, any particular vision of it would have a finite chance. But the relevant options are mostly gathered by the limits of imagination and not constrained by any empirical evidence. Therefore, against someone with a better imagination showing a palette of uncountably many options, you would have to at least argue why my way of imagining fails to capture the options or captures the wrong options.
I agree that it would be weird to accept a lottery with a positive EU only if you take it some specific number of times. In normal, everyday decision making I wouldn’t argue for this. Indeed, I think EU is the right approach to making decisions under uncertainty. I’m willing to follow it even if chances of success are low, as long as the stakes are high enough to make the EU positive. What I argue for is the justification of why I’m willing to follow EU. I don’t think it is self-evident that I should choose the option with the highest EU. I think that what makes following EU the right choice is the law of large numbers. If I have to pay 10 $ for a lottery where you have a 99,9% chance of winning nothing and a 0,1% chance of winning 100000000000 $, then I think I should pay. But the rationale for why I should play, at least for me, is not that it is intrinsically worth it/rational. For me, the reason why this is a good option is that it will finally pay off, and in the end I will have a lot more money than I had at the start. I think this holds even if I were able to buy only one ticket for that lottery, because even if I gain nothing from this particular choice, I will still encounter low-probability, high-stakes choices many times in my life, so finally it would be worth it. My point is just that EU needs the law of large numbers, or repeated decision making, or a series of choices, however we call it, to be the rational strategy. And I don’t necessarily mean ,,choices concerning playing this particular lottery”. I mean making any choices under uncertainty. In other words, in the end EU is more likely than not to produce the outcome with the highest upside. This is why I think that this first order approach, which I described earlier, in fact implies EU when we look at our situation holistically. For this reason, I regard EU as rational.
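That law-of-large-numbers rationale can be checked numerically with the lottery just described (a Monte Carlo sketch; for such a skewed payoff the convergence is slow, so only rough agreement with the analytic expected value should be expected):

```python
import random

random.seed(0)  # fixed seed for reproducibility

P_WIN, PRIZE, TICKET = 0.001, 1e11, 10.0

def play():
    """Net result of buying one ticket of the 0,1%-chance lottery above."""
    return (PRIZE if random.random() < P_WIN else 0.0) - TICKET

ev = P_WIN * PRIZE - TICKET  # analytic expected value per ticket: ~1e8

trials = 200_000
avg = sum(play() for _ in range(trials)) / trials
print(ev, avg)  # the sample mean per ticket lands in the same ballpark as ev
```

A single play almost always loses the 10 $; it is only over the aggregate of many such bets that the sample mean approaches the expected value.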
My whole point is basically that it won’t harm you if you make one exception to following EU in the case of one particular low-probability, high-stakes decision, that is, Pascal’s wager. The condition is that you need to be able to reliably restrict yourself to making just one (or at least a limited number of) exceptions, since eventually you will encounter the case where, despite the low probability, the consequences turn out to be real.
Sure, this addition may seem ad hoc and a bit theoretically inelegant. It’s also true that it demands an assumption about how many choices you will have occasion to make, but it doesn’t look very problematic to me. All things considered, it seems to me that the rationale standing behind it (derived from the way in which I think EU is justified) is enough to justify it.
You’ve also mentioned those infinite possibilities of different Gods only slightly different from each other. Fair enough; in this case maybe infinitesimals are the right representation of the credence that one should have in such options. Nevertheless, if we are to follow EU in literally every case, then it still seems to make sense to determine which God (or set of possible Gods similar to each other) has the highest probability and then accept the wager. Maybe the acceptance of the wager would not look like the proponents of it usually imagine (i.e. accepting a particular religion), but rather like devoting yourself to doing research to find out which God is the most probable one (because of the information value). Nevertheless, it still would have a significant influence on the way we live. And maybe this is fine and we should just accept this conclusion; I’m not sure about it. However, the approach that I’ve proposed here seems to me rather more rational.
There is the issue of whether one believes the stated chances are real or whether one is in error about them. If you believed that there was a 1/4 chance of heads when in fact the coin was fair, then your betting would be led astray. However, if the odds are correct and the math says you end up with more money, there is no way to argue that you can forgo the option and still claim to be a money-grabbing agent.
We could think of some agent who wants not to buy a favourably-biased lottery ticket, thinking they will save the cost of the ticket and still get to call themselves a good decision maker. If the odds are a 10% chance of 1000 $ for a 1 $ ticket and the agent thinks they expect to lose money, they have made a math error. You don’t get to call yourself able to calculate odds correctly if you allow yourself a limited number of mistakes. And certainly you don’t end up going “over the limit” of “all accruable winnings” by the price of a ticket. Either the ticket price is part of the accruable winnings, or the total is some subtotal that doesn’t actually represent everything achievable.
The usual worry about the policy implications of accepting the Pascal wager is that you would be prone to being Pascal-mugged. Anyone can fabricate a very remote, very low-probability threat and ask for finite compensation not to carry it out. But a website saying you are the 1000000000th visitor is not very good evidence of those chances being real. And in a way, very finely tuned chances need very much data to be well founded. Almost anyone can make 50:50 claims, but very few people can plausibly state any 0.00000001% odds. Thus, in a finitely aged universe, no one can have the inductive support for any infinitesimal chance.
There can be many dimensions along which to ask about undecidably low odds of what could happen. An agent that systematically excused each of these questions as a one-off exception could be totally prey to rare events. But one has to distinguish doing well in the model from doing well in fact. You don’t get to avoid being victimised by supernovas just because you lack the capacity to model supernovas. It can make sense to focus on what you can model and stay silent on what you can’t, but pushing the edge of what you can model can be critical.
Thanks for your comment. I’ve thought through the issue carefully and I’m no longer so confident about this topic. Now I’m planning to read more about Pascal’s wager and about decision theory in general. I want to think about it more, and do my best to come up with a good enough solution for this problem.
Thank you for the whole discussion and the time devoted to responding to my arguments.
Making the choice infinitely often doesn’t really cure anything. Sure you can say that you are risk averse and that risk averseness is more warranted in runs that are known to be short. But forgoing a fininte chance of infinite payoff for finite chance of finite payoff needs/expresses infinite risk aversion.
You totally wan tto be aware what makes you okay the expection. One possilbitry is that you ahve a hidden assumtion that the pascal chance plays by different rules than ordinary small finite chances. You could formalise this as the chance being transfinite ie infinitely small and then transfinite times a infinite payoff would be comparable in expectation to finite chance for finite payoff. This means looking hard at an assumption like “neglible chances can be always adequately expressed by a (finite precision) real number”. “infinidesimals play by differnt rules” is less arbitrary but then you got a new field of interest.
Also infinite payoffs can be doubbted whether they make sense or not. You coudl imagine that if you are book character your actions might ordianrilöy have appriciable chances to affect your fictional story world. But if there were some actions that could affect the reader of the story and shape the “real world” it could make sense to call that any positive or negataive impact for the real world can not be overcome with fictional outcomes. This would them effectively relatively infinite in value. Then the choice of a single infinite value choice would be 99% chance of defeating the big bad vs 0.0001% chance of saving the reader from alcholism (or whatever real people impact). it is less clear here that certainty for the victory is the morally attractive good.
Thanks for your comment
In some sense I would agree that foregoing a finite chance of infinite payoff for finite chance of finite payoff needs infinite risk aversion. Nevertheless, I think that even such extreme risk aversion could be justified in some specific cases. When I consider this issue, I usually do it in terms of a thought experiment, similar to this with Sue, which I presented in my post.
Imagine you are the only being in the entire universe. You know with certainty that you have just one decision to make and after that you will magically disappear. You are faced with a choice between option A, which gives you 0,001 probability of creating really bad outcome and 0,999 probability of creating a moderately good outcome. You have also option B, and if you choose it nothing happens and universe just remains empty. I think that A is the right choice in this case. In my view, when faced with such a single decision it makes sense to go with whatever option which gives you above 0,5 probability of the best possible outcome.
However, this has very counterintuitive implications of its own. On this account, when faced with such only-one-case scenario as described above, it would be rational to choose option A, which gives you 0,49 probability of infinite negative utility and 0,51 probability of some tiny positive outcome, over option B on which just nothing happens.
This is an extremely counterintuitive result, but ,,pure” expected utility theory (EU) also can generate extremely counterintuitive results, such as choosing the option B (nothing happens) instead of the option A with 0,0000000000000000000000000001 probability of creating infinite negative value and 0,999999999999999999999999999 probability of creating an enormously good, but finite outcome. In a reply to the different comment by Wolajacy above I described how I think about this issue, so I don’t want to repeat it here again. Also, I in my view we should not trust our intuitions in such cases, since they evolved to help us spread our genes in the familiar environment, not to tackle the infinity paradoxes. Therefore I’m ready to accept even counterintuitive results based on explicit reasoning.
You mentioned that maybe it would be worth to revise the assumption that ,,negligible chances can be always adequately expressed by a (finite precision) real number”. That would be a way out of the paradox. However, I don’t think that this is a very promising approach. Surely, in some (maybe most) cases it is hard to speak about precise probabilities that we attach to different beliefs. Nevertheless, I doubt that infinitesimals would be an adequate representation of such probabilities, especially in the case of a belief in God, where I think we can do better.
I agree that whether infinite payoffs make sense or not may be problematic. On the most basic, standard formulation of EU it seems that we are facing a problem, since if there are multiple options which can lead to infinite payoffs then we have no standard to choose between them. However, I think that this could be fix by a relatively uncontroversial addition, stating that when we are faced with multiple options of infinite value, we just go for the one with the highest probability. There may be other issues connected with the comparability of different outcomes, as you also mentioned that kind of problem in your example with being a fictional character, but it seems that discussing those issues would lead us even further away for the original topic.
If you haven’t yet, you can also check my reply to the other comment under this post, where I’ve tried to express myself more clearly. Of course if you have any objections to my reasoning outlined here feel free to criticize it, I really appreciate a well-thought feedback.
In standard utlity theory you really need the numbers to answer which one is better for “really bad outcome” and “moderate good outcome”. The scheme you are proposing is more of “maximising the value of the expected outcome” rather than maximing the expected utility. This is a signifcant difference and not a mere technicality. For example under that scheme buying a lottery ticket could never be worth it if the oods are fixed no matter how much (finitely) the payout increases or the ticket price lowers. Torture vs dust specks content is probably relevant for that stuff.
The pascal stuff makes material use that while determining that there is a non zero positive chance. If you can imagine only real (as in non-imaginaary or infinidesimal) odds that leaves very little options. Can you describe how or why infinidesimals describe the chance badly?
Just because to values that might represent values are inifinte doesn’t mean they are equal. Transfinite quanities can have differnt magnitudes while being relatively infinite to finite values.
You’ve written that ,,In standard utility theory you really need the numbers to answer which one is better for “really bad outcome” and “moderate good outcome’’’’ I agree, probably I should have put there some numbers to be more precise.
,,The scheme you are proposing is more of “maximising the value of the expected outcome” rather than maximing the expected utility.”
If I understood correctly what you mean by that, then I would say that I agree also with that. But, with one important remark. I regard the decision as rational if from the set of all possible acts it selects first those acts which have probability equal or higher than 0,5 of achieving a net positive result, and then from those acts, the act which has the highest upside.
I call it the ,,first order” approach. However, using such an approach in every single decision would lead to a disaster. I think that the correct way to think about it is to look holistically and adopt a strategy, which based on the approach outlined here will lead to the desirable results. I think that EU is such an approach, since adopting the EU over the long run will lead with probability higher than 0,5 to achieving a net positive result with the highest upside. At least for me, this is the rationale behind the EU which justifies it. Although probably some people would disagree with that, and claim that EU is just self-evident in itself (e.g. https://reducing-suffering.org/why-maximize-expected-value/). Accepting this first order approach as the rationale behind the EU suggests also, that adopting EU with one exception for a very influential, low probability (below 0,5) case may be even a better strategy overall.
For the issue of infinitesimals, maybe it depends on the interpretation of what the nature if probability is. I used to think about probability in terms a subjective degree of belief, or level of confidence, maybe also with some frequentist element attached to it. I’m sceptical about the usage of infinitesimals, since it seems problematic to believe something with an infinitely small level of confidence. Although I have to admit that maybe it would make sense in some cases, e.g. in the case when you are confronted with a lottery with infinitely many options, and you know that one of those infinitely many options will be randomly selected, but there is no way to determine which one. Then it may seem plausible that you should assigned to each option an infinitely small probability of being selected. But at least in the case in which I’m interested in here (the case of belief in God), I don’t think that assigning an infinitely small probability to it would be right. My own very rough estimate is that the probability of the existence of some kind of God is about 0,3 (it shouldn’t be treated literally, I use this number only to roughly express my level of confidence that this is the case).
You’ve mentioned at the end of your comment also the issue of different magnitudes of infinities. That deserves the discussion of its own. I’m not sure for example, how to make decision if we have to choose between two possible Gods, when one God is more probable to exist but offers you ,,only” infinite amount of value, while the second God is less probable to exist but offers you a an infinity of a bigger size. This is an interesting topic, but I’m not sure what to say about it at this moment.
It is fine to use many level of accuracy but one needs to be consisten on which accuracy level gets applied. If the case is that you “need to want to believe” in the proposition to proceed into step 2 then it is a form of motivated reasoning. And in the case of counterexamples it means providing reasons why a step 1 level analysis is sufficient to prove it absurd without taking into account step 2 analysis.
Standard EU has the property that is some option is worth taking then when tasked to make multiple such choices the same option is chosen. With the 0.5 or actually any total central outcome requirement there is the weird property that what you should choose depends on how many choices you are expecting to make / how long you think you are going to live.
Say you have 3 scenario possibly participate in a lottery A) 1 time B) 10 times C) 100 times. Say you have a 1⁄10 chance to win 1 $, 1⁄50 chance to win 100$ and thew ticket costs 10$. In scenario A you have a under 1⁄5 chance of any positive outcome and even in scenario C without the big win chance you would expect to break even. Isn’t it weird to say that you should participate in C but not in A? The chances don’t need to be that extreme for it starting to get weird. Or as in the world the lottery offices are constantly open it would be weird to recommend not to do it if you are going to do it less than 100 times but recommend if you do it over 100 times if the odds stay they same. If the lottery is worth it is is already worth it at the first ticket.
For example when thiikng of a coin as frequentist you ask “how many times it would come up heads if thrown infinitely often” Then you woudl be comparing heads counts to tails counts an typically both will be infinite (and sneakily amount representing 25⁄75 odds are different than representing 50⁄50 despite being infinite amounts). A frequentist could under stand a infinideismal property as the number of outcomes given infinitie trials woudl be a finite number. For example a coin that would come up infinitely many sides on its side but 3 times on heads and 7 times on tails. Note that we talk as if tails and heads encompass all the relative alternatives while saying that it is possible for a coin to land on its side. Making this exact revolves around probability 0 or what is the distinction betwen impossible and possible but doesn’t happen finite slice of the time. And because people are allergic to infinities if they can express their ideas otherwise they often do so. But when topic is infinities they become relevant again.
If you apply the “rare event, big impact” correction you start to approach EU without any probability thresholds to meet. Addressing the idea of EU seriously requires taking this extremization seriously. Otherwise you will end up with a stance like “you should do absolutely nothing about asteroids, as they are part of the negligibly rare noise whose magnitude doesn’t need to be taken into account”.
Couldn’t you, for example, think that there could be variations of a god that differ in nothing but the height of the human avatar they choose when appearing to people? And don’t you express height in real numbers, and aren’t the real numbers uncountably infinite? And there are multiple other attributes, such as the severity of jail or hell sentences levied, etc. If one could argue that the set of relevant gods is a finite set, then it could easily be argued that if one of them were true, any particular vision of it would have a finite chance. But the relevant options are mostly gathered by the limits of imagination and are not constrained by any empirical evidence. Therefore, against someone with a better imagination who presents a palette of uncountably many options, you would at least have to argue why my way of imagining fails to capture the options, or captures the wrong ones.
I agree that it would be weird to accept a lottery with a positive EU only if you take it some specific number of times. In normal, everyday decision making I wouldn’t argue for this. Indeed, I think the EU is the right approach to making decisions under uncertainty. I’m willing to follow it even if the chances of success are low, as long as the stakes are high enough to make the EU positive. What I argue about is the justification for why I’m willing to follow the EU. I don’t think it is self-evident that I should choose the option with the highest EU. I think that what makes following the EU the right choice is the law of large numbers. If I have to pay 10$ for a lottery where I have a 99,9% chance of winning nothing and a 0,1% chance of winning 100000000000 $, then I think I should pay. But the rationale for why I should play, at least for me, is not that it is intrinsically worth it/rational. For me, the reason why this is a good option is that it will finally pay off, and in the end I will have a lot more money than I had at the start. I think this holds even if I were able to buy only one ticket for that lottery, because even if I gain nothing from this particular choice, I will still encounter low-probability, high-stakes choices many times in my life, so finally it will be worth it.
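The tension in this rationale can be sketched numerically (my own illustration, using the hypothetical lottery figures from the paragraph above): the per-ticket EU is hugely positive, yet the chance of actually ending up ahead stays tiny until the number of plays gets large.

```python
P_WIN = 0.001              # 0,1% chance of the jackpot
TICKET = 10                # ticket price, in $
JACKPOT = 100_000_000_000

# Expected profit per ticket: strongly positive, so EU says "buy".
ev_per_ticket = P_WIN * JACKPOT - TICKET

# A single jackpot dwarfs any realistic ticket spend, so you end up
# ahead exactly when you win at least once.
for n in (1, 100, 10_000):
    p_ahead = 1 - (1 - P_WIN) ** n
    print(f"{n:6d} tickets: P(ending up ahead) = {p_ahead:.5f}")

print(f"EV per ticket: {ev_per_ticket:,.0f} $")
```

A one-ticket buyer is ahead with probability only 0,001, but over 10 000 plays that probability climbs to roughly 0,99995: the positive EU turns into an almost-sure gain only through repetition, which is the law-of-large-numbers point being made here.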
My point is just that EU needs the law of large numbers, or repeated decision making, or a series of choices, however we call it, to be the rational strategy. And I don’t necessarily mean ,,choices concerning playing this particular lottery”. I mean making any choices under uncertainty. In other words, in the end following the EU is more likely than not to produce the outcome with the highest upside. This is why I think that the first-order approach, which I described earlier, in fact implies the EU when we look at our situation holistically. For this reason, I regard the EU as rational.
My whole point is basically that it won’t harm you if you make one exception from following the EU in the case of one particular low-probability, high-stakes decision, that is, Pascal’s wager. The condition is that you need to be able to reliably restrict yourself to making just one (or at least a limited) number of exceptions, since eventually you will encounter the case where, despite the low probability, the consequences will turn out to be real.
Sure, this addition may seem ad hoc and a bit theoretically inelegant. It’s also true that it demands an assumption about how many choices you will have occasion to make, but this doesn’t look very problematic to me. All things considered, it seems to me that the rationale behind it (derived from the way in which I think the EU is justified) is enough to justify it.
You’ve also mentioned those infinite possibilities of different Gods only slightly different from each other. Fair enough; in this case maybe infinitesimals are the right representation of the credence one should have in such options. Nevertheless, if we are to follow the EU in literally every case, then it still seems to make sense to determine which God (or set of possible Gods similar to each other) has the highest probability and then accept the wager. Maybe accepting the wager would not look the way its proponents usually imagine (i.e. accepting a particular religion) but rather like devoting yourself to research to find out which God is the most probable one (because of the information value). Nevertheless, it would still have a significant influence on the way we live. And maybe this is fine and we should just accept this conclusion; I’m not sure about it. However, the approach that I’ve proposed here seems to me rather more rational.
There is the issue of whether one believes the stated chances are real or whether one is in error about them. If you believed that there was a 1/4 chance of heads when in fact the coin was fair, then your betting would be led astray. However, if the odds are correct and the math says you end up with more money, there is no way to argue that you can forgo the option and still claim to be a money-grabbing agent.
We could imagine an agent who declines to buy a positively biased lottery ticket, thinking they will save the cost of the ticket, and who still expects to call themselves a good decision maker. If the odds are a 10% chance of 1000$ for a 1$ ticket and the agent thinks they expect to lose money, they have made a math error. You don’t get to call yourself able to calculate odds correctly just because you make only a limited number of mistakes. And you certainly don’t end up going “over the limit” of “all accruable winnings” by the price of a ticket. Either the ticket price is part of the accruable winnings, or the total is some subtotal that doesn’t actually represent everything achievable.
The usual worry about the policy implications of accepting the Pascal wager is that you would be prone to being Pascal-mugged. Anyone can fabricate a very remote, barely credible threat and ask for finite compensation not to carry it out. But a website saying you are the 1000000000th visitor to the website is not very good evidence of those chances being real. And in a way, very finely tuned chances need a great deal of data to be well founded. Almost anyone can make 50:50 claims, but very few people can plausibly state any 0.00000001% odds. Thus, in a universe of finite age, no one can have inductive support for any infinitesimal chance.
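How much data a probability claim needs can be sketched with the statistical “rule of three”: if an event has never been observed in n independent trials, the 95% upper confidence bound on its probability is roughly 3/n, so grounding odds as low as p takes on the order of 3/p trials (the specific probabilities below are my own illustrations).

```python
def trials_needed(p: float) -> float:
    # Rule of three: observing zero occurrences in about 3/p trials is
    # what it takes to bound an event's probability below p at the 95% level.
    return 3 / p

for p in (0.5, 1e-4, 1e-10):  # a 50:50 claim vs the 0.00000001% odds above
    print(f"p = {p:g}: on the order of {trials_needed(p):,.0f} trials of evidence")
```

For 0.00000001% odds that is about 3×10^10 observations, which is why finely tuned tiny probabilities are so hard to support empirically, let alone infinitesimal ones.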
There can be many dimensions along which to ask about undecidably low odds of what could happen. An agent that systematically excused each of these questions as a one-off exception could be totally prey to rare events. But one has to distinguish doing well in the model from doing well in fact. You don’t get to avoid being victimised by supernovas just because you lack the capacity to model supernovas. It can make sense to focus on what you can model and stay silent on what you can’t, but pushing the edge of what you can model can be critical.
Thanks for your comment. I’ve thought through the issue carefully and I’m no longer so confident about this topic. Now I’m planning to read more about Pascal’s wager and about decision theory in general. I want to think about it more and do my best to come up with a good enough solution to this problem.
Thank you for the whole discussion and the time devoted to responding to my arguments.