I spoke with someone recently who asserted that they would prefer a 100% chance of getting a dollar to a 99% chance of getting $1,000,000. Now, I don’t think they would actually act on this if the situation were real: if they had $1,000,000 and there was a 1-in-100 chance of losing it, they wouldn’t pay someone $999,999 to remove that chance and thereby guarantee themselves the $1. But they think they would. I’m interested in what could cause someone to think that. I have a little more information from asking a few follow-up questions, but I’d like to see what others think before I share it.
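For concreteness, here is a minimal sketch in Python (using only the dollar figures and probabilities from the scenario above) of the expected values in both framings, the original gamble and the reframed version where the $1,000,000 is already in hand:

```python
# Minimal sketch of the two framings, using the figures from the post.

# Framing 1: choose between a sure $1 and a 99% chance of $1,000,000.
ev_sure_dollar = 1.00 * 1
ev_gamble = 0.99 * 1_000_000          # = 990,000

print(f"EV of the sure $1:    ${ev_sure_dollar:,.2f}")
print(f"EV of the 99% gamble: ${ev_gamble:,.2f}")

# Framing 2: you already hold $1,000,000, but there is a 1-in-100 chance of
# losing it. Paying $999,999 to remove that risk guarantees you end up with $1,
# so the final outcome distributions match framing 1:
#   pay:        100% chance of keeping $1
#   don't pay:   99% chance of keeping $1,000,000, 1% chance of $0
ev_pay = 1_000_000 - 999_999          # = 1, with certainty
ev_dont_pay = 0.99 * 1_000_000        # = 990,000

print(f"EV if they pay for certainty: ${ev_pay:,.2f}")
print(f"EV if they keep the risk:     ${ev_dont_pay:,.2f}")
```

The point of the reframing is only that the final outcome distributions are identical in both framings; any reversal in stated preference has to come from how the choice is described, not from the numbers.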
My own thoughts: This may be related to the Allais paradox. It also trivially implies two-boxing in Newcomb's problem.
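For reference, here is a quick sketch of the standard Allais gambles (the usual textbook figures, not anything specific to this conversation). The common pattern of picking 1A together with 2B reflects the same certainty effect that seems to be at work in the preference above:

```python
# The classic Allais paradox gambles (standard textbook figures).
# Many people pick 1A over 1B but 2B over 2A, a pair of choices that is
# inconsistent with maximizing any single expected-utility function;
# the certainty of 1A is what drives the first choice.

def expected_value(gamble):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * x for p, x in gamble)

gamble_1a = [(1.00, 1_000_000)]
gamble_1b = [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]
gamble_2a = [(0.11, 1_000_000), (0.89, 0)]
gamble_2b = [(0.10, 5_000_000), (0.90, 0)]

for name, g in [("1A", gamble_1a), ("1B", gamble_1b),
                ("2A", gamble_2a), ("2B", gamble_2b)]:
    print(f"{name}: EV = ${expected_value(g):,.0f}")
```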
Some more questions raised:
What arguments might I make to change this person’s mind?
Would it be ethical, if I had to make this choice for them, to choose the $1,000,000? What about an AI making choices for a human with this utility function?
Losing money and gaining money are not the same. Most humans use heuristics that treat the two cases differently. If you want to understand someone, you shouldn’t equate the two cases even if they look the same in your utilitarian assessment.
I understand that, which is why I concede that they may choose the million in one case and not in the other. But I think their decision may be based on other factors, namely that they don’t actually believe they’d get the million with 99% probability. They’re imagining someone telling them, “I’ll give you a million if this RNG from 1 to 100 comes out as anything but 100” (or something similar), and not factoring out distrust. My example with reversing the flow of money was also intended to correct for that.
Perhaps the heuristics you refer to are based on this? Has this idea of “trust” been tested for correlation with the “losing money vs. gaining money” distinction?
Writing it backward, I think you just did.
As for the ethics: if you were already in a position where you HAD to make the decision, you should do what you think is right regardless of any of their prior opinions. If, however, you merely had the opportunity to override them, I think you should limit yourself to persuading as many of them as you can, rather than overriding them for their own benefit.