Consider the following scenario. Suppose it can be shown that the laws of physics imply that if we perform a certain action (costing 5 utils), then in 1/googol of our descendant universes, 3^^^3 utils are generated. Intuitively, it seems (at least to me) that we should do this action! But this scenario also seems isomorphic to a Pascal's mugging situation. What is different?
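To spell out the arithmetic behind that intuition (my own back-of-the-envelope gloss, writing 3^^^3 in up-arrow notation and taking a googol to be 10^100), the naive expected gain is

$$\mathbb{E}[\Delta U] = \frac{3\uparrow\uparrow\uparrow 3}{10^{100}} - 5 \gg 0,$$

since 3^^^3 exceeds 10^100 by an incomprehensible margin, so a straightforward expected-utility calculation says to act.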
If I try to describe the thought process behind this difference, it goes something like this. What is the measure of the causal descendants in which 3^^^3 utils are generated? In a typical Pascal's mugging, I expect there to be absolutely zero causal descendants in which 3^^^3 utils are generated, whereas in this example I expect there to be "1/googol" such causal descendants, even though the subjective probability of the two scenarios is roughly the same. I then do my expected utility maximization with (# of utils)(best guess of my measure) instead of (# of utils)(subjective probability), which seems to match my intuitions better, at least.
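To make the two weightings concrete, here is a minimal Python sketch of the comparison I have in mind. The payoff is a placeholder (3^^^3 is far too large to represent as a number), and every name and figure in it is illustrative rather than anything from the scenario above.

```python
# Minimal sketch: weight the payoff by physical measure vs. subjective
# probability. Numbers are placeholders -- 3^^^3 cannot be represented,
# so an arbitrarily huge stand-in payoff is used instead.

GOOGOL = 10 ** 100

def expected_utility(payoff, weight, cost):
    """Expected change in utility when a `weight` fraction of outcomes pay off."""
    return payoff * weight - cost

toy_payoff = 10.0 ** 300      # stand-in for 3^^^3
cost = 5

# Physics case: 1/googol of causal descendants really do get the payoff.
physics_measure = 1 / GOOGOL

# Mugging case: I expect the measure of paying descendants to be exactly zero,
# even though my subjective probability that the mugger delivers is comparable.
mugging_measure = 0.0
mugging_subjective_prob = 1 / GOOGOL

print(expected_utility(toy_payoff, physics_measure, cost))         # huge positive: act
print(expected_utility(toy_payoff, mugging_measure, cost))         # -5: don't pay
print(expected_utility(toy_payoff, mugging_subjective_prob, cost)) # huge again: the mugging problem
```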
But this also just seems like passing the buck to the subjective probability of a particular model of the universe, which will suffer from the mugging problem as well.
So does thinking about it this way add anything, or is it just more confusing?
You can't pay for things in Utils; you can only pay for them in Opportunities.
This is where Pascal's mugging goes wrong as well: the only reason not to give Pascal's mugger the money is the possibility of an even greater opportunity coming along later, i.e., a mugger who is more credible and/or offers an even greater potential payoff. (And once any mugger offers INFINITE utility, there's only credibility left to increase.)
That doesn't work, because the expected value of things you should actually do, e.g. donating to an effective charity, is far lower than the expected value of a Pascal's mugging.
I expect an FAI to have at least a 10% probability of acquiring infinite computational power. This means donations to MIRI have infinite expected utility.