The hypothesis is not that they exactly cancel the mugging utility, but that the downstream utilities exceed the mugging utility. I was actually thinking that these downstream effects would be much greater than the utility of paying the mugger.
That’s probably true in many cases, but the “mugger” scenario is really designed to test our limits. If 3^^^3 doesn’t work, then probably 3^^^^3 will. To be logically coherent, there has to be some crossover point where the mugger provides exactly enough evidence to decide that yes, it’s worth paying the $5, despite our astoundingly low priors.
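To make the crossover concrete (a minimal sketch; the notation is mine, and I’m treating utility as measured in dollar-equivalents): write $U$ for the promised utility and $p$ for your posterior that the mugger is honest. Then

$$\text{pay the \$5} \iff p \cdot U > 5, \qquad \text{so the crossover posterior is } p^* = 5/U.$$

In odds form, moving from a prior $p_0$ up to $p^*$ requires the mugger’s evidence to carry a Bayes factor of roughly $p^*/p_0 = 5/(U p_0)$ (for small probabilities), which is why the size of the promised $U$ and the smallness of the prior trade off against each other directly.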
The proposed priors have one of two problems (both are illustrated numerically in the sketch after this list):

1. You can get mugged too easily, by your mugger simply being sophisticated enough to pick a high enough number to overwhelm your prior.

2. We’ve got a prior that is highly resistant to mugging but, unfortunately, is also resistant to being convinced by evidence. If there is any positive probability that we really could encounter a Matrix Lord able to do what they claim, who would offer some kind of Pascal’s-mugging-like deal, then there should be some amount of evidence that would convince us to take the deal. We would like the amount of necessary evidence to be within the bounds of what our brains can receive and update on in a lifetime, but that is not necessarily the case with the priors which we know will be able to avoid specious muggings.
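Here is a minimal numeric sketch of both failure modes (the particular priors and numbers are illustrative stand-ins, not a claim about which prior is right):

```python
import math

COST = 5.0  # the $5 handed over; utility measured in dollar-equivalents

def expected_gain(prior, utility):
    """Expected gain from paying: prior * promised utility, minus the $5."""
    return prior * utility - COST

# Problem 1: a prior that shrinks more slowly than the promised utility grows.
# The mugger just names a bigger number until the product overwhelms the prior.
for exponent in (10, 100, 300):
    utility = 10.0 ** exponent
    prior = utility ** -0.9  # decays with the claim, but not fast enough
    print(exponent, expected_gain(prior, utility) > 0)  # True each time: mugged

# Problem 2: a prior that shrinks faster than the promised utility grows
# (here ~ 1/utility^2).  Mugging is blocked at every size of claim, but the
# evidence needed to ever take the deal scales as log2(utility) bits.
utility = 10.0 ** 150
prior = COST / utility ** 2
print(expected_gain(prior, utility) > 0)  # False: no mugging at this size
crossover = COST / utility                # posterior at which paying breaks even
bits_needed = math.log2(crossover / prior)  # == log2(utility)
print(f"bits of evidence needed to reach the crossover: {bits_needed:.0f}")
# ~498 bits here, but for a claim like 3^^^3 the same formula gives a number
# of bits astronomically beyond anything a human could receive in a lifetime.
```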
I’m not actually certain that a prior exists which avoids both of these problems.
I also agree with Eliezer’s general principle that when we see convincing evidence of things that we previously considered effectively impossible (a prior of 10^-googol or so), we need to update the whole map on which that prior was based, not just the specific point. When you watch a person turn into a small cat, either your own sense data or pretty much your whole map of how things work must come into question. You can’t just say “Oh, people can turn into cats” and move on as if that doesn’t affect almost everything you previously thought you knew about how the world worked.
It’s much more likely, based on what I know right now, that I am having an unusually convincing dream or hallucination than that people can turn into cats. And if I manage to collect enough evidence to actually make my probability of “people can turn into cats” higher than “my sensory data is not reliable”, then the whole framework of physics, chemistry, biology, and basic experience which caused me to assign such a low probability to “people can turn into cats” in the first place has to be reconsidered.
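In odds form (a sketch, with my own illustrative numbers): for an observation $O$ of the apparent transformation,

$$\frac{P(\text{cats} \mid O)}{P(\text{dream} \mid O)} \;=\; \frac{P(O \mid \text{cats})}{P(O \mid \text{dream})} \cdot \frac{P(\text{cats})}{P(\text{dream})}.$$

A sufficiently vivid dream or hallucination predicts the sight about as well as a real transformation does, so the likelihood ratio stays near 1 and the posterior is dominated by the prior ratio, something like $10^{-\text{googol}}$ against $10^{-3}$. Overturning “my sensory data is not reliable” therefore takes evidence that discriminates between the two hypotheses, not just more of the same sight; and once it does, the googol-sized prior itself is what has to give.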
That’s probably true in many cases, but the “mugger” scenario is really designed to test our limits. If 3^^^3 doesn’t work, then probably 3^^^^3 will.
The probability that humans will eventually be capable of creating x utility, given that the mugger is capable of creating x utility, probably converges to some constant as x goes to infinity. (Of course, this still isn’t a solution, as expected utility still doesn’t converge.)
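Formalizing that (the notation is mine): write $c(x) = P(\text{humans eventually create } x \text{ utility} \mid \text{the mugger can create } x \text{ utility})$, and suppose $c(x) \to c > 0$ as $x \to \infty$. Then the downstream term scales like $c \cdot x$ alongside the mugger’s $x$, so neither side wins just by naming a bigger number. But the expected utility of the deal,

$$E[U] \;=\; \sum_x P(\text{mugger delivers } x)\, x,$$

still diverges whenever the prior on $x$ falls more slowly than $1/x$.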
That assumes that the number is independent of the prior. I wouldn’t make that assumption.
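One way to cash that out (my formalization, not the commenter’s): if the prior is allowed to depend on the named number, say $P(\text{mugger delivers } x) \le k/x^{1+\varepsilon}$ for some $\varepsilon > 0$, then

$$P(\text{mugger delivers } x) \cdot x \;\le\; k/x^{\varepsilon} \;\to\; 0 \quad \text{as } x \to \infty,$$

so naming a larger number makes the offer less attractive rather than more, and the “pick a bigger number” move from above stops working.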