Regarding the comments about exploding brains, it’s a wonder to me that we are able to think about these issues and not lose our sanity. How is it that a brain evolved for hunting/gathering/socializing is able to consider these problems at all? Not only that, but we seem to have some useful intuitions about these problems. Where on Earth did they come from?
Nick> Does your proposal require that one accepts the SIA?
Yes, but using a complexity-based measure as the anthropic probability measure implies that the SIA’s effect is limited. For example, consider two universes, the first with 1 observer and the second with 2. If all of the observers have the same complexity, you’d assign a higher prior probability (i.e., 2⁄3) to being in the second universe. But if the second universe instead has an infinite number of observers, the sum of their measures can’t exceed the measure of the universe as a whole, so the “presumptuous philosopher” problem is not too bad.
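Here’s a toy numerical sketch in Python of how I read that point; the function name and the specific rule of capping a universe’s total observer measure at its own measure are my own illustrative assumptions, not part of the original proposal.

```python
# Toy sketch: an SIA-style update under a complexity-based measure, where the
# observers in a universe can never carry more total measure than the
# universe itself.

def being_in_probabilities(universes):
    """universes: list of (universe_measure, observer_measures) pairs.
    Returns P(I am in universe i), taken proportional to the total measure
    of the observers it contains, capped at the universe's own measure."""
    totals = [min(sum(obs), u) for u, obs in universes]
    z = sum(totals)
    return [t / z for t in totals]

# Finite case from the comment: equal-measure observers, 1 vs 2 of them,
# gives the familiar 1/3 vs 2/3 split.
print(being_in_probabilities([(0.5, [0.1]), (0.5, [0.1, 0.1])]))
# -> [0.333..., 0.666...]

# "Presumptuous philosopher" case: however many observers the second universe
# holds, their total measure is capped at 0.5, so the update in its favor is
# bounded rather than overwhelming.
print(being_in_probabilities([(0.5, [0.1]), (0.5, [0.01] * 10**6)]))
# -> roughly [0.17, 0.83], not ~[0, 1]
```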
Nick> If I understand your suggestion correctly, you propose that the same anthropic probability measure should also be used as a measure of moral importance.
Yes, in fact I think there are good arguments for this. If you have an anthropic probability measure, you can argue that it should also serve as the measure of moral importance, since from behind the veil of ignorance everyone would prefer that it did. On the other hand, if you have a measure of moral importance, you can argue that for decisions not involving externalities, the globally best outcome is obtained if people use that measure as their anthropic probability measure and simply pursue their own self-interest.
BTW, when using both anthropic reasoning and moral discounting, it’s easy to accidentally apply the same measure twice. For example, suppose two universes each have one observer, but the observer in the second universe has twice the measure of the one in the first. If you’re asked to guess which universe you’re in, with some payoff if you guess right, you don’t want to think “There’s 2⁄3 probability that I’m in the second universe, and the payoff is twice as important if I guess ‘second’, so the expected utility of guessing ‘second’ is 4 times the EU of guessing ‘first’.” Counting the measure only once, guessing ‘second’ is merely twice as good, not four times.
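A quick worked version of that warning in Python; the numbers and variable names are my own, and the “counted once” line assumes the measure is used exactly once, either as the probability or as the moral weight.

```python
# Double-counting example: the observer in universe 2 has twice the measure
# of the observer in universe 1; you win a payoff of 1 for a correct guess.

m1, m2 = 1.0, 2.0                          # observer measures
p1, p2 = m1 / (m1 + m2), m2 / (m1 + m2)    # anthropic probabilities: 1/3, 2/3

# Wrong: use the measure once as a probability and again as a moral weight.
eu_first_wrong, eu_second_wrong = p1 * m1, p2 * m2   # 1/3 vs 4/3 -> ratio 4
# Counted once: use the measure as the probability (or the weight), not both.
eu_first, eu_second = p1 * 1.0, p2 * 1.0             # 1/3 vs 2/3 -> ratio 2

print(eu_second_wrong / eu_first_wrong)   # 4.0
print(eu_second / eu_first)               # 2.0
```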
I think that to avoid this kind of confusion and other anthropic reasoning paradoxes (see http://groups.google.com/group/everything-list/browse_frm/thread/dd21cbec7063215b/), it’s best to consider all decisions and choices from a multiversal objective-deterministic point of view. That is, when you make a decision between choices A and B, you should ask “would I prefer it if everyone in my position (i.e., having the same perceptions and memories as me) in the entire multiverse chose A or B?” and resist the temptation to ask “which universe am I likely to be in?”.
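For what it’s worth, here is a minimal Python sketch of that way of evaluating a choice (the copies, measures, and payoffs are made up for illustration): score each option by the measure-weighted total payoff received by everyone in my position across the multiverse, rather than first asking which universe I am in.

```python
# Hypothetical copies of "someone in my position" across the multiverse:
# (measure, payoff if everyone like me chooses A, payoff if everyone chooses B)
copies = [
    (1.0, 10.0, 0.0),   # copy in universe 1
    (2.0, 0.0, 8.0),    # copy in universe 2, with twice the measure
]

def total_weighted_payoff(option):
    """Measure-weighted payoff summed over all copies, if every copy in my
    position chooses the given option (0 for A, 1 for B)."""
    return sum(measure * payoffs[option] for measure, *payoffs in copies)

score_a = total_weighted_payoff(0)   # 1.0*10 + 2.0*0 = 10
score_b = total_weighted_payoff(1)   # 1.0*0  + 2.0*8 = 16
print("choose", "A" if score_a > score_b else "B")   # -> choose B
```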
But that may not work unless you believe in a Tegmarkian multiverse. If you don’t, you may have to use both anthropic reasoning and moral discounting, being very careful not to double-count.
To be fair, humans are surrounded by thousands of other species that evolved under the same circumstances and can’t consider these problems at all.