I think there are deep philosophical implications to many-worlds theories, including but not limited to quantum many-worlds theories. If there are many worlds, presumably a large number of them must differ in their most obvious meta-characteristics. Some of the meta-characteristics I observe are consequence, complexity, and difficulty (that is, across a wide array of phenomena, harmony is possible but not easy: there is no argument that will convince everyone, there is no FTL, there is a great filter...). I can therefore safely presume that inhabitants of worlds which do not share these meta-characteristics belong to some separate anthropic set, and that for beings in my anthropic set, these characteristics can be taken as moral axioms. I do not argue that they are the source of all moral reasons, except through the difficult mediation of evolution; they are, however, a moral bedrock. In other words, they underdetermine my morals, but they do constrain them.
Did you ever read “A Fire Upon The Deep”? Obviously, it’s shameless space opera, but it’s a good metaphor for what I think our real situation is. The premise is that there is a kind of “IQ limit” that runs from zero at the center of the galaxy to infinite outside it. The outside is the domain of the strong AIs, ineffable to human reason, and we are in the grey zone, where intelligence is possible but AI is not. I think a situation something like this obtains, not over real space, but over the parameter space of multiple worlds. We ARE on the border of God’s Mandelbrot Set, and there is something special about that. If we ever make it to AGI, that is for me not a win condition (or a lose condition) but just a boundary condition: I cannot begin to evaluate the consequences of my actions here and now on the world beyond that boundary, so it is beyond my morals. The specialness of our position, and the fact that a world where we attain AGI is not special in the same way, is for me a consequence of the anthropic principle plus many worlds (quantum and otherwise, as I said).
So for me, many worlds is an argument that we should not be in any all-fired hurry to reach AGI, and that moral action within the context of the world-as-we-know-it is more important.