“Infinite ethics” is surely a non-problem for individuals—since an individual agent can only act locally. Things that are far away are outside the agent’s light cone.
This is an all-possible-worlds-exist philosophy. There are an infinite number of worlds where there are entities which are subjectively identical to you and cognitively similar enough that they will make the same decision you make, for the same reasons. When you make a choice, all those duplicates make the same choice, and there are consequences in an infinity of worlds. So there’s a fuzzy neoplatonic idea according to which you identify yourself with the whole equivalence class of subjective duplicates to which you belong.
But I believe there’s an illusion here: for every individual, the situation described actually reduces to an individual making a decision without knowing which possible world they’re in. There is no sense in which the decision by any one individual actually causes decisions in other worlds. I postulate that there is no decision-theoretic advantage or moral imperative to indulging the neoplatonic perspective, and if you try to extract practical implications from it, you won’t be able to improve on the uncertain-single-world approach.
By hypothesis. There is no evidence for any infinities in nature. Agents need not bother with infinity when making decisions or deciding what the right thing to do is. As, I think, you go on to say.
I agree. I was paraphrasing what ata and Roko were talking about. I think it’s a hypothesis worth considering. There may be a level of enlightenment beyond which one sees that the hypothesis is definitely true, definitely false, definitely undecidable, or definitely irrelevant to decision-making, but I don’t know any of that yet.
There is no evidence for any infinities in nature. Agents need not bother with infinity when making decisions or deciding what the right thing to do is.
I think, again, that we don’t actually know any of that yet. Epistemically, there would appear to be infinitely many possibilities. It may be that a rational agent does need to acknowledge and deal with this fact somehow. For example, maximizing utility in this situation may require infinite sums or integrals of some form (the expected utility of an action being the sum, across all possible worlds, of its expected utility in each such world times the world’s a priori probability). Experience with halting probabilities suggests that such sums may be uncomputable, even supposing you can rationally decide on a model of possibility space and on a prior, and the best you can do may be some finite approximation. But ideally one would want to show that such finite methods really do approximate the unattainable infinite, and in this sense the agent would need to “bother with infinity”, in order to justify the rationality of its procedures.
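The truncation idea above can be made concrete. This is only an illustrative sketch: the enumeration of worlds, the geometric prior, and the toy utility function are all assumptions of mine, not anything proposed in the thread. The one substantive point it demonstrates is that if utilities are bounded and the prior is summable, a finite agent can bound the error of cutting off the infinite sum.

```python
# Sketch: expected utility as an infinite sum over possible worlds,
#   EU(a) = sum_k P(w_k) * U(a, w_k),
# approximated by a finite truncation with a provable tail bound.
# The prior P(w_k) = 2^-(k+1) and the utility function are toy assumptions.

def prior(k):
    """Toy a priori probability of world k; sums to 1 over k = 0, 1, 2, ..."""
    return 2.0 ** -(k + 1)

def utility(action, k):
    """Toy bounded utility (0 or 1) of a boolean `action` in world k."""
    return 1.0 if (k % 2 == 0) == action else 0.0

def expected_utility(action, n_worlds):
    """Finite truncation of the infinite sum over worlds."""
    return sum(prior(k) * utility(action, k) for k in range(n_worlds))

def tail_bound(n_worlds, u_max=1.0):
    """If utility is bounded by u_max, the neglected tail is at most
    u_max * sum_{k >= n} 2^-(k+1) = u_max * 2^-n."""
    return u_max * 2.0 ** -n_worlds
```

Under these toy assumptions the truncated estimate is guaranteed to lie within `tail_bound(n_worlds)` of the true infinite sum, which is one precise sense in which a finite procedure can be justified against the “unattainable infinite”. Of course, the hard part the comment points at is exactly what this sketch assumes away: a rationally chosen enumeration of worlds and a computable, summable prior.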
As for evidence of infinities within this world, observationally we can only see a finite distance in space and time, but if the rationally preferred model of the world contains infinities, then there is such evidence. I see this as primarily a quantum gravity question and so it’s in the process of being answered (by the ongoing, mostly deductive examination of the various available models). If it turns out, let us say, that gravity and quantum mechanics imply string theory, and string theory implies eternal inflation, then you would have a temporal infinity implied by the finite physical evidence.
There’s no temporal infinity without spatial infinity (instead you typically get eternal return). And the evidence for spatial infinity is incredibly weak, since we can only see the nearest 13 billion light years, which is practically nothing compared to infinity.
The situation is that we don’t know with much certainty whether the world is finite or infinite. However, if an ethical system suggests people behave very differently here and now depending on the outcome of such abstract metaphysics, I think that ethical system is probably screwed.
Re: “There are an infinite number of worlds”
That is something the MP’s preceding sentence seems to indicate.