Hi David—it’s true that I don’t engage your paper (there’s a large literature on infinite ethics, and the piece leaves out a lot of it—and I’m also not sure I had seen your paper at the time I was writing), but re: your comments here on the ethical relevance of infinities: I discuss the fact that the affectable universe is probably finite—“current science suggests that our causal influence is made finite by things like lightspeed and entropy”—in section 1 of the essay (paragraph 5), and argue that infinities are still
relevant in practice due to (i) the remaining probability that current physical theories are wrong about our causal influence (and note also related possibilities like having causal influence on whether you go to an infinite-heaven/hell etc, a la pascal) and (ii) due to the possibility of having infinite acausal influence conditional on various plausible-in-my-view decision theories, and
relevant to ethical theory, even if not to day-to-day decision-making, due to (a) ethical theory generally aspiring to cover various physically impossible cases, and (b) the existence of intuitions about infinite cases (e.g., heaven > hell, pareto, etc) that seem prima facie amenable to standard attempts at systematization.
First, when I said I was frustrated that you didn’t address the paper, I meant that as a personal frustration, not as blame for not engaging, given the vast number of things you could have focused on. I brought it up only because I thought it was relevant for those reading my comment to appreciate that this was a personal motive, not a dispassionate evaluation; I don’t want to make it sound like more than that.
However, to defend my criticism: for decision-makers with finite computational power, bounded time, and limited ability to consider issues, I think there is a strong case for dismissing arguments based on their plausible relevance. There are, obviously, a huge (but, to be fair, effectively finite once weighted by a simplicity prior) number of potential philosophies or objections, and less time to make decisions than would be required to evaluate each. So I think we need a positive case for relevance, and I have two reasons, or partial responses to the above, that explain why I don’t think there is such a case.
First, there are (to simplify greatly) two competing reasons for a theory to have come to our attention enough to be considered: plausibility, or interestingness. If a possibility seems very cool, and leads to lots of academic papers and cool-sounding ideas, the burden of proof for plausibility is, ceteris paribus, higher.
This is not to say that we should strongly dismiss these questions, but it is a reason to ask for more than just a non-zero possibility that physics is wrong. (And in the paper, we argue that “physics is wrong” still doesn’t imply that the bounds we know of are likely to be revoked—most changes to physics that have occurred have constrained things more, not less.)
Second, I’m unsure why I should care that I have intuitions that can be extended to implausible cases. Justifying that extension via intuitions built on constructed cases where it seems to work gets things exactly backwards.
As an explanation for why I think this is confused: Stuart Armstrong made the case that people fall prey to a failure mode in reasoning that parallels one we see in ML, which I’ll refer to as premature rulemaking. In ML, it shows up when a model sees a small sample, builds a classification rule from it, and applies that rule out of sample: every small black fuzzy object it has seen is a cat, and it has seen no cats which are large or other colors, so it calls large grey housecats non-cats and small black dogs cats. Even moving on from that point, it is harder to change away from that mode; we can convince it that dogs are a different category, but the base rule gets extended by default to other cases, so tigers are not cats, black mice are, and so on. Once we set up the problem as a classifier, trying to find rules, we spend our time building systems rather than judging cases on their merits. (The alternative he proposes in this context, IIRC, is something like doing grouping rather than building rules, and evaluating distance from the cluster rather than classifying.)
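To make the contrast concrete, here is a minimal toy sketch of my own (not anything from Armstrong’s writeup): a hard rule learned from a handful of small black cats, versus simply reporting how far a new case sits from the examples we actually have. The two-feature encoding, the example points, and the thresholds are all invented for illustration.

```python
# Toy illustration: a hard classification rule learned from a tiny sample,
# versus reporting distance from the cluster of examples we trust.
# Features: (size, darkness), both on an invented 0..1 scale.

cats_seen = [(0.2, 0.9), (0.25, 0.85), (0.3, 0.95)]  # small, black, fuzzy cats

def rule_is_cat(size, darkness):
    """Premature rule: 'small and dark means cat', applied far out of sample."""
    return size < 0.4 and darkness > 0.7

def distance_from_cluster(size, darkness):
    """Alternative: report how far a case is from the examples we have,
    rather than forcing a confident in/out verdict."""
    centroid_size = sum(s for s, _ in cats_seen) / len(cats_seen)
    centroid_dark = sum(d for _, d in cats_seen) / len(cats_seen)
    return ((size - centroid_size) ** 2 + (darkness - centroid_dark) ** 2) ** 0.5

# A large grey housecat: the rule confidently says "not a cat";
# the distance view just says "far from anything we've seen".
print(rule_is_cat(0.8, 0.3))            # False (confident, and wrong)
print(distance_from_cluster(0.8, 0.3))  # ~0.81: flag as out of sample

# A small black dog: the rule confidently says "cat";
# the distance view only reports proximity to the training cluster.
print(rule_is_cat(0.2, 0.9))            # True (confident, and wrong)
print(distance_from_cluster(0.2, 0.9))  # ~0.05: looks in-sample, because these
                                        # two features can't distinguish it
```

The point of the second approach is that it never issues a confident verdict outside the region the sample covers; at best it tells you how far out of sample a case is, which is the judgment the rule-based setup quietly skips.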
The parallel here is that people find utilitarianism / deontology / maximizing complexity plausible in some set of cases, and jump to using it as a general rule. This is the premature rulemaking. People then try to modify the theory to fit a growing number of cases, ignoring that those cases are way out of sample for their intuitions. Intuitions then get reified, and people self-justify their newly reified intuitions as obvious. (Some evidence for this: people have far more strongly contrasting intuitions in less plausible constructed cases.)
This has gone somewhat off track, I think, but in short: I’m deeply unsure why we should spend time on infinite ethics, I have a theory for why people do so anyway, and I would want to see strong evidence for focusing on the topic before considering it useful, as opposed to merely fun.