First, I said I was frustrated that you didn’t address the paper. I meant that as a personal frustration, not as blame for not engaging, given the vast number of things you could have focused on. I brought it up only because I don’t want it to read as a claim about relevance; I want those reading my comment to appreciate that this was a personal motive, not a dispassionate evaluation.
However, to defend my criticism: for decision-makers with finite computational power, bounded time, and limited ability to consider issues, I think there’s a strong case for dismissing arguments on the basis of plausible relevance. There is, obviously, a huge (though, to be fair, when weighted by a simplicity prior, effectively finite) number of potential philosophies or objections, and less time to make decisions than would be required to evaluate each. So I think we need a case for relevance, and I have two reasons / partial responses to the above that explain why I don’t think there is such a case.
There are (to simplify greatly) two competing reasons for a theory to have come to our attention enough to be considered: plausibility, or interestingness. If a possibility seems very cool, and leads to lots of academic papers and cool-sounding ideas, the burden of proof for plausibility is, ceteris paribus, higher.
This is not to say that we should strongly dismiss these questions, but it is a reason to ask for more than just a non-zero possibility that physics is wrong. (And in the paper, we argue that “physics is wrong” still doesn’t imply that the bounds we know of are likely to be revoked; most changes to physics that have occurred have constrained things more, not less.)
I’m unsure why I should care that I have intuitions which can be extended to implausible cases, and justifying that extension via intuitions built on constructed cases where they happen to work seems exactly backwards.
To explain why I think this is confused: Stuart Armstrong made a case that people fall prey to a failure mode in reasoning that parallels one we see in ML, which I’ll call premature rulemaking. In ML, we see it when a model is given a small sample, builds a classification rule from it, and applies that rule out of sample: all the small black fuzzy objects it has seen are cats, and it has seen no cats which are large or other colors, so it calls large grey housecats non-cats and small black dogs cats. Even moving on from that point, it is harder to move away from that mode; we can convince it that dogs are a different category, but the base rule gets extended by default to other cases, so tigers are not cats, black mice are, and so on. Once we set up the problem as a classifier, trying to find rules, we spend time building systems rather than judging cases on their merits. (The alternative he proposes in this context, IIRC, is something like doing grouping rather than building rules, and evaluating distance from the cluster rather than classifying.)
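To make the analogy concrete, here is a minimal toy sketch (my own construction, not anything from Armstrong’s writing; the features, thresholds, and numbers are invented purely for illustration) of a prematurely induced rule misfiring out of sample, contrasted with simply checking how far a new case sits from the cluster of cases the rule was built on:

```python
import numpy as np

# Features: [size_cm, darkness (0 = white .. 1 = black), fuzziness (0 .. 1)]
cats_seen = np.array([   # the only cats observed: small, black, fuzzy
    [22.0, 0.90, 0.90],
    [25.0, 0.95, 0.80],
    [20.0, 0.85, 0.85],
])

def rule_is_cat(x):
    """The prematurely induced rule: small AND dark AND fuzzy => cat."""
    size, darkness, fuzziness = x
    return size < 30 and darkness > 0.7 and fuzziness > 0.7

# Cluster-based alternative: how far is a new case from the observed cats,
# measured in units of the largest in-sample deviation from their centroid?
centroid = cats_seen.mean(axis=0)
spread = np.linalg.norm(cats_seen - centroid, axis=1).max()

def cluster_distance(x):
    return np.linalg.norm(np.asarray(x) - centroid) / spread

test_cases = {
    "large grey housecat": [45.0, 0.50, 0.80],
    "small black dog":     [24.0, 0.90, 0.90],
    "tiger":               [250.0, 0.40, 0.60],
}

for name, x in test_cases.items():
    print(f"{name:20s}  rule says cat: {rule_is_cat(x)!s:5s}  "
          f"distance from observed cluster: {cluster_distance(x):6.1f}")
```

The rule confidently misclassifies both the housecat and the dog; the distance check at least flags the housecat and the tiger as far outside anything it was built on (where the rule shouldn’t be trusted), while the dog sits right inside the cluster, which is exactly the kind of case that needs to be judged on its merits rather than by the rule.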
The parallel here is that people find utilitarianism / deontology / maximizing complexity plausible in a set of cases, and jump to using it as a rule. This is the premature rulemaking. People then try to modify the theory to fit a growing number of cases, ignoring that those cases are way out of sample for their intuitions. The intuitions then get reified, and people self-justify their newly reified intuitions as obvious. (Some evidence for this: people have far more strongly contrasting intuitions in less plausible constructed cases.)
This has gone somewhat off track, I think, but in short: I’m deeply unsure why we should spend time on infinite ethics, I have a theory for why people do, and I would want to see strong evidence for focusing on the topic before considering it useful, as opposed to fun.