… are we asking what our feeling would be when confronted with a real world in which somehow we could see this happening? As if we could ever see 3^^^3 of anything? Because that seems to be the closest I can get to defining right and wrong.
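(As an aside on the notation: 3^^^3 is Knuth's up-arrow notation. The short Python sketch below is purely a hypothetical illustration, not part of the original discussion; the function name up_arrow is my own. It is only meant to show why "seeing 3^^^3 of anything" is out of the question.)

```python
# Minimal sketch of Knuth's up-arrow notation (illustrative only).
def up_arrow(a, n, b):
    """Compute a with n up-arrows applied to b; n = 1 is ordinary exponentiation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987 -- still small enough to compute:
print(up_arrow(3, 2, 3))

# 3^^^3 = 3^^(3^^3): a power tower of 3s that is 7,625,597,484,987 levels tall.
# The call below is shown only to spell out the definition; it could never
# actually finish, which is the point about "seeing 3^^^3 of anything".
# up_arrow(3, 3, 3)
```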
As a simple improvement, you could approximate the (normatively) “correct answer” as the decision reached as a result of spending a billion years developing moral theory, formal decision-making algorithms and social institutions supporting the research process, in the hypothetical where nothing goes wrong during this project. (Then, whatever you should actually decide in the hypothetical where you are confronted with the problem and won’t spend a billion years might be formalized as an attempt to approximate that (explicitly inaccessible) approximation, and would itself be expected to be a worse approximation.)
As a simple improvement, you could approximate the (normatively) “correct answer” as the decision reached as a result of spending a billion years developing moral theory, formal decision-making algorithms and social institutions supporting the research process, in the hypothetical where nothing goes wrong during this project.
Presumably this requires that a billion years of developing theory, with nothing going wrong along the way, would converge on a single answer. That is, a million identical starting points like our world would need to end up after that billion years within a very small volume of morality-space, in order for this process to be meaningful.
Considering that the 4 billion years (or so) spent by evolution on this project have produced quite divergent moralities, not only across different species but also, with quite a bit of variation, within our own species, that convergence seems exceedingly unlikely to me. Do you have any reason to think that there is “an answer” to what would happen after a billion years of good progress?
Suppose another billion years of evolution and natural selection. If the human species splits into multiple species, or social intelligences arise among other species, one would (or at least I would, in the absence of a reason not to) expect each species to have a different morality. Further, since it is evolution that has gotten us to the morality we have, presumably another billion years of evolution would have to be considered as a “nothing going wrong” method. Perhaps some of these evolved systems will address the 3^^^3 dust specks vs. torture question better than our current system does, but is there any reason to believe that all of them will arrive at the same answer? Or, I should say, any rational reason, since faith in a higher morality of which we are a pale echo would constitute a reason, but not a rational one.
That is, a million identical starting points like our world would need to end up after that billion years within a very small volume of morality-space, in order for this process to be meaningful.
(The hypothetical I posited doesn’t start with our world, but with an artificial isolated/abstract “research project” whose only goal is to answer that one question.) In any case, determinism is not particularly important as long as the expected quality across the various possible outcomes is high. For example, for a goal of producing a good movie to be meaningful, it is not necessary to demand that only very similar movies can be produced.
Considering that the 4 billion years (or so) spent by evolution
Evolution is irrelevant, not analogous to intelligent minds purposefully designing things.
presumably another billion years of evolution would have to be considered as a “nothing going wrong” method
No; see “value drift” and fragility of value.