The connotations of “objective” (also discussed in the other replies in this thread) don’t seem relevant to the question about the meaning of “correct” morality. Suppose we are considering a process for producing an idealized preference that gives different results for different people, and also nondeterministically gives one of many possible results for each person. Even in this case, we can still ask about the expected ranking of the consequences of alternative actions according to this idealization process, as applied to a given person.
Should this complicated question be asked? If the idealization process is such that you expect it to produce a better ranking of outcomes than you can produce given only a little time, then it’s better to base your actions on what the idealization process could tell you than on your own guess (e.g. your desires). To the extent that your own guess deviates from your expectation of the idealization process, basing your actions on your guess (your desires) is an incorrect decision.
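One way to write down this claim explicitly (the symbols D, V, and outcome are my own illustrative notation, not anything from the thread): if D(you) is the distribution over value functions that the nondeterministic idealization process could output when applied to you, then preferring action a to action b amounts to comparing the expected idealized evaluations of their consequences:

```latex
% A minimal sketch of the decision rule described above; the notation is
% introduced only for illustration. D(you) is the (possibly nondeterministic)
% idealization process applied to a person, V is a value function it might
% output, and outcome(a) denotes the consequences of taking action a.
\[
  a \succ b
  \quad \iff \quad
  \mathbb{E}_{V \sim D(\mathrm{you})}\big[\, V(\mathrm{outcome}(a)) \,\big]
  \;>\;
  \mathbb{E}_{V \sim D(\mathrm{you})}\big[\, V(\mathrm{outcome}(b)) \,\big].
\]
```

On this reading, a guess that orders actions differently from this expectation is, by that very fact, an incorrect basis for action, which is all the last sentence above is saying.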
A standard example of an idealization dynamic is what you would yourself decide given much more time and resources. If you anticipate that this dynamic can nondeterministically produce widely contradictory answers, this too will be taken into account by the dynamic itself, as the abstract you-with-more-time starts to contemplate the question. The resulting meta-question, of whether taking the diverging future decisions into account produces worse decisions, can be attacked in the same manner, and so on. If done right, such a process can reliably give a better result than you-with-little-time can, because any problem with it that you could anticipate will be taken into account.
A hypothetical idealization dynamic may not be helpful in actually making decisions, but its theoretical role is that it provides a possible specification of the “territory” that moral reasoning should explore, a criterion of correctness. It is a hard-to-use criterion of correctness; you might need to build a FAI to actually access it, but at least it’s meaningful, and it illustrates the way in which many ways of thinking about morality are confused.
(As an analogy, we might posit the problem of drawing an accurate map of the surface of Pluto. My argument amounts to pointing out that Pluto can actually be located in the world, even if we don’t have much information about the details of its surface and won’t be able to access it without building a spacecraft. Given that there is actual territory to the question of the surface of Pluto, many intuition-backed assertions about it can already be said to be incorrect (as an antiprediction against something unfounded), even if there is no concrete knowledge about what the correct assertions are. “Subjectivity” may be translated as different people caring about the surfaces of different celestial bodies, but all of them can be incorrect in their respective detailed/confident claims, because none of them have actually observed imagery from a spacecraft.)
A hypothetical idealization dynamic may not be helpful in actually making decisions, but its theoretical role is that it provides a possible specification of the “territory” that moral reasoning should explore, a criterion of correctness.
I think that such a specification probably isn’t the correct specification of the territory that moral reasoning should explore. By analogy, it’s like specifying the territory for mathematical reasoning by idealizing human mathematical reasoning, or specifying the territory for scientific reasoning by idealizing human scientific reasoning (as opposed to figuring out how to refer directly to some external reality). It seems like a step that’s generally tempting to take when you’re able to reason informally (to some extent) about something but don’t know how to specify the territory; I would prefer to just say that we don’t know how to specify the territory yet. But...
It is a hard-to-use criterion of correctness; you might need to build a FAI to actually access it, but at least it’s meaningful, and it illustrates the way in which many ways of thinking about morality are confused.
Maybe I’m underestimating the utility of having a specification that’s “at least meaningful” even if it’s not necessarily correct. (I don’t mind “hard-to-use” so much.) Can you give some examples of how it illustrates the way in which many ways of thinking about morality are confused?