The word “morality” needs to be made more specific for this discussion. One of the things you seem to be talking about is the mental behavior that produces value judgments or their justifications. It’s something human brains do, and we can in principle study this human activity systematically and in detail, or abstractly describe humans as brain-activity algorithms and study those algorithms. This characterization doesn’t seem particularly interesting: you could describe mathematicians in the same way, but that would be nowhere close to an efficient route to learning mathematics or to describing what mathematics is.
“Logic” and “mathematics” are also somewhat vague in this context. In one sense, “mathematics” is a way of considering things that can be applied to anything at all, which makes the characterization empty of content. In another sense, it’s the study of the kinds of objects that mathematicians typically study, but in this sense it probably won’t cover things like the activity of human brains or particular physical universes. “Logic” is more specific: it’s a particular way of representing and processing mathematical ideas. It allows describing the things you are talking about and obtaining new information about them that wasn’t explicit in the original description.
Morality in the FAI-relevant sense is a specification of what to do with the world, and as such it isn’t concerned with human cognition. The question of the nature of morality in this sense is a question about ways of specifying what to do with the world. Such a specification would need to do at least two things: (1) it needs to be given with much less explicit detail than what can be extracted from it when decisions about novel situations have to be made, which suggests that the study of logic might be relevant (a toy sketch of this point follows), and (2) it needs to be related to the world, which suggests that the study of physics might be relevant.
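To make point (1) concrete, here is a minimal sketch in Python; the predicates and rules are invented purely for illustration, not taken from any actual proposal. A handful of explicitly given rules determines answers to questions the rules never list, the way a compact axiomatization determines infinitely many theorems.

```python
# Hypothetical illustration: a compact specification (a few rules) settles
# a "novel" question that no single rule or fact states explicitly.

RULES = [
    # (premises, conclusion): if all premises are derived, derive the conclusion.
    ({"person(alice)"}, "moral_patient(alice)"),
    ({"moral_patient(alice)", "action_harms(alice)"}, "impermissible(action)"),
]

def consequences(facts, rules):
    """Forward-chain until no rule yields anything new."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Two explicit facts plus two rules answer a question neither fact mentions.
facts = {"person(alice)", "action_harms(alice)"}
print("impermissible(action)" in consequences(facts, RULES))  # True
```

The specification here is far smaller than the set of judgments it can yield; extracting those judgments is an exercise in inference, which is why logic looks relevant to point (1).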
This question about the nature of morality is separate from the question of how to pinpoint the right specification of morality to use in an FAI, out of all possible specifications. The difficulty of finding the right morality seems mostly unrelated to the difficulty of describing what kind of thing morality is. If I put a note with a number written on it in a box, it might be perfectly accurate to say that the box contains a number, even if it’s impossible to say precisely what that number is, and even if people aren’t able to construct any interesting models of the unknown number.
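The box analogy has a familiar programming analogue: knowing a value’s type without knowing the value. A hypothetical sketch, again purely illustrative:

```python
import random

class SealedBox:
    """A box that verifiably contains an int, without saying which one."""
    def __init__(self):
        self._n = random.randrange(10**9)  # the hidden number on the note

    def kind(self) -> str:
        # The *kind* of the contents can be reported accurately...
        return type(self._n).__name__

box = SealedBox()
print(box.kind())  # "int": what kind of thing the box contains is known
                   # exactly, even though which int it is stays hidden.
```

Statements about the kind of contents can be true and useful even when the contents themselves remain out of reach, which is the sense in which characterizing morality and pinpointing it are separate problems.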