That’s because morality is a property of a cognitive agent, not a holistic property of the agent and its environment.
I don’t understand this sentence. Morality is a property of a system that can be explained in terms of its parts. A cognitive agent is also a system of parts, parts which on their own do not exhibit morality.
If something is judged to be beautiful, then the pattern that identifies beauty is in the mind of the agent and exhibited by the object that is deemed beautiful. If the agent ceases to be, the beautiful object still exhibits the same pattern. Likewise, if a human loses the ability to proclaim that the object is beautiful, it is still beautiful. If you kept removing certain brain areas, or one neuron at a time, at what point would beauty cease to exist?
I don’t understand this sentence. Morality is a property of a system that can be explained in terms of its parts. A cognitive agent is also a system of parts, parts which on their own do not exhibit morality.
I meant that we attribute morality to an agent. Suppose agent A1 makes a decision in environment E1 that I approve of morally, based on value set V. You can’t come up with another environment E2 such that, if A1 were in environment E2 and made the same decision using the same mental steps and exactly the same mental representations, I would say it was immoral for A1 in environment E2 according to value set V.
You can easily come up with an environment E2 where the outcomes of A1's actions are bad. If you change the environment enough, you can come up with an E2 where A1's values consistently lead to bad outcomes, and so A1 “should” change its values (for some complicated and confusing value of “should”). But if we’re judging the morality of A1's behavior according to a constant set of values, then properties of the environment which are unknown to A1 will have no impact on our (or at least my) judgement of whether A1's decision was moral.
A simpler way of saying all this is: Information unknown to agent A has no impact on our judgement of whether A’s actions are moral.
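One way to sketch this invariance a bit more formally (the symbols are my own shorthand, not anything from the comments above: write s_A(E) for A's full internal state, i.e. its values, beliefs, and reasoning, when placed in environment E, and J_V for the moral judgement relative to value set V):

$$ s_A(E_1) = s_A(E_2) \;\Longrightarrow\; J_V(A, E_1) = J_V(A, E_2). $$

In other words, the judgement factors through the agent's internal state, $J_V(A, E) = \tilde{J}_V\!\big(s_A(E)\big)$, so any fact about the environment that leaves s_A unchanged (in particular, anything A does not know) cannot change the verdict.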
If something is judged to be beautiful, then the pattern that identifies beauty is in the mind of the agent and exhibited by the object that is deemed beautiful. If the agent ceases to be, the beautiful object still exhibits the same pattern. Likewise, if a human loses the ability to proclaim that the object is beautiful, it is still beautiful. If you kept removing certain brain areas, or one neuron at a time, at what point would beauty cease to exist?
This is a tricky problem. Is morality, like beauty, something that exists in the mind of the beholder? Like aesthetic judgements, it exists relative to a set of values, so probably yes.