Why stop at connotation and signalling? If there is a non-empty set of preferences whose satisfaction is inclined to lead to conflict, and a non-empty set of preferences that can be satisfied without conflict, then “morally relevant preference” can denote the members of the first set...which is not identical to the set of all preferences.
For any such preference, you can immediately provide a utility function such that the corresponding agent would be very unhappy about that preference, and would give its life to prevent it.
Or do you mean “a set of preferences the implementation of which would on balance benefit the largest number of agents the most”? That would change as the set of agents changes, so does the “correct” morality change too, then?
Also, why should I or anyone else particularly care about such preferences (however you define them), especially as the “on average” doesn’t benefit me? Is it because, evolutionarily speaking, that’s what evolved? What our mirror neurons lead us towards? Wouldn’t that just be a case of the naturalistic fallacy?
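A toy sketch of the partition being debated here may help, with made-up agents, preference labels, and utility numbers (all hypothetical, purely for illustration): a preference counts as “conflict-prone” if some existing agent assigns negative utility to its satisfaction, and, as the reply above notes, any preference can be pushed into that set by positing an agent whose utility function penalises it heavily enough.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    # utility this agent assigns to each preference's satisfaction; 0.0 = indifferent
    utilities: dict = field(default_factory=dict)

    def utility_of(self, preference: str) -> float:
        return self.utilities.get(preference, 0.0)

def conflict_prone(preference: str, agents: list) -> bool:
    """A preference is conflict-prone if satisfying it lowers some agent's utility."""
    return any(a.utility_of(preference) < 0 for a in agents)

# Hypothetical agents and numbers, invented for this sketch.
alice = Agent("Alice", {"play_loud_music": +1, "steal_bikes": +5})
bob = Agent("Bob", {"steal_bikes": -10})  # Bob owns a bike

print(conflict_prone("steal_bikes", [alice, bob]))      # True: the "morally relevant" set
print(conflict_prone("play_loud_music", [alice, bob]))  # False: satisfiable without conflict

# The rejoinder: for any preference we can posit an agent whose utility function
# makes it conflict-prone, so the partition depends on which agents exist.
grinch = Agent("Grinch", {"play_loud_music": -1000})
print(conflict_prone("play_loud_music", [alice, bob, grinch]))  # True
```

On this toy model, membership in the “morally relevant” set is relative to which agents happen to exist, which is exactly the point at issue in the exchange below.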
For any such preference, you can immediately provide a utility function such that the corresponding agent would be very unhappy about that preference
Sure. So what? Kids don’t like teachers and criminals don’t like the police, but they can’t object to them, because “entity X is stopping me from doing bad things and making me do good things” is no (rational, adult) objection.
Also, why should I or anyone else particularly care about such preferences (however you define them), especially as the “on average” doesn’t benefit me?
If being moral increases your utility, it increases your utility—what other sense of “benefitting me” is there?
If being moral increases your utility, it increases your utility—what other sense of “benefitting me” is there?
If utility is the satisfaction of preferences, and you can have preferences that don’t benefit you (such as doing heroin), increasing your utility doesn’t necessarily benefit you.
If you can get utility out of paperclips, why can’t you get it out of heroin? You’re surely not saying that there is some sort of Objective utility that everyone ought to have in their UFs?
You can get utility out of heroin if you prefer to use it, which is an example of “benefiting me” and utility not being synonymous. I don’t think there’s any objective utility function for all conceivable agents, but as you get more specific in the kinds of agents you consider (e.g. humans), there are commonalities in their utility functions, due to human nature. Also, there are sometimes inconsistencies between (for lack of better terminology) what people prefer and what they really prefer: that is, people can act, and prefer to act, in ways such that, had they acted differently, they would have preferred the different act.
(Kids—teachers), (criminals—police), so is “morally correct” defined by the most powerful agents, then?
Adult, rational objections are objections that other agents might feel impelled to do something about, and so are not just based on “I don’t like it”. “I don’t like it” is no objection to “you should do your homework”, etc.
If being moral increases your utility (...)
And if being moral (whatever it may mean) does not?
Then you would belong to the set of Immoral Agents, AKA Bad People.
“You should do your homework (… because it is in your own long-term best interest, you just can’t see that yet)” is in the interest of the kid, cf. an FAI telling you to do an action because it is in your interest. “You should jump out that window (… because it amuses me / because I call that morally good)” is not in your interest, you should not do that. In such cases, “I don’t like that” is the most pertinent objection and can stand all on its own.
Then you would belong to the set of Immoral Agents, AKA Bad People.
Boo bad people! What if we encountered aliens with “immoral” preferences?