If morality is a thing we have some reason to be interested in and care about, it’s going to have to be grounded in our preferences.
To some extent. Minimally, it can be grounded in our preference not to be punished. Less minimally, but not maximally, it can be grounded in negative preferences like "I don't want to be killed" without being grounded in positive preferences like "I prefer Tutti Frutti". In either case, you don't need a detailed picture of human preference to solve morality unless you have first shown that all preferences are relevant.