I’m a moral anti-realist. I don’t see a justification for S. If there are facts about “how good or bad things are, from the perspective of the agent” it seems like those facts, for humans, are often facts about the ‘real world’. I also don’t much see what this has to do with moral realism.
Regarding objective utility: are you just talking about adding up utilities of all agent-like things? I suppose you could call such a figure “objective utility” but that doesn’t mean such a figure is of any moral importance. I doubt I would care much about it.
This is related to moral realism in that I suspect moral realists would be more likely to accept S, and S arguably provides some moral statements that are true. But it’s mainly just something I was thinking about while thinking about moral realism.
I don’t really know exactly what I mean by “objective utility.” I’m just claiming that if such a thing exists, or makes sense to talk about, it can only depend on the states of individual minds, since each mind’s utility can only depend on the state of that mind, and nothing outside the utility of minds can be ethically relevant.
I’m a moral realist and I find your claim nearly as absurd as asserting that 2+2=3, and I suspect nearly all moral realists would share my sentiment (even if they wouldn’t express it quite as strongly).