All I’m saying is that there needs to be an intelligence, some value-having agent or entity, in order for actions to be judged. If there is no intelligence, there are no values.
Judging requires an agent. But values do not. They just require an object capable of representing information. The universe could have values built into it even without an intelligence to judge by them. (Completely irrelevant observation.)
I see what you mean there, but without intelligence those values would be just static information. I don’t see how the moral realist’s conception of objective morality can make any sense without an intelligent agent.
I suppose, in connection with your point about “subjective objectivity” a moment ago, I can see how any set of values can be said to “exist” in the sense that one could measure reality against them.
Edit: That doesn’t seem to change anything ethically though. We can call it objective if we like, but to choose which of those we call “right” or “moral” depends entirely on the values and preferences of the querying agent.
And if someone changes their values or preferences as a result of exhortation or self-reflection... what do values and preferences then depend on?
The physical change in a mind over time, i.e. cognition.
If someone chooses, or would choose, to adopt a new value or preference, they do so by referring to their existing value / preference network.
I’ve considered this idea before, but I can’t imagine what if anything it would actually entail.
In fact, judging only requires an optimization process. Not all optimization processes are agents or intelligent.
It doesn’t even need to be an optimization process. Just a process.