There’s a counterargument-template which roughly says “Suppose the ground-truth source of morality is X. If X says that it’s good to torture babies (not in exchange for something else valuable, just good in its own right), would you then accept that truth and spend your resources to torture babies? Does X saying it’s good actually make it good?”
I’m not sure if I’m able to properly articulate my thoughts on this but I’d be interested to know if it’s understandable and where it might fit. Sorry if I repeat myself.
From my perspective, it’s like applying a similar template to verify or refute the cogito.
I know consciousness exists because I’m conscious of it. If you asked me if I’d accept the truth that I’m not conscious, supposing this were the result of the cogito, I’d consider that question incoherent.
If someone concluded that they’re not conscious, by leveraging consciousness to assess whether they’re conscious, then I could only conclude that they misunderstand consciousness.
My version of moral realism would be similar. The existence of positive and negative moral value is effectively self-evident to all beings affected by such values.
To me, saying:
“What if the ground truth of morality is that (all else equal) an instance of suffering is preferable to its absence?”
is like saying:
“what if being conscious of one’s own experience isn’t necessarily evidence for consciousness.”
I actually don’t think this is a statement of moral realism; I think it’s a statement of moral nonrealism. Roughly speaking, you’re saying that the ground-truth source of values is the self-evidence of those values to agents holding them. If some other agents hold some other values, then those other values can presumably seem just as self-evident to those other agents. (And of course we humans would then say that those other agents are immoral.)
This all sounds functionally-identical to moral nonrealism. In particular, it gives us no reason at all to expect some alien intelligence or AI to converge to similar values to humans, and it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.
I appreciate your input; these are my first two comments here, so apologies if I’m out of line at all.
>Roughly speaking, you’re saying that the ground-truth source of values is the self-evidence of those values to agents holding them.
In the same way that the ground-truth proof for the existence of conscious experience comes from conscious experience. This doesn’t imply that consciousness is any less real, even if it means one agent can’t fully assess the “realness” of another agent’s claims to be experiencing consciousness. Agents can also be mistaken about the self-evident nature or scope of certain things relevant to consciousness, and other agents can justifiably reject the inherent validity of those claims. But neither point gives us reason to doubt that the existence of consciousness can be arrived at self-evidently.
For example, someone might suggest that it is self-evident that a particular course of events occurred because they have a clear memory of it happening. Obviously they’re wrong to call that self-evident, and you could justifiably dismiss their level of confidence.
Similarly, I’m not suggesting that any given moral value held to be self-evident should be considered as such, just that the realness of morality is arrived at self-evidently.
I realise that probably makes it sound like I’m trying to rationalise attributing awareness of moral reality to some enlightened subset whom I happen to agree with. But I’m suggesting there’s a common denominator which all morally relevant agents are inherently cognizant of: I think experiencing suffering is sufficient evidence for the existence of real moral truth value.
If an alien intelligence claimed to prefer to experience suffering on net, I would take that to be a faulty translation or a deception, in the same sense as if an alien intelligence claimed to exhibit a variety of consciousness that precluded experiential phenomena.
>it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.
Does moral realism necessarily imply that a sufficiently intelligent system can bootstrap moral knowledge without evidence derived via conscious agents? That isn’t obvious to me.
In this debate, “real” means objective, which means something like independent of observers. Consciousness is dependent on you observing it, and the idea that you could be conscious without observing it seems incoherent.
The moral realist position is that it’s coherent to say that there are things that have moral value even if there’s no observer that judges them to have moral value.
>I’m suggesting there’s a common denominator which all morally relevant agents are inherently cognizant of.
This naturally raises the question of whether people who don’t agree with you are not moral agents or are somehow so confused or deceitful that they have abandoned their inherent truth. I’ve heard the second version stated seriously in my Bible-belt childhood; it didn’t impress me then. The first just seems … odd (and also raises the question of whether the non-morally-relevant will eventually outcompete the moral, leading to their extinction).
Any position claiming that everyone, deep down, agrees tends to founder on the observation that we simply don’t—or to seem utterly banal (because everyone agrees with it).
This naturally raises the question of whether people who don’t agree with you are not moral agents, or are somehow so confused or deceitful that they have abandoned their inherent truth. I heard the second version stated seriously in my Bible-belt childhood; it didn’t impress me then. The first just seems … odd (and also raises the question of whether the non-morally-relevant will eventually outcompete the moral, leading to their extinction).
Any position claiming that everyone, deep down, agrees tends to founder on the observation that we simply don’t—or to seem utterly banal (because everyone agrees with it).