I appreciate your input; these are my first two comments here, so apologies if I'm out of line at all.
>Roughly speaking, you’re saying that the ground-truth source of values is the self-evidence of those values to agents holding them.
In the same way that the ground-truth proof for the existence of conscious experience comes from conscious experience. This doesn't imply that consciousness is any less real, even if it means that it isn't possible for one agent to entirely assess the "realness" of another agent's claims to be experiencing consciousness. Agents can also be mistaken about the self-evident nature/scope of certain things relevant to consciousness, and other agents can justifiably reject the inherent validity of those claims; however, those points don't give reason to doubt that the existence of consciousness can be arrived at self-evidently.
For example, someone might suggest that it is self-evident that a particular course of events occurred because they have a clear memory of it happening. Obviously they're wrong to call that self-evident, and you could justifiably dismiss their level of confidence.
Similarly, I'm not suggesting that any given moral value held to be self-evident should be considered as such, just that the realness of morality is arrived at self-evidently.
I realise that probably makes it sound like I’m trying to rationalise attributing the awareness of moral reality to some enlightened subset who I happen to agree with, but I’m suggesting there’s a common denominator which all morally relevant agents are inherently cognizant of. I think experiencing suffering is sufficient evidence for the existence of real moral truth value.
If an alien intelligence claimed to prefer to experience suffering on net, I think it would be a faulty translation or a deception, in the same sense as if an alien intelligence claimed to exhibit a variety of consciousness that precluded experiential phenomena.
>it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.
Does moral realism necessarily imply that a sufficiently intelligent system can bootstrap moral knowledge without evidence derived via conscious agents? That isn’t obvious to me.
In this debate "real" means objective, which means something like independent of observers. Consciousness is dependent on you observing it, and the idea that you could be conscious without observing it seems incoherent.
The moral realism position is that it's coherent to say that there are things that have moral value even if there's no observer that judges them to have moral value.
>I'm suggesting there's a common denominator which all morally relevant agents are inherently cognizant of.
This naturally raises the question of whether people who don’t agree with you are not moral agents or are somehow so confused or deceitful that they have abandoned their inherent truth. I’ve heard the second version stated seriously in my Bible-belt childhood; it didn’t impress me then. The first just seems … odd (and also raises the question of whether the non-morally-relevant will eventually outcompete the moral, leading to their extinction).
Any position claiming that everyone, deep down, agrees tends to founder on the observation that we simply don’t—or to seem utterly banal (because everyone agrees with it).