I appreciate your input; these are my first two comments here, so apologies if I'm out of line at all.
>Roughly speaking, you’re saying that the ground-truth source of values is the self-evidence of those values to agents holding them.
In the same way that the ground-truth proof for the existence of conscious experience comes from conscious experience. This doesn't imply that consciousness is any less real, even if it means that it isn't possible for one agent to entirely assess the "realness" of another agent's claims to be experiencing consciousness. Agents can also be mistaken about the self-evident nature/scope of certain things relevant to consciousness, and other agents can justifiably reject the inherent validity of those claims; however, those points don't suggest we should doubt that the existence of consciousness can be arrived at self-evidently.
For example, someone might suggest that it is self-evident that a particular course of events occurred because they have a clear memory of it happening. Obviously they're wrong to call that self-evident, and you could justifiably dismiss their level of confidence.
Similarly, I'm not suggesting that any given moral value held to be self-evident should be considered as such, just that the realness of morality is arrived at self-evidently.
I realise that probably makes it sound like I'm trying to rationalise attributing awareness of moral reality to some enlightened subset I happen to agree with, but I'm suggesting there's a common denominator that all morally relevant agents are inherently cognizant of. I think experiencing suffering is sufficient evidence for the existence of real moral truth value.
If an alien intelligence claimed to prefer to experience suffering on net, I think it would be a faulty translation or a deception, in the same sense as if an alien intelligence claimed to exhibit a variety of consciousness that precluded experiential phenomena.
>it says that an AI will have to somehow get evidence about what humans consider moral in order to learn morality.
Does moral realism necessarily imply that a sufficiently intelligent system can bootstrap moral knowledge without evidence derived via conscious agents? That isn’t obvious to me.
When considering this topic, I think one has to contend with the notion that suffering and well-being can't carry symmetrical weight.
The idea is that they're not things you can combine into one value in the hope that the sum ends up positive. Rather, suffering just exists in the negative domain of qualia, and no amount of positive qualia can "cancel it out," unless the two are experienced simultaneously (in which case I don't think I'd consider it actual suffering).
I’m currently undecided on the merits of antinatalism for a variety of reasons.
That said, I have past experience of at least ten years of excruciating major depressive disorder (I'm doing much better now). If I were given the option to experience another decade of that in exchange for an extra century of pain-free euphoria, I would absolutely decline that offer. Even if there were only a 10% chance that I'd actually experience that decade, I would still decline.