Perspectivism holds that all truth is subjective, but in practice this characterization has no relevance to the extent that there is agreement on any particular truth. For example, “Murder is wrong,” even if it is a subjective truth, does not function as one in practice, because there is collective agreement that murder is wrong. That is all I meant, but I agree that it was not clear.
Thanks for the clarification.
Wait, does this “truth is relative” stuff only apply to moral questions? Because if it does, then, while I personally disagree with you, there’s a sizable minority here who won’t.
What do you disagree with? That “truth is relative” applies only to moral questions? Or that it applies to more than moral questions?
If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read... even EY :)
My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as “right according to human morals.” Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI—so I would have a hard time calling them “subjective”.
Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise; see, e.g., the Nazis.
EDIT: Regarding EY’s writings on the subject, he wrote a whole Metaethics Sequence, much of which leads up to or directly discusses this exact topic. Unfortunately, I’m having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for “metaethics sequence”.
I, ah … I’m not seeing anything here. Have you accidentally posted just a space or something?
I don’t dispute the possibility that your conclusion may be correct; I’m wondering about the basis on which you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith to the conversation -- I’m comfortable with that as the reason, but I’m wondering if you are, or if you have a different basis for the position.
From my view, moral truths may NOT be relative, but I have no basis on which to know that, so I’ve chosen to operate as if they are relative because (i) if moral truths exist but I don’t know what they are, I’m in the same position as if they didn’t exist or were relative, and (ii) moral truths may not exist. This doesn’t mean you don’t use morality in your life; it’s just that you need to believe, without substantiation, that the morals you subscribe to conform to universal morals, if they exist.
OK, I’ll try to search for those EY writings, thanks.