It seems possible that when your friend said, in effect, that there can never be any axioms for social justice, what they really meant was simply, “I don’t know the axioms either.” That would indeed be a map/territory confusion on their part, but it’s a pretty common and understandable one. The statement, “Flying machines are impossible” is not equivalent to “I don’t know how to build a flying machine,” but in the short term they are making the same prediction: no one is flying anywhere today.
Actually, and I don’t know if you’ve thought of it this way, but in asking for the axioms of social justice theory, weren’t you in effect asking for something close to the solution to the Friendly AI problem? No wonder your friend couldn’t come up with a good answer on the spot!
They seemed to be saying both things.
Hah, that’s true! I didn’t think of it that way. I don’t know that much about the Friendly AI problem, so I wouldn’t know anyway. I’ve been able to reduce my entire morality to two axioms, though (which probably aren’t especially suitable for AI or a 100% rational person, because there’s no possibility at all that I’ve actually found a solution to a problem I know nothing about that has been considered by many educated people for long periods of time), so I thought that maybe you could find something similar for social justice (I was having trouble deciding on what to feel about certain fringe cases).
My point was that they probably did think they meant both things, because the distinction between “it’s impossible” and “I don’t know how” is not really clear in their mind. But that is not as alarming as it would be coming from someone who did know the difference, and insisted that they really did mean “impossible.”
Hmm, I agree, but I don’t think that it adequately explains the entire picture. I think it might have been two different ideas coming from two different sources. I can imagine that my friend had absorbed “applying formalized reason to society is bad” from popular culture, whereas “I don’t know what the founding propositions of social justice are”, and subsequently “such things might not be possible at all” (like you talked about), came from their own internal evaluations.
Okay, I’ll bite. What are these two axioms?
I kinda wanted to avoid this because of social approval and so on, and also for brevity, but okay:
1. Everybody is completely, equally, and infinitely entitled to life, positive feelings, and a lack of negative feelings.
2. One must forfeit gratification of axiom 1 in order to help others achieve it. (This might be badly worded. What I mean is that you also have to take others’ entitlement under axiom 1 into account in your actions, and while others do not have the things in axiom 1, you should be helping them get those things rather than pursuing them for yourself.)
I know it loses a lot of nuance this way (to what extent must you help others? well, so that it works out optimally for everyone; but what exactly is optimal? the sum of everyone’s life/positive feelings/lack of negative feelings? that’s left undefined), but it works for me, at least.
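(As a rough illustration of the “sum of everyone’s life/positive feelings/lack of negative feelings” left undefined just above, here is a minimal sketch of what writing it down might look like. Every name, weight, and number in it is an assumption introduced for illustration, not something the two axioms actually specify, which is exactly the gap in question.)

```python
# Illustrative only: a naive "total welfare" sum, assuming each person could
# somehow be scored on life, positive feelings, and negative feelings.
# The field names, weights, and scales below are placeholders, not part of the axioms.
from dataclasses import dataclass

@dataclass
class Person:
    life: float      # hypothetical 0..1 "alive and well" score
    positive: float  # hypothetical intensity of positive feelings
    negative: float  # hypothetical intensity of negative feelings

def total_welfare(people, w_life=1.0, w_pos=1.0, w_neg=1.0):
    """Sum everyone's life / positive feelings / lack of negative feelings.

    The weights are arbitrary; axiom 1 says everyone counts "completely,
    equally, and infinitely," which a finite weighted sum does not capture.
    Defining and measuring these inputs is the part left undefined.
    """
    return sum(w_life * p.life + w_pos * p.positive - w_neg * p.negative
               for p in people)

# Example with two hypothetical people; prints approximately 2.1
print(total_welfare([Person(1.0, 0.6, 0.2), Person(0.9, 0.3, 0.5)]))
```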
I think it is deeply misleading to label these “axioms.” At best these are summaries of heuristics that you use (or believe you use) to make moral decisions. You couldn’t feed these axioms into a computer and get moral behavior back out. Have you read the posts orbiting around Fake Fake Utility Functions?
(axioms omitted)
I don’t see any mathematics there, and making them into mathematics looks to me like an AI-complete problem. What do you do with these axioms?
What do you mean by “positive feelings”? For example, would you support wireheading everyone?
That’s exactly what I can’t make my mind up about, which forces me to default to nihilism on things like that. Maybe it really is irrelevant where the pleasure comes from? If we did wirehead everyone for eternity, then would it be sad if everyone spontaneously disappeared at some point? Those are questions that I can’t answer. My morality is only good for today’s society, not tomorrow’s. I guess strictly morally, yes, wireheading is a solution, but philosophically, there are arguments to be made against it. (Not from a nihilistic point of view, though, which I am not comfortable with. I guess, philosophically, I can adopt two axioms: “Life requires meaning,” and “meaning must be created.” And then the question arises, “What is meaning?”, at which point I leave it to people with real degrees in philosophy. If you asked me, I’d try to relate it to the entropy of the universe somehow. But I feel that I’m really out of my depth at that point.)
I think you’re giving up too early. Have you read the metaethics sequence?