Anyhow, back to some sort of topic: You seem to be saying that Parfit is not claiming his theory as any sort of One True theory. Is this accurate?
Surely anyone who argues for a theory is saying that.
I dunno, you could just write down your theory to get it out there, maybe to convince other humans (which is possible, us being imperfect) as a means to spreading your morality.
The correct way to handle that theory is to say that different people have different theories/intuitions. Otherwise you fall into the trap of saying there are no real disagreements about morality, or that serial-killer morality is perfectly valid because serial killers can make up their own meaning/definition of “moral”.
Talking about “validity” just seems to be a way to disparage any morality/theory/set of intuitions that’s not your own. From a general level, anything that fills the cognitive role we talked about as a definition, assigning things something like blameworthiness, counts. And yes, that means the serial-killer morality too.
The way to avoid “dead-end relativism”—e.g. not stopping serial killers even though you think it’s bad—is to be comfortable with being an agent with a morality the same way a carefully-built AI could be an agent with a morality. It doesn’t actually matter that your morality could have been something else. It is what it is, and so it’s true that when I say “right” I’m referring to Manfred::right, some specific algorithm, and I’ll still stop serial killers because it’s the right thing to do.
We’re back to trouble with words again. Like the tree falling in the forest making a sound, “right” can mean different things to different people, and the way to solve the problem is not to argue over whose “right” is right, but use more words to just care about the actual state of the universe. So I’ll stop a serial killer, but I won’t argue with him about whether what he’s doing is right. Well, I guess that’s an oversimplification—humans are persuadable about the darndest things, so arguing about “right” is sometimes fruitful. But if the argument goes nowhere, I’m comfortable with him doing Killer::right, and me doing Manfred::right, and then I’ll hit him with a big stick.
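The :: notation here is borrowed from namespace syntax in programming languages. As a minimal, purely illustrative sketch (the struct, the criteria, and the numbers below are toy stand-ins invented for this example, not anyone’s actual moral theory), this is what it looks like for two agents to bind different algorithms to the same word “right”:

```cpp
#include <iostream>
#include <string>

// Toy illustration: each agent is modelled as a namespace defining its own
// predicate named `right`. The word is shared; the algorithm behind it is not.
struct Act {
    std::string description;
    int people_harmed;
    int thrill_for_actor;
};

namespace Manfred {
    // Manfred::right -- a stand-in for one specific (hugely simplified) algorithm.
    bool right(const Act& a) { return a.people_harmed == 0; }
}

namespace Killer {
    // Killer::right -- a different algorithm answering to the same word.
    bool right(const Act& a) { return a.thrill_for_actor > 5; }
}

int main() {
    const Act act{"serial killing", /*people_harmed=*/3, /*thrill_for_actor=*/9};

    std::cout << std::boolalpha
              << "Manfred::right? " << Manfred::right(act) << '\n'   // false
              << "Killer::right?  " << Killer::right(act)  << '\n';  // true
}
```

Whether there is anything more to “right” than this namespace-relative sense is exactly what the rest of the exchange disputes.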
Talking about “validity” just seems to be a way to disparage any morality/theory/set of intuitions that’s not your own.
You can promote metaethical objectivism without having any particular first-order moral theory in mind; and you can hold that the Meaning Theory is a poor argument for subjectivism without holding objectivism to be true.
From a general level, anything that fills the cognitive role we talked about as a definition, assigning things something like blameworthiness, counts.
Not equally. Not without some hefty question begging. Anything that assigns solutions to numeric problems could be called arithmetic, but some assignments are true and others false.
and yes, that means the serial-killer morality too.
Counts as correct?
The way to avoid “dead-end relativism”—e.g. not stopping serial killers even though you think it’s bad—is to be comfortable with being an agent with a morality the same way a carefully-built AI could be an agent with a morality. It doesn’t actually matter that your morality could have been something else. It is what it is, and so it’s true that when I say “right” I’m referring to Manfred::right, some specific algorithm, and I’ll still stop serial killers because it’s the right thing to do.
Unless you are one.
I don’t find it satisfactory to be compelled to stop things—to treat them as if they are wrong—without knowing why, or even that, they are wrong. I like reasons. I guess you could call me a rationalist.
We’re back to trouble with words again. Like the tree falling in the forest making a sound, “right” can mean different things to different people,
I’ve just argued against that. This is going in circles.
and the way to solve the problem is not to argue over whose “right” is right, but use more words to just care about the actual state of the universe. So I’ll stop a serial killer, but I won’t argue with him about whether what he’s doing is right.
I think a universe where force is minimised in favour of persuasion is preferable.
But if the argument goes nowhere, I’m comfortable with him doing Killer::right, and me doing Manfred::right, and then I’ll hit him with a big stick.
What if you are really wrong? What if you are the guy who is rounding up the slave owner’s “property” and dutifully returning them to him?
“right” can mean different things to different people,
I’ve just argued against that. This is going in circles.
Didn’t you just agree that the algorithm for sorting things into “right” and “not right” is different in different people? Are we really going to have to taboo “means” now?
if the argument goes nowhere, I’m comfortable with him doing Killer::right, and me doing Manfred::right, and then I’ll hit him with a big stick.
What if you are really wrong? What if you are the guy who is rounding up the slave owner’s “property” and dutifully returning them to him?
Then I’m wrong about some fact that I used in translating my morality into actions, e.g. skin color determines intelligence. Hmm. Actually, it looks like things get complicated here because of human mutability—we can be persuaded of either a thing or its opposite in different conditions. So I really do have to stick with morality as the algorithm itself and not some run of it if I want consistency (though that’s not strictly necessary).
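To picture the distinction being drawn between the morality itself (the algorithm) and some run of it (a judgment made under possibly false factual beliefs), here is a toy sketch in the same illustrative spirit (the struct fields, the premise flag, and the verdicts are invented for this example): the algorithm is held fixed, and only a factual input changes between runs.

```cpp
#include <iostream>
#include <string>

// Purely illustrative: one fixed moral "algorithm", and two runs of it that
// differ only in a factual premise supplied as input.
struct Act {
    std::string description;
    bool harmful_unless_premise_holds;  // the act harms someone unless the premise is true
};

struct Beliefs {
    bool premise;  // an empirical claim the agent relies on when judging the act
};

// The algorithm itself, held fixed across runs (hugely simplified):
// don't do things you judge harmful.
bool permissible(const Act& act, const Beliefs& beliefs) {
    const bool judged_harmful = act.harmful_unless_premise_holds && !beliefs.premise;
    return !judged_harmful;
}

int main() {
    const Act act{"some contested act", /*harmful_unless_premise_holds=*/true};

    const Beliefs mistaken{/*premise=*/true};   // a run fed a false belief about the world
    const Beliefs corrected{/*premise=*/false}; // the same algorithm, belief corrected

    std::cout << std::boolalpha
              << permissible(act, mistaken)  << '\n'   // true  -- this run goes wrong
              << permissible(act, corrected) << '\n';  // false -- same algorithm, different verdict
}
```

A run can go wrong because of a false premise while the algorithm itself stays the same, which is the consistency being appealed to above.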
Didn’t you just agree that the algorithm for sorting things into “right” and “not right” is different in different people?
Yes, and I also argued, repeatedly, against saying that such an algorithm constitutes either a definition or a meaning.
What if you are really wrong? What if you are the guy who is rounding up the slave owner’s “property” and dutifully returning them to him?
Then I’m wrong about some fact that I used in translating my morality into actions, e.g. skin color determines intelligence.
Not necessarily. You could be wrong about morality itself. You could think property rights are more important than liberty, or that people are means not ends.
So I really do have to stick with morality as the algorithm itself and not some run of it if I want consistency (though that’s not strictly necessary).
Those are not your only choices.
What sort of impact would being right or wrong about morality have that I could notice? For example, let’s say someone thinks taxation is inherently morally wrong. What sort of observations are ruled out by this belief, such that making those observations would falsify the belief?
The question is what you should care about. Is it rational to care more about being able to predict accurately than about inadvertently doing evil?
Hah, looks like someone went through and upvoted all your posts in the conversation while downvoting mine. Relativism has at least one anti-fan :P
I didn’t understand your last reply, but I’d still like to ask you a favor: imagine what the universe would look like if there weren’t any particular best morality, only moralities that were best by some individual’s standard, which nobody else was under any particular cognitive necessity to accept. All the electrons would stay in their orbitals, things would look the same, but inside agents would just do what they did for their own reasons and not for others.
Okay, thanks.
My last post was a question (now edited). You were tacitly assuming that being able to predict is what matters, that non-predictive theories can be disregarded. I was questioning whether being able to predict matters more than morality (in fact, I was doubting that anything does). I think the does-it-predict test is flawed in that sense.
I also think the other tacit assumption, that morality is non-predictive, is false. If you act on your morality, it will predict what your observations will be... whether they are eventually of a death row cell, or the receipt of a Nobel Peace Prize, for instance. If you don’t act on it, why have it? Morality is connected to action; treating it as a theory whose job it is to predict the experiences of a passive observer is a category error.
The problem I have with subjective morality is that I can’t see how it differs from no morality:
If subjective morality is true, everyone does as they see fit and there is no ultimate right or wrong to any of it.
If error theory is true, everyone does as they see fit and there is no ultimate right or wrong to any of it.
That, if correct, only goes as far as establishing that morality is either objective or non-existent.
You wonder what would change given the truth/falsity of objective morality. What would change is the truth and falsity (and rationality and irrationality) of things that are logically linked to it. You can either be in jail at time T or not; that’s objective. If objective punishments and rewards can’t be objectively justified, there is a certain amount of irrationality in the world. So what objective morality would change is that certain ideas and attitudes, and actions leading from them, would make sense; the world would be a more rational place.
If morality is totally non-predictive then it shouldn’t be in our model of the world. It’s like the sort of “consciousness” where, in the non-conscious zombie universe, philosophers write the exact same papers about consciousness despite not being conscious. If morality is non-predictive, then even if we act morally, it’s for reasons totally divorced from morality! If morality is non-predictive, then when we try to act morally we might as well just flip a coin, because no causal process can access “morality”! That’s why morality has to predict things, and that’s why it has to be inside people’s heads. Because if it ain’t in people’s heads to start with, there’s no magical process that puts it there.
If morality is totally non-predictive then it shouldn’t be in our model of the world.
The point of morality is to change the world, not model it.
If morality is non-predictive, then even if we act morally, it’s for reasons totally divorced from morality!
If we act morally, the morality we are acting on predicts our actions. Your beef seems to be with the idea that morality is not some universal causal law—that you have to choose it. There will be a causal explanation of behaviour at the neuronal level, but that doesn’t exclude an explanation at the level of moral reasoning, any more than an explanation of a computer’s operation at the level of electrons excludes a software-level explanation.
If morality is non-predictive, then when we try to act morally we might as well just flip a coin, because no causal process can access “morality”!
A causal process can implement moral reasoning just as it can implement mathematical reasoning. Your objection is a category error, like saying software is an immaterial abstraction that doesn’t cause a computer to do anything.
That’s why morality has to predict things, and that’s why it has to be inside people’s heads.
Morality is inside people’s heads since it is a form of reasoning. Where did I say otherwise?
Oh, okay, I take back my big rant then. Sorry :D
OK. You didn’t get that morality is as predictive as you make it by acting on it. And you also didn’t get that there are more important things than prediction.