Didn’t you just agree that the algorithm for sorting things into “right” and “not right” is different in different people?
Yes, and I also argued, repeatedly, against saying that such an algorithm constitutes either a definition or a meaning.
What if you are really wrong? What if you are the guy who is rounding up the slave owner’s “property” and dutifully returning it to him?
Then I’m wrong about some fact that I used in translating my morality into actions, e.g. skin color determines intelligence.
Not necessarily. You could be wrong about morality itself. You could think property rights are more important
than liberty, or that people are means not ends.
So I really do have to stick with morality as the algorithm itself, and not some particular run of it, if I want consistency (though that’s not strictly necessary).
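The algorithm-versus-run distinction can be sketched in code. This is a purely illustrative toy model (the rule, the action name, and the “facts” are all invented for the example, not anyone’s actual moral theory): the same fixed algorithm, fed mistaken factual beliefs, produces a different verdict than the same algorithm fed corrected ones.

```python
# Toy model: a moral "algorithm" is a fixed rule over believed facts.
# The rule and the facts below are hypothetical illustrations.

def judge(action, believed_facts):
    """The 'algorithm itself': an action is wrong iff it harms a person."""
    return "wrong" if believed_facts.get(action, {}).get("harms_a_person") else "permissible"

# One 'run' of the algorithm, with a mistaken factual belief...
mistaken = {"return_escaped_slave": {"harms_a_person": False}}
# ...and another run, with that factual belief corrected.
corrected = {"return_escaped_slave": {"harms_a_person": True}}

print(judge("return_escaped_slave", mistaken))   # prints "permissible"
print(judge("return_escaped_slave", corrected))  # prints "wrong"
```

Being wrong about a fact changes the run’s output without changing the algorithm; sticking with the algorithm rather than any one run is what keeps the position consistent.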
What sort of impact would being right or wrong about morality have that I could notice? For example, let’s say someone thinks taxation is inherently morally wrong. What sort of observations are ruled out by this belief, such that making those observations would falsify the belief?
Hah, looks like someone went through and upvoted all your posts in the conversation while downvoting mine. Relativism has at least one anti-fan :P
I didn’t understand your last reply, but I’d still like to ask you a favor: imagine what the universe would look like if there weren’t any particular best morality, only moralities that were best by some individual’s standard, which nobody else was under any particular cognitive necessity to accept. All the electrons would stay in their orbitals, things would look the same, but inside agents would just do what they did for their own reasons and not for others.
My last post was a question (now edited). You were tacitly assuming that being able to predict is what matters, and that non-predictive theories can be disregarded. I was questioning whether being able to predict matters more than morality (in fact, I was doubting that anything does). I think the does-it-predict test is flawed in that sense.
I also think the other tacit assumption, that morality is non-predictive, is false. If you act on your morality, it will predict your observations... whether they are eventually of a death-row cell or of the receipt of a Nobel Peace Prize, for instance. If you don’t act on it, why have it? Morality is connected to action; treating it as a theory whose job is to predict the experiences of a passive observer is a category error.
The problem I have with subjective morality is that I can’t see how it differs from no morality:
If subjective morality is true, everyone does as they see fit and there is no ultimate right or wrong to any of it.
If error theory is true, everyone does as they see fit and there is no ultimate right or wrong to any of it.
That, if correct, only goes as far as establishing that morality is either objective or non-existent.
You wonder what would change given the truth/falsity of objective morality. What would change is the truth
and falsity (and rationality and irrationality) of things that are logically linked to it. You can either be in jail
at time T or not; that’s objective. If objective punishments and rewards can’t be objectively justified, there is a certain amount of irrationality in the world. So what objective morality would change is that certain ideas and attitudes, and the actions leading from them, would make sense, and the world would be a more rational place.
If morality is totally non-predictive then it shouldn’t be in our model of the world. It’s like the sort of “consciousness” where in the non-conscious zombie universe, philosophers write the exact same papers about consciousness despite not being conscious. If morality is non-predictive, then even if we act morally, it’s for reasons totally divorced from morality! If morality is non-predictive, then when we try to act morally we might as well just flip a coin, because no causal process can access “morality”! That’s why morality has to predict things, and that’s why it has to be inside people’s heads. Because if it ain’t in people’s heads to start with, there’s no magical process that puts it there.
If morality is totally non-predictive then it shouldn’t be in our model of the world.
The point of morality is to change the world, not model it.
If morality is non-predictive, then even if we act morally, it’s for reasons totally divorced from morality!
If we act morally, the morality we are acting on predicts our actions. Your beef seems to be with the idea that morality is not some universal causal law: that you have to choose it. There will be a causal explanation of behaviour at the neuronal level, but that doesn’t exclude an explanation at the level of moral reasoning, any more than an explanation of a computer’s operation at the level of electrons excludes a software-level explanation.
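The two-levels point in the computer analogy can be made concrete: one and the same computation admits both a low-level and a high-level description, and neither excludes the other. A small illustration (the function is an invented example; Python’s standard `dis` module shows the interpreter-level view):

```python
import dis

def add_tax(price, rate):
    """High-level ('software') description: compute price plus tax."""
    return price + price * rate

# Low-level description of the very same computation: the bytecode
# instructions the interpreter executes. Both descriptions are true
# of the same event; one does not exclude the other.
dis.dis(add_tax)

print(add_tax(100, 0.2))  # the behavior both levels explain: prints 120.0
```

By analogy, a neuronal-level explanation of behaviour and a moral-reasoning-level explanation can both be true of the same action.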
If morality is non-predictive, then when we try to act morally we might as well just flip a coin, because no causal process can access “morality”!
A causal process can implement moral reasoning just as it can implement mathematical reasoning.
Your objection is a category error, like saying software is an immaterial abstraction that doesn’t cause a computer to do anything.
That’s why morality has to predict things, and that’s why it has to be inside peoples’ heads.
Morality is inside people’s heads since it is a form of reasoning. Where did I say otherwise?
OK. You didn’t get that morality is as predictive as you make it by acting on it. And you also didn’t get that there are more important things than prediction.
Not necessarily. You could be wrong about morality itself. You could think property rights are more important than liberty, or that people are means not ends.
Those are not your only choices.
What sort of impact would being right or wrong about morality have that I could notice? For example, let’s say someone thinks taxation is inherently morally wrong. What sort of observations are ruled out by this belief, such that making those observations would falsify the belief?
The question is what you should care about.
Is it rational to care more about being able to predict accurately than about inadvertently doing evil?
I didn’t understand your last reply, but I’d still like to ask you a favor: imagine what the universe would look like if there weren’t any particular best morality, only moralities that were best by some individual’s standard, which nobody else was under any particular cognitive necessity to accept. All the electrons would stay in their orbitals, things would look the same, but inside agents would just do what they did for their own reasons and not for others.
Okay, thanks.
Morality is inside people’s heads since it is a form of reasoning. Where did I say otherwise?
Oh, okay, I take back my big rant then. Sorry :D