Remember that this is about your claim that science can inform us about oughts. How can a science conducted by imperfectly rational scientists inform us what desires a perfectly rational agent would have?
Science can’t tell us anything about how to be more rational? Is that your claim?
After breaking down the equation for how one gets to an ‘ought’ statement, I think it’s obvious how science can help us inform our ‘oughts’. You seem to agree, more or less, with my assessment of the calculation necessary for reaching ‘ought’ statements, and since science can tell us things about each of the individual parts of the calculation, it follows that it can tell us things about the sum as well.
And while we’re on the subject, why would rationality constrain desires?
Hmm… After thinking about it, it seems more likely that rationality belongs to the ‘is’ box, and reflectiveness/informedness belong in the ‘desire/goal’ box. Duly noted.
When we condemn someone, we are saying they morally-ought not to have done what they did. You are taking “ought” as if it only had an instrumental meaning?
I’m not sure I understand what you are objecting to.
Science can’t tell us anything about how to be more rational? Is that your claim?
I think the claim is that science can’t tell us how to become “perfectly rational”. Science can certainly tell us how to become “more rational”, but only if we already have a specification of what “more rational” is, and just need to figure out how to implement it. I think most of us who are trying to figure out such specifications do not see our work as following the methods of science, but rather more like doing philosophy.
Science can’t tell us anything about how to be more rational?
I was responding to your claim:
“perfectly informed and perfectly rational”
You have shifted the ground from “perfect” to “better”.
After breaking down the equation for how one gets to an ‘ought’ statement, I think it’s obvious how science can help us inform our ‘oughts’
That’s because you are still thinking of an “ought” as an instrumental rule for realising personal values, but in the context of the is-ought divide, it isn’t that, it is ethical. You still haven’t understood what the issue is about. There are circumstances under which I ought not to do what I desire to do.
You have shifted the ground from “perfect” to “better”.
The better it gets, the closer it gets to perfect. Eventually, if science can tell us enough about rationality, there’s no reason we can’t understand the best form of it.
That’s because you are still thinking of an “ought” as an instrumental rule for realising personal values, but in the context of the is-ought divide, it isn’t that, it is ethical. You still haven’t understood what the issue is about.
I’m a Moral Anti-Realist (probably something close to a PMR, a la Luke) so the is-ought problem reduces to either what you’ve been calling ‘instrumental meaning’ or to what I’ll call ‘terminal meaning’, as in terminal values.
There’s nothing more to it. If you think there is, prove it. I’m going with Mackie on this one.
There are circumstances under which I ought not to do what I desire to do.
Yes, like I’ve said. When your beliefs about the world are wrong, or your beliefs about how best to achieve your desires are wrong, or your beliefs about your values are misinformed or unreflective, then the resulting ‘ought’ will be wrong.
Eventually, if science can tell us enough about rationality, there’s no reason we can’t understand the best form of it.
But your original claim was:
to get out of the is-ought bind all you have to do is specify a goal or desire you have.
You then switched to
perfectly informed and perfectly rational
and then switched again to gradual improvement.
In any case, it is still unclear how improving instrumental rationality is supposed to do anything at all with regard to ethics.
I’m a Moral Anti-Realist (probably something close to a PMR, a la Luke) so the is-ought problem reduces to either what you’ve been calling ‘instrumental meaning’ or to what I’ll call ‘terminal meaning’, as in terminal values.
So? Your claim was that science can solve the is-ought problem. Are you claiming that there is scientific proof of MAR?
There’s nothing more to it. If you think there is, prove it.
I have.
But the problem here is that your claim that science can solve the is-ought gap was put forward against the argument that philosophy still has a job to do in discussing “ought” issues. As it turns out, far from proving philosophy to be redundant, you are actually relying on it (albeit in a surreptitious and unargued way).
Yes, like I’ve said. When your beliefs about the world are wrong, or your beliefs about how best to achieve your desires are wrong, or your beliefs about your values are misinformed or unreflective, then the resulting ‘ought’ will be wrong.
None of that has anything to do with ethics. You seem to have a blind spot about the subject.
I have argued that PMR doesn’t solve the is-ought problem.