Science can’t tell us anything about how to be more rational?
I was responding to your claim:
“perfectly informed and perfectly rational”
You have shifted the ground from “perfect” to “better”.
After breaking down the equation for how one gets to an ‘ought’ statement, I think it’s obvious how science can help inform our ‘oughts’.
That’s because you are still thinking of an “ought” as an instrumental rule for realising personal values, but in the context of the is-ought divide, it isn’t that, it is ethical. You still haven’t understood what the issue is about. There are circumstances under which I ought not to do what I desire to do.
You have shifted the ground from “perfect” to “better”.
The better it gets, the closer it gets to perfect. Eventually, if science can tell us enough about rationality, there’s no reason we can’t understand the best form of it.
That’s because you are still thinking of an “ought” as an instrumental rule for realising personal values, but in the context of the is-ought divide, it isn’t that, it is ethical. You still haven’t understood what the issue is about.
I’m a Moral Anti-Realist (probably something close to a PMR, a la Luke) so the is-ought problem reduces to either what you’ve been calling ‘instrumental meaning’ or to what I’ll call ‘terminal meaning’, as in terminal values.
There’s nothing more to it. If you think there is, prove it. I’m going with Mackie on this one.
There are circumstances under which I ought not to do what I desire to do.
Yes, like I’ve said. When your beliefs about the world are wrong, or your beliefs about how best to achieve your desires are wrong, or your beliefs about your values are misinformed or unreflective, then the resulting ‘ought’ will be wrong.
Eventually, if science can tell us enough about rationality, there’s no reason we can’t understand the best form of it.
But your original claim was:
to get out of the is-ought bind all you have to do is specify a goal or desire you have.
You then switched to
perfectly informed and perfectly rational
and then switched again to gradual improvement.
In any case, it is still unclear how improving instrumental rationality is supposed to do anything at all with regard to ethics.
I’m a Moral Anti-Realist (probably something close to a PMR, a la Luke) so the is-ought problem reduces to either what you’ve been calling ‘instrumental meaning’ or to what I’ll call ‘terminal meaning’, as in terminal values.
So? Your claim was that science can solve the is-ought problem. Are you claiming that there is scientific proof of MAR?
There’s nothing more to it. If you think there is, prove it.
I have.
But the problem here is that your claim that science can solve the is-ought gap was put forward against the argument that philosophy still has a job to do in discussing “ought” issues. As it turns out, far from proving philosophy to be redundant, you are actually relying on it (albeit in a surreptitious and unargued way).
Yes, like I’ve said. When your beliefs about the world are wrong, or your beliefs about how best to achieve your desires are wrong, or your beliefs about your values are misinformed or unreflective, then the resulting ‘ought’ will be wrong.
None of that has anything to do with ethics. You seem to have a blind spot about the subject.
I have argued that PMR doesn’t solve the is-ought problem.