You might be interested in Sam Harris’s book The Moral Landscape which argues that science can be used to answer moral questions and determine how we should behave.
did he define science?
No, but he clearly would have the same definition as most of us. He thinks morality comes from the brain, and that by learning more about our brains we learn more about morality. He says things like: scientists who think science can’t answer “should” questions very often act as if “should” questions have objectively right answers, and our brains seem to store moral beliefs in the same way as they do factual beliefs.
“scientists who think science can’t answer ‘should’ questions very often act as if ‘should’ questions have objectively right answers”
Is that supposed to be a bad thing? In any case, the more usual argument is that I can’t take “what my brain does” as the last word on the subject.
“our brains seem to store moral beliefs in the same way as they do factual beliefs.”
I’m struggling to see the relevance of that. Our brains probably store information about size in the same way that they store information about colour, but that doesn’t mean you can infer anything about an object’s colour from information about its size. The is-ought gap is one instance of a general rule about information falling into orthogonal categories, not special pleading.
ETA: Just stumbled on:
“Thesis 5 is the idea that one cannot logically derive a conclusion from a set of premises that have nothing to do with it. (The is-ought gap is an example of this).”
https://nintil.com/2017/04/18/still-not-a-zombie-replies-to-commenters/
“I can’t take ‘what my brain does’ as the last word on the subject.”
But what if morality is all about the welfare of brains? I think Harris would say that once you accept human welfare is the goal you have crossed the is-ought gap and can use science to determine what is in the best interest of humans. Yes, this is hard and people will disagree, but the same is true of generally accepted scientific questions. Plus, Harris says, lots of people have moral beliefs based on falsifiable premises (“God wants this”) and so we can use science to evaluate these beliefs.
“But what if morality is all about the welfare of brains?”
That’s irrelevant. Welfare being about brains doesn’t make my brain omniscient about yours. I’m not omniscient about neuroscience, either.
“I think Harris would say that once you accept human welfare is the goal you have crossed the is-ought gap …”
For some value of “crossed”. What does “accept” mean? Not proved, explained, or justified, anyway. If you accept “welfare is about brains” as an unproven axiom, you can derive oughts from ises, but only within that particular system.
The problem, of course, is that you can construct any number of other ethical systems with different but equally arbitrary premises. So you are not getting convergence on objective truth.