This would, however, still not be able to motivate an agent that starts with an empty set of moral instructions.
That sounds likely to me.
The other thing that sounds possible to me is that we could ultimately determine principles like the following:
We ought not build dangerously uncontrolled systems (AI, nuclear reactors, chemical reactors) or tolerate those who do.
We ought not get sentiences to do things for us through promises of physical pain or other disutility (threats and coercion).
We may have a somewhat different understanding of what “ought” means by that point, just as we have a different sense of what time, position, particle, matter, and energy mean from having done physics. For example, “ought” might be how we refer to a set of policies that produce an optimum tradeoff between the short-term interests of the individual and the short-term interests of the group, perhaps designed so that a system in which individuals followed these rules would have productivity growth at some maximum rate, or would converge on some optimum measure of individual freedom, or both. Maybe “ought” would suggest such wide-ranging agreement that individuals who don’t follow these rules must be adjusted or restrained, because the cost they impose on the group when unmodified or unrestrained is so clearly “unfair,” even possibly dangerous.
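To make the kind of tradeoff I have in mind concrete, here is a purely illustrative toy sketch in Python. The policies, utility numbers, and 50/50 weighting are all invented for the example; nothing here is meant as an actual proposal for what “ought” is.

# Toy illustration only: "ought" read as whichever policy best trades off
# short-term individual utility against short-term group utility.
# The policies, numbers, and weighting below are made up for the sketch.

policies = {
    "always_defect":    {"individual": 1.0, "group": -2.0},
    "always_cooperate": {"individual": 0.4, "group":  1.5},
    "tit_for_tat":      {"individual": 0.8, "group":  1.2},
}

def tradeoff_score(utilities, group_weight=0.5):
    # Weighted sum of short-term individual and group utility.
    return (1 - group_weight) * utilities["individual"] + group_weight * utilities["group"]

best = max(policies, key=lambda name: tradeoff_score(policies[name]))
print(best)  # -> tit_for_tat under this (arbitrary) weighting

The point of the sketch is only that “optimum tradeoff” is the kind of thing that can in principle be made precise, once you pick a weighting and a measure of the interests involved.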
I am not even Leonardo da Vinci here trying to describe what the future of science might look like; I am some tradesman in Florence in da Vinci’s time trying to describe what the future of science might look like. My point isn’t that any of the individual details should be the ones we learn when we finally perfect “the moral method” (in analogy with the scientific method), but rather that the richness of what COULD happen makes it very hard to say never, and that someone being told about the possibilities of “objective science” 1000 years ago would have been pretty justified in saying “we will never know whether the sun will rise tomorrow; we will never be able to derive ‘will happen’ from ‘did happen’” (which I take to be the scientific analogue of the claim that we can’t derive “ought” from “is”).
Well said. I’d go further:
The “is-ought” dichotomy is overrated, as are the kindred splits between normative and descriptive, practice and theory, etc. I suggest that every “normative” statement contains some “descriptive” import and vice versa. For example, “grass is green” implies statements like “if you want to see green, then other things being equal you should see some grass”, and “murder is immoral” implies something like “if you want to be able to justify your actions to fellow humans in open rational dialogue, you shouldn’t murder.” Where the corresponding motivation (e.g. “wanting to see green”) is idiosyncratic and whimsical, the normative import seems trivial and we call the statement descriptive. Where it is nearly-universal and typically dear, the normative import looms large. But the evidence that there are two radically different kinds of statement—or one category of statement and a radically different category of non-statement—is lacking. When philosophers try to produce such evidence, they usually assume a strong form of moral internalism which is not itself justifiable.