It’s been sketched out several times already, by various people.
1. You have a set of goals (a posteriori “is”).
2. You have a set of strategies for achieving those goals, with varying levels of efficiency (a posteriori “is”).
3. Being rational is applying those strategies to achieve your goals optimally (analytical “is”); i.e., if you want to be rational, you ought to optimise your utility function (UF).
Of course that isn’t pure empiricism (what is?), because 3 is a sort of conceptual analysis of “oughtness”. I am not bothered about that, for a number of reasons: I am not committed to the insolubility of the is/ought gap, nor to the non-existence of objective ethics.
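To make the schema concrete, here is a minimal sketch in Python; the goal, the strategies, and their efficiency scores are all hypothetical placeholders, and “efficiency” just stands in for whatever measure of goal-achievement one prefers.

```python
# Minimal sketch of the three-step schema above. Every name and number
# here is invented for illustration.

# 1. A goal (a posteriori "is").
goal = "stay warm"

# 2. Strategies for achieving the goal, with varying levels of
#    efficiency (a posteriori "is").
strategies = {
    "light a fire": 0.9,
    "put on a coat": 0.7,
    "jog in place": 0.3,
}

# 3. The analytical step: being rational just is adopting the
#    strategy that achieves the goal optimally.
def rational_ought(strategies):
    return max(strategies, key=strategies.get)

print(f"To achieve '{goal}', you rational-ought to {rational_ought(strategies)}")
```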
I’m not sure I get this. The intention behind drawing the initial distinction between is/ought problems was to make clear the focus is not on, as it were, the mind of the beholder. The question is a less specific variant of the question as to how any mere physical being comes to have intentions (e.g., to buy a lawnmower) in the first place.
I don’t see why the etiology of intentions should pose any more of a problem than the representation of intentions. You can build robots that seek out light sources. “Seek light sources” is represented in the robot’s programming, and it came from the programmer. Where’s the problem?
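As a minimal sketch of that point (the sensors, readings, and behaviour are all hypothetical), the intention sits in the program as an explicit, programmer-supplied datum:

```python
# Toy light-seeking robot. The sensors and control loop are invented
# for illustration; the point is only that the intention "seek light
# sources" exists as an explicit representation put there by the
# programmer.

GOAL = "seek light sources"  # supplied by the programmer

def read_light_levels():
    # Stand-in for real photo-sensors: ambient light left and right.
    return {"left": 0.2, "right": 0.8}

def step():
    # Steer toward the brighter side: the behaviour that realises GOAL.
    levels = read_light_levels()
    return "turn right" if levels["right"] > levels["left"] else "turn left"

print(f"{GOAL} -> {step()}")  # seek light sources -> turn right
```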
I agree, but I think it does mean you ought to in a qualified sense.
But the qualified sense is easily explained as goal + strategy: you rational-ought to adopt strategies to achieve your goals.
Your merely being in a physical or computational state, however, by itself doesn’t, or so the thought goes.
Concrete facts about my goals and situation, and abstract facts about which strategies achieve which goals, are all that is needed to establish truths about rational-ought. What is unnaturalistic about that? The abstract facts about how strategies work may be unnaturalisable in a sense, but it is a rather unimpactive sense: abstract reasoning in general isn’t (at least usefully) reducible to atoms, but that doesn’t mean it is “about” some non-physical realm. In a sense it isn’t about anything; it just operates on its own level.