> Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise, blame, and obligation.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-”consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself or of others, or of possible actions to take for oneself or for others.

Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.
But that wasn’t what you were saying before. Before you were saying it was all about JGWeissman.
> Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace “praiseworthy” with “good”, I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can’t implement it in a computer program yet.
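For concreteness, here is a minimal sketch of what such “detailed instructions” could look like, assuming away exactly the incomplete parts: the outcome model and the utility function below are invented stand-ins, not a real proposal.

```python
# A minimal consequentialist evaluator of "praiseworthy" actions.
# The outcome model and utility function are invented stand-ins for the
# incomplete, non-transparent parts described above.

def outcome_model(action):
    """Stand-in world model: maps an action to (probability, outcome) pairs."""
    return {
        "praise_helper": [(0.8, "more_helping"), (0.2, "no_change")],
        "stay_silent": [(1.0, "no_change")],
    }[action]

def utility(outcome):
    """Stand-in preferences over world-states."""
    return {"more_helping": 10.0, "no_change": 0.0}[outcome]

def expected_utility(action):
    return sum(p * utility(o) for p, o in outcome_model(action))

def most_praiseworthy(actions):
    """On this account 'praiseworthy' collapses into 'good': pick the
    action whose expected consequences are preferred."""
    return max(actions, key=expected_utility)

print(most_praiseworthy(["praise_helper", "stay_silent"]))  # -> praise_helper
```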
> But that wasn’t what you were saying before. Before you were saying it was all about JGWeissman.
I might have let some of that bleed through from other subthreads.
> Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Doesn’t that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is “chocolateworthy” if chocolate breaks your diet.
> Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?

No one can do that, whatever theory they have. I don’t see how it is relevant.

> I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like.

Which isn’t actually computable.
I’ve never seen any proof of this. It’s also rather easy to approximate to acceptable levels of certainty:
I’ve loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I’m pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I’m rather confident that, in the above scenario, practicing rather than not practicing is instrumentally useful towards bringing about worldstates where I successfully protect lives, since the result will depend on my skills. However, you’d call this “morally neutral”, since there’s no moral good being done by the shooting of glass bottles in itself, and it isn’t exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model whose accuracy can be evaluated, or at least estimated. And given the probability of the model’s accuracy, there is a tractable probability of lives saved.
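A toy version of that calculation, with every number invented for illustration:

```python
# Toy expected-value check for "practice increases the chances of saving
# lives". Every number here is invented for illustration.
p_model_accurate = 0.9       # estimated accuracy of the skill-transfer model
p_success_practiced = 0.7    # chance of success, given practice, if the model holds
p_success_unpracticed = 0.4  # baseline chance of success without practice
lives_at_stake = 5

def expected_lives_saved(p_success):
    # If the model is wrong, assume practice changes nothing (baseline rate).
    return lives_at_stake * (p_model_accurate * p_success
                             + (1 - p_model_accurate) * p_success_unpracticed)

gain = (expected_lives_saved(p_success_practiced)
        - expected_lives_saved(p_success_unpracticed))
print(f"expected lives saved by practicing: {gain:.2f}")  # 1.35
```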
I’m having a hard time seeing what else could be missing.
I mean there is no runnable algorithm; I can’t see how “approximations” could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
Neither is half of math. Many differential equations are uncomputable, and yet they are very useful. Why should a moral theory be computable?
(and “maximize expected utility” can be approximated computably, like most of those uncomputable differential equations)
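A sketch of that parallel, with every setup invented for illustration: the same numerical move that tames a differential equation with no closed-form solution (fixed-step integration) also tames an expectation with no exact algorithm (sampling), and the divergence worry above goes away if remoter links in the consequence chain are correspondingly less probable.

```python
import math
import random

# Fixed-step Euler integration: approximates dy/dt = f(t, y) without a
# closed-form solution, with error shrinking as `steps` grows.
def euler(f, y0, t1, steps):
    y, t, h = y0, 0.0, t1 / steps
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# Monte Carlo estimate of E[utility(outcome)]: error shrinks ~ 1/sqrt(n).
def monte_carlo_eu(sample_outcome, utility, n):
    return sum(utility(sample_outcome()) for _ in range(n)) / n

# dy/dt = -y**2 + sin(t): no elementary closed form, still approximable.
print(euler(lambda t, y: -y * y + math.sin(t), y0=1.0, t1=2.0, steps=10_000))

# Expected utility of a noisy outcome, approximated by sampling.
random.seed(0)
print(monte_carlo_eu(lambda: random.gauss(5, 2), lambda x: -abs(x - 5), 100_000))

# The divergence worry: an undiscounted chain of consequences (1 life, then
# 10, then 100, ...) grows without bound, but if each further link is
# correspondingly less certain, the expected total is a convergent series.
p_link = 0.05  # invented: chance each further consequence link obtains
print(sum(((-10) ** k) * (p_link ** k) for k in range(50)))  # geometric, converges
```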