I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”.
What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely just different ways of saying the same thing.
Why would you differ? Maybe it’s the “double emphasis on you”. The situations in which I morally ought not to do something to my advantage are those where it would affect someone else. Maybe you are an ethical egoist.
Soooo...
Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I’m so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being “Rape people” and “Kill people”.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
Clearly this is not the same as what you ought to do.
(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)
To explore this further, suppose I’m always optimally good. Always. A perfectly, optimally morally good human. What praise do I get? Well, some for that, and some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.
On the other hand, if I’m a super-sucky bad human who kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I, then, to do this, and to seek it out, rather than the previous strategy?
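To put rough numbers on that comparison, here is a toy sketch; every figure below is invented purely for illustration.

```python
# Toy comparison of the "praise attracted" by the two strategies. Every number
# here is made up; the only point is the shape of the argument.

HOURS_PER_YEAR = 24 * 365

# Strategy 1: the optimally good human. Praise is rare: say roughly once a
# week for general goodness, plus a handful of heroic moments per year.
praise_optimal = 52 + 5

# Strategy 2: the accident-prone human causing ~10 lethal accidents per hour,
# who gets a unit of praise each time they visibly prevent one.
accidents_per_hour = 10
prevented_fraction = 0.5  # suppose half are dramatically prevented at the last moment
praise_accident_prone = HOURS_PER_YEAR * accidents_per_hour * prevented_fraction

print("praise/year, optimally good:", praise_optimal)
print("praise/year, accident-prone:", int(praise_accident_prone))
# The second number dwarfs the first, yet the second strategy is obviously
# worse: "attracts more praise" and "ought to be done" come apart.
```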
How can you hate something yet praise it internally? I’m having trouble coming up with an example.
I know a very good one, very much grounded in reality, which millions if not billions of people have done and still do.
Death.
No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don’t change that kind of relationship by re-arranging atoms.
And what’s the rule, the algorithm, then, for deciding which acts should be praised?
The only such algorithm I know of is to look at their (expected) consequences and check whether the resulting possible futures are more desirable for some set of human minds (preferably all of them), which is a very complicated function that we don’t yet have access to and try to estimate using our intuitions (a rough sketch of what I mean follows below).
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-“consequentialism” as the best method of judging Good and Bad, whether of one’s own past actions or those of others, or of possible actions to be taken, by oneself or by others.
Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise and blame and obligation.
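Spelled out as a sketch, that “only algorithm I know of” looks roughly like the following, where `possible_futures`, `probability`, `desirability_for`, and `minds` are all hypothetical stand-ins for models and functions we don’t actually have.

```python
from typing import Callable, Iterable

# A rough sketch of the evaluation described above. Every name is a placeholder:
# we have no real enumeration of possible futures, no real probability model,
# and no explicit desirability function for a human mind. The point is only the
# shape of the computation, not that it can actually be run.

def expected_goodness(
    action: object,
    possible_futures: Callable[[object], Iterable[object]],  # futures the action might lead to
    probability: Callable[[object, object], float],          # P(future | action)
    desirability_for: Callable[[object, object], float],     # how much one mind prefers a future
    minds: Iterable[object],                                  # the set of human minds considered
) -> float:
    minds = list(minds)
    total = 0.0
    for future in possible_futures(action):
        p = probability(future, action)
        # Aggregate desirability across minds; a plain sum is itself a
        # contestable choice, used here only as one simple option.
        total += p * sum(desirability_for(mind, future) for mind in minds)
    return total
```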
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.
But that wasn’t what you were saying before. Before, you were saying it was all about JGWeissman.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace “praiseworthy” with “good”, I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can’t implement it into a computer program yet.
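In sketch form, the “same output” point is just that the two words name one computation; `evaluate_outcomes` below is a stub standing in for the complicated function we can’t write down yet.

```python
# Toy illustration of the "same output" point: both words bottom out in one
# evaluation. `evaluate_outcomes` is a stub for the complicated, not-yet-
# available function discussed above.

def evaluate_outcomes(action) -> float:
    raise NotImplementedError("this is the part nobody can write down yet")

def good(action) -> float:
    return evaluate_outcomes(action)

def praiseworthy(action) -> float:
    # Swapping the label changes nothing about the computation.
    return good(action)
```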
I might have let some of that bleed through from other subthreads.
Doesn’t that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is “chocolateworthy”, if chocolate breaks your diet.
No one can do that, whatever theory they have. I don’t see how it is relevant.
Which isn’t actually computable.
I’ve never seen any proof of this. It’s also rather easy to approximate to acceptable levels of certainty:
I’ve loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I’m pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I’m rather confident that, in the above scenario, practicing rather than not practicing is instrumentally useful towards bringing about world-states where I successfully protect lives, since the result will depend on my skills. However, you’d call this “morally neutral”, since no moral good is produced by the shooting of glass bottles in itself, and it isn’t exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives; therefore it is morally good, for me. This is according to a model whose accuracy can be evaluated, or at least estimated. And given the probability of the model’s accuracy, there is a tractable probability of lives saved (a toy version of this calculation follows below).
I’m having a hard time seeing what else could be missing.
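As a toy version of that calculation, with all probabilities and counts invented for illustration:

```python
# Toy expected-value model for the practice example. Every number is invented;
# the point is only that the estimate becomes tractable once the model is
# written down.

p_need_to_defend = 0.3          # chance the defence situation actually arises
lives_at_stake = 5              # people whose lives would depend on the outcome
p_success_no_practice = 0.2     # chance of succeeding without practice
p_success_with_practice = 0.6   # chance of succeeding after practice
p_model_accurate = 0.8          # confidence that this little model is even right

def expected_lives_saved(p_success: float) -> float:
    return p_model_accurate * p_need_to_defend * p_success * lives_at_stake

gain = expected_lives_saved(p_success_with_practice) - expected_lives_saved(p_success_no_practice)
print(f"expected lives saved by practicing: {gain:.2f}")
# Positive under these made-up numbers, which is all "morally good, for me" needs here.
```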
I mean there is no runnable algorithm; I can’t see how “approximations” could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
Neither is half of math. Many differential equations are uncomputable, and yet they are very useful. Why should a moral theory be computable?
(and “maximize expected utility” can be approximated computably, like most of those uncomputable differential equations)
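For instance, here is a crude Monte Carlo sketch of that kind of approximation, where `simulate_outcome` and `utility` are hypothetical placeholders for models one would still have to supply.

```python
import random

# Crude Monte Carlo sketch of "approximate expected utility computably".
# `simulate_outcome` and `utility` are placeholders: the hard part is
# supplying them, not the averaging itself.

def simulate_outcome(action: str, rng: random.Random) -> float:
    # Placeholder world-model: pretend outcomes are noisy around some base value.
    base = {"practice": 1.0, "dont_practice": 0.4}[action]
    return rng.gauss(base, 0.5)

def utility(outcome: float) -> float:
    return outcome  # placeholder: identity utility

def estimate_expected_utility(action: str, samples: int = 10_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(utility(simulate_outcome(action, rng)) for _ in range(samples)) / samples

for a in ("practice", "dont_practice"):
    print(a, round(estimate_expected_utility(a), 3))
# The estimate converges as samples grow, even though the "true" expectation
# might be impossible to compute exactly.
```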
I don’t see what you’re getting at. I’ll lay out my full position to see if that helps.
First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I’m asking about whether 4 is an integer.
So, given those rigidly separated mental buckets, I claim, as a matter of metaethics, that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is “what should I do?”, because it’s the only one I can act on. I don’t think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.
Then, on the level of normative ethics, i.e. looking from within a moral theory (which I’ve decided answers the question “what ought to be done”), I claim that I ought to act in such a way as to achieve the “best” outcome, and if outcomes are morally identical, then their oughtness is identical and I don’t care which is done. You can call this “consequentialism” if you like. Then, unpacking “best” a bit, we find all the good things like fun, happiness, freedom, life, etc.
Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”, which I claim are not included in what makes an action right or wrong. This terminal punishableness is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you’ve worked out what is terminally valuable.
So, anyway, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
What’s wrong with sticking with “what ought to be done” as the formulation?
Meaning others shouldn’t? Your use of the “I” formulation is making your theory unclear.
They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can’t be directly translated into praiseworthiness and blameworthiness, because they are too hard to predict.
I don’t see why. Do you think you are much better at making predictions?