This is a flaw with (ETA: simpler versions of) consequentialism: no one can accurately predict the long-range consequences of their actions. But it is unreasonable to hold someone culpable, to blame them, for what they cannot predict. So the consequentialist notion of good and bad actions doesn’t translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise. This line of thinking can lead to a kind of fusion of deontology and consequentialism: we praise someone for following the rules (“as a rule, try to save a life where you can”) even if the consequences were unwelcome (“The person you saved was a mass murderer”).
So the consequentialist notion of good and bad actions doesn’t translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise.
What I want out of a moral theory is to know what I ought to do.
As far as blame and praise go, consequentialism with game theory tells you how to use a system of blame and praise to provide good incentives for desired behavior.
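A minimal sketch of that game-theory point, in Python, with invented payoffs (the numbers and the blame_cost/praise_bonus parameters are assumptions for illustration, not anything stated in the discussion): attaching blame to defection and praise to cooperation changes which action is the best response.

```python
# Toy model: blame/praise treated as payoff adjustments (all numbers invented).
base_payoffs = {
    "cooperate": 2,
    "defect": 3,   # defection pays slightly better in isolation
}

def payoff_with_sanctions(action, blame_cost=2, praise_bonus=1):
    """Adjust the raw payoff by the praise or blame the community attaches."""
    adjusted = base_payoffs[action]
    if action == "defect":
        adjusted -= blame_cost    # blame/punishment lowers the payoff
    else:
        adjusted += praise_bonus  # praise/reward raises it
    return adjusted

print(max(base_payoffs, key=base_payoffs.get))       # -> defect
print(max(base_payoffs, key=payoff_with_sanctions))  # -> cooperate
```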
It seems to me that judging people and sending them to jail is on the level of actions, like whether you should donate to charity. Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
I don’t think a moral theory has to have special cases built in for judging other people’s actions, and then prescribing rewards/punishments. It should describe constraints on what is right, and then let you derive individual cases, like the righteousness of jail, from what is right in general.
Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
I don’t think a moral theory has to have special cases built in for judging other people’s actions, and then prescribing rewards/punishments
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful. In general, I don’t want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go to jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well-defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don’t see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right: judging other people’s actions is just another sort of action you can choose; it is not fundamentally a special case.
The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They’re either in jail or they are not.
Nyan is exactly right: judging other people’s actions is just another sort of action you can choose; it is not fundamentally a special case.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. Even if the action doesn’t directly impact or impacts it in a non-obvious way.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where yourself and people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
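A toy calculation of that policy argument (the probabilities and desirability scores below are made up for illustration; nothing comes from the comment itself): the one-off lie looks better locally, but the no-lying policy wins in expectation over the futures each choice makes more likely.

```python
# (probability, desirability) pairs for the futures each choice promotes; invented numbers.
lie_this_once = [
    (0.7, 5),    # avoids some immediate pain
    (0.3, -20),  # but raises the odds of a low-trust future
]
no_lying_policy = [
    (0.7, 3),    # eats the immediate pain
    (0.3, 15),   # but raises the odds of a high-trust future
]

def expected_desirability(futures):
    return sum(p * value for p, value in futures)

print(round(expected_desirability(lie_this_once), 2))    # -2.5
print(round(expected_desirability(no_lying_policy), 2))  # 6.6
```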
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Your mental judgments are actions, in the useful sense when discussing metaethics.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common-sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone’s wallet although the money is morally neutral.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones.
That is not a fact about morality; that is an implication of the naive consequentialist theory of morality—and one that is often used as an objection against it.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where yourself and people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
(...)
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
Did I somehow communicate that something was blocking that off? If you hadn’t said “I don’t know what you think is blocking that off.”, I’d have assumed you were perfectly agreeing with me on those points.
(...)
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
If you want to put your own labels on everything, then yes, that’s exactly what my theory is and that’s exactly how it works.
It just so happens that the values I happen to have include a strong component for what other people value, for the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
So yes, by your words, I’m being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.
How incredibly coincidental and curious!
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Your mental judgments are actions, in the useful sense when discussing metaethics
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
It just so happens that the values I happen to have include a strong component for what other people value, for the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
How incredibly coincidental and curious!
That was meant sarcastically: so it isn’t coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea.
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant.
That is not obvious.
To return to your previous words, I believe you’ll agree that someone who
That is incomplete.
Oh, sorry. I was jumping from place to place. I’ve edited the comment, what I meant to say was:
“To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.”
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
For me, it’s a good heuristic that judgments and thoughts also count as actions when I’m thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly.
So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they’re better for.
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
Mu, yes, no, yes.
Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
(...)
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
That isn’t a reduction that can be performed by real-world agents. You are using “reduction” in the peculiar LW sense of “ultimately composed of” rather than the more usual “understandable in terms of”. For real-world agents, morality does not reduce (2) to instrumentality: they may be obliged to override their instrumental concerns in order to be moral.
they may be obliged to override their instrumental concerns in order to be moral.
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI?
I’m not sure I understand your line of reasoning for that last part of your comment.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than…
“understandable in terms of”? What do you even mean? How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
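A rough sketch of that evaluation procedure, assuming a placeholder desirability() function standing in for the black-box comparison described above; the action names, world-state dictionaries, and numbers are invented.

```python
# Pick the action whose expected world-state desirability is highest.
def desirability(world_state):
    # Placeholder for the opaque, intuition-driven "considered better" judgment.
    return world_state.get("well_being", 0)

def expected_desirability(outcomes):
    """outcomes: list of (probability, world_state) pairs."""
    return sum(p * desirability(state) for p, state in outcomes)

def choose_action(actions):
    """actions: dict mapping an action name to its possible outcomes."""
    return max(actions, key=lambda a: expected_desirability(actions[a]))

possible = {
    "help": [(0.8, {"well_being": 10}), (0.2, {"well_being": -2})],
    "ignore": [(1.0, {"well_being": 0})],
}
print(choose_action(possible))  # -> help (expected desirability about 7.6 vs 0)
```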
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I am morally prompted to put money in the collecting tin, I lose its instrumental value. As before, I am thinking in “near” (or “real”) mode.
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
Huh? I don’t think “instrumental” means “actually will work from an omniscient PoV”. What we think of as instrumental is just an approximation, and so is what we think of as moral. Given our limitations, “don’t kill unless there are serious extenuating circumstances” is both “what is considered moral now” and as instrumental as we can achieve.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people?
I don’t see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices, and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software.
I’m not sure I understand your line of reasoning for that last part of your comment.
It’s what I say at the top: If I am morally prompted to put money in the collecting tin, I lose its instrumental value.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than...
You may have been “using” in the sense of connoting, or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory).
“understandable in terms of”? What do you even mean?
E.g.: “All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance”.
How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
That needs tabooing. It explains “reduction” in terms of “reducing”.
“In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states.”
Says who? If the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirability. (What a world of heroin addicts desire is not necessarily what is good.)
The desirability of a world-state is a black-box process
Or an algorithm that can be understood and written down, like the “description” you mention above? That is a rather important distinction.
that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates,
How does that ground out? The whole point of instrumental values is that they are instrumental for something.
the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
There’s no strong reason to think that something actually is good just because our genes say so. It’s a form of Euthyphro, as EY has noted.
If I’m parsing that right, you misunderstood my point. Sorry.
I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I’m saying, though, that this is a matter of normative ethics, not metaethics.
As a matter of metaethics, I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”. As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game theory reasons), but this should not leak into metaethics.
Do you understand what I’m getting at better now?
I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”
What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing.
Why would you differ? Maybe it’s the “double emphasis on you”. The situations in which I morally ought not do something to my advantage are those where it would affect someone else. Maybe you are an ethical egoist.
Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I’m so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being “Rape people” and “Kill people”.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
Clearly this is not the same as what you ought to do.
(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)
For more exploration into this, suppose I’m always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.
On the other hand, if I’m a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I to do this and seek to do it more than the previous one?
However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally.
How can you hate something yet praise it internally? I’m having trouble coming up with an example.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don’t change that kind of relationship by re-arranging atoms.
And what’s the rule, the algorithm, then, for deciding which acts should be praised?
The only such algorithm I know of is by looking at their (expected) consequences, and checking whether the resulting possible-futures are more desirable for some set of human minds (preferably all of them) - which is a very complicated function that so far we don’t have access to and try to estimate using our intuitions.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-”consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others.
Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise and blame and obligation.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions,
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm style relationship.
and points towards some form of something-close-to-what-I-would-call-”consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others
But that wasn’t what you were saying before. Before you were saying it was all about JGWeissman.
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm style relationship.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace “praiseworthy” with “good”, I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can’t implement it into a computer program yet.
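One way to read the claim that replacing “praiseworthy” with “good” gives the same output: both labels route through the same expected-consequence check, so they cannot come apart. A toy sketch with invented outcome numbers (not a real implementation of either notion):

```python
# "Praiseworthy" defined by the same expected-consequence check as "good".
def goodness(action, outcomes):
    """Expected value of an action; outcomes[action] is a list of (prob, value) pairs."""
    return sum(p * v for p, v in outcomes[action])

def is_good(action, outcomes, threshold=0.0):
    return goodness(action, outcomes) > threshold

def is_praiseworthy(action, outcomes, threshold=0.0):
    return is_good(action, outcomes, threshold)  # same check, so same output

outcomes = {
    "return the wallet": [(0.9, 4), (0.1, -1)],
    "keep the wallet": [(0.9, -3), (0.1, 2)],
}
print(is_good("return the wallet", outcomes))          # True
print(is_praiseworthy("return the wallet", outcomes))  # True
```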
But that wasn’t what you were saying before. Before you were saying it was all about JGWeissman.
I might have let some of that bleed through from other subthreads.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Doesn’t that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is “chocolateworthy”, if chocolate breaks your diet.
I’ve never seen any proof of this. It’s also rather easy to approximate to acceptable levels of certainty:
I’ve loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I’m pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I’m rather confident that it is, in the above scenario, instrumentally useful towards bringing about worldstates where I successfully protect lives to practice rather than not practice, since the result will depend on my skills. However, you’d call this “morally neutral”, since there’s no moral good being made by the shooting of glass bottles in itself, and it isn’t exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model of which the accuracy can be evaluated or at least estimated. And given the probability of the model’s accuracy, there is a tractable probability of lives saved.
I’m having a hard time seeing what else could be missing.
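A back-of-the-envelope version of that expected-value reasoning, with invented probabilities (the model-accuracy figure and the two save probabilities are assumptions, not numbers from the comment):

```python
p_model_accurate = 0.9         # confidence that the manual/practice model is right
p_save_if_practiced = 0.6      # chance of saving lives later, given practice and an accurate model
p_save_if_not_practiced = 0.3  # chance without practice

gain = p_model_accurate * (p_save_if_practiced - p_save_if_not_practiced)
print(round(gain, 2))  # 0.27 expected increase in the probability of lives saved
```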
I mean there is no runnable algorithm; I can’t see how “approximations” could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
I don’t see what you’re getting at. I’ll lay out my full position to see if that helps.
First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I’m asking about whether 4 is an integer.
So, given those rigidly separated mental buckets, I claim as a matter of metaethics that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is “what should I do?”, because it’s the only one I can act on. I don’t think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.
Then, on the level of normative ethics, i.e. looking from within a moral theory (which I’ve decided answers the question “what ought to be done”), I claim that I ought to act in such a way as achieves the “best” outcome, and if outcomes are morally identical, then the oughtness of them is identical, and I don’t care which is done. You can call this “consequentialism” if you like. Then, unpacking “best” a bit, we find all the good things like fun, happiness, freedom, life, etc.
Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”, which I claim are not included in what makes an action right or wrong. This terminal punishableness thing is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you’ve worked out what is terminally valuable.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
What’s wrong with sticking with “what ought to be done” as a formulation?
I claim that I ought to act in such a way as achieves the “best” outcome,
Meaning others shouldn’t? Your use of the “I” formulation is making your theory unclear.
I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”,
They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can’t be directly translated into praiseworthiness and blameworthiness because they are too hard to predict.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
I don’t see why. Do you think you are much better at making predictions?
A consequentialist considers the moral action to be the one that has good consequences. But that means moral behaviour is to perform the acts that we anticipate to have good consequences. And moral blame or praise on people is likewise assigned based on the consequences of their actions as they anticipated them...
So the consequentialist assigns moral blame if it was anticipated that the person saved was a mass murderer and was likely to kill multiple times again....
We must indeed use rules as a matter of practical necessity, but it’s just that: a matter of practical necessity. We can’t model the entirety of our future lightcone in sufficient detail, so we make generic rules like “do not lie”, “do not murder”, “don’t violate the rights of others” which seem to be more likely to have good consequences than the opposite.
But the good consequences are still the thing we’re striving for—obeying rules is just a means to that end, and therefore can be replaced or overridden in particular contexts where the best consequences are known to be achievable differently...
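A small sketch of that “rules as overridable defaults” idea, assuming a made-up confidence threshold and payoff values (none of this is from the comment): follow the generic rule unless the case-specific consequences are both clearly better and known with high confidence.

```python
def decide(rule_action, override_action, override_value, confidence,
           rule_value=1.0, confidence_needed=0.95):
    """Fall back to the rule unless the override is clearly better and well understood."""
    if confidence >= confidence_needed and override_value > rule_value:
        return override_action
    return rule_action

print(decide("do not lie", "lie", override_value=3.0, confidence=0.6))   # -> do not lie
print(decide("do not lie", "lie", override_value=3.0, confidence=0.99))  # -> lie
```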
A consequentialist is perhaps a bit scarier in the sense that you don’t know if they’ll stupidly break some significant rule by using bad judgment. But a deontologist that follows rules can likewise be scary in blindly obeying a rule which you were hoping them to break.
I agree that if what I want is a framework for assigning blame in a socially useful fashion, consequentialism violates many of our intuitions about reasonableness of such a framework.
So, sure, if the purpose of morality is to guide the apportionment of praise and blame, and we endorse those intuitions, then it follows that consequentialism is flawed relative to other models.
It’s not clear to me that either of those premises is necessary.
There’s a confusion here between consequentialistically good acts (ones that have good consequences) and consequentialistically good behaviour (acting according to your beliefs of what acts have good consequences).
People can only act according to their model of the consequences, not according to the consequences themselves.
I find your terms confusing, but yes, I agree that classifying acts is one thing and making decisions is something else, and that a consequentialist does the latter based on their expectations about the consequences, and these often get confused.
judging the moral worth of others’ actions is something a moral theory should enable one to do. It’s not something you can just give up on.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
So two consequentialists would decide that each of them has moral responsibility and the other doesn’t? Does that make sense? It is intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten.
judging the moral worth of others’ actions is something a moral theory should enable one to do.
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
So two consequentialists would decide that each of them has moral responsibility and the other doesn’t? Does that make sense?
They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question “what is B morally responsible for” does not answer the question “what should A do”, which is the only question A is interested in.
A would agree that for B, B is morally responsible for everything, but would comment that that’s not very interesting (to A) as a moral question.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
By extension, however, in case this corollary was lost in inferential distance:
For A, “What should A do?” may include making moral evaluations of B’s possible actions within A’s model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important.
Thus, by instrumental utility, A often should make a model of B in order to influence B’s actions on the world as much as possible, since this influence is one possible action A can take that influences A’s own moral responsibility towards the world.
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”.
“what should A do”, which is the only question A is interested in.
I don’t see how that follows from consequentialism or anything else.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”.
I get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don’t see things this way.
I don’t see how that follows from consequentialism or anything else.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort.
It doesn’t follow from that that you have no interest in praise and blame.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
Isn’t A interested in the actions of B and C that impinge on A?
Isn’t A interested in the actions of B and C that impinge on A?
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
“actions of B and C that impinge on A” is a subset of 1) and “giving praise and blame” is a subset of 2). “Influencing the actions of B and C” is also a subset of 2).
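A toy sketch of that framing, where “praise B” and “blame B” sit in A’s action set and get scored by expected consequences like anything else; the probabilities and weights are invented for illustration.

```python
# A's action set includes judging/praising/blaming B; each is scored consequentially.
world = {"p_B_repeats_bad_action": 0.5}

def expected_value(action, world):
    # Invented toy model: praise/blame matter only through their effect on B's behavior.
    p_repeat = world["p_B_repeats_bad_action"]
    if action == "blame B":
        p_repeat -= 0.2
    elif action == "praise B":
        p_repeat += 0.1
    return -10 * p_repeat  # bad outcomes weighted negatively

actions = ["blame B", "praise B", "do nothing"]
print(max(actions, key=lambda a: expected_value(a, world)))  # -> blame B
```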
1) The state of the world. This is important information for deciding anything.
2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
It doesn’t follow from that that you have no interest in praise and blame.
Yes, and it doesn’t follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard; it’s just not the same one I use for myself.
Isn’t A interested in the actions of B and C that impinge on A?
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
What I want out of a moral theory is to know what I ought to do.
As far as blame and praise go, consequentialism with game theory tells you how to use a system of blame and praise to provide good incentives for desired behavior.
Knowledge without motivation may lend itself to akrasia. It would also be useful for a moral theory to motivate us to do what we ought to do.
So you don’t want to be able to understand how punishments and rewards are morally justified—why someone ought, or not, be sent to jail?
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
Universalisability rides again.
The question of whether or not someone ought to go jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful. In general, I don’t want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don’t see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right, judging other people’s actions is just another sort of action you can choose, it is not fundamentally a special case.
So when you said morailty was about what you ought to do, you mean it was about was people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They’re either in jail or they are not.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justfiable in other ways. Morailty is not just decision theory. Moraility is about what people ought to do. What people ought to do the good. When something is judged good, praise and reward are given, when something is judged wrong, blame and punishment are given.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles and JGWeissman”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. Even if the action doesn’t directly impact or impacts it in a non-obvious way.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where yourself and people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, you might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Your mental judgments are actions, in the useful sense when discussing metaethics.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common sense intutiion that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone’s wallet although the money is morally neutral.
That is not an fact about morality that is a implication of the naive consequentualist theory of morality—and one that is often used as an objection against it.
Or I might be able to prudently predate. Although you are using the language of consequentialsim, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory, by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
Did I somehow communicate that something was blocking that off? If you hadn’t said “I don’t know what you think is blocking that off.”, I’d have assumed you were perfectly agreeing with me on those points.
If you want to put your own labels on everything, then yes, that’s exactly what my theory is and that’s exactly how it works.
It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
So yes, by your words, I’m being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.
How incredibly coincidental and curious!
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.
The point being what? That moral judgments have an instrumental value? That, they don’t have a moral value? That morality collapses into instrumentality.
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
That was mean sarcastically: so it isn’t coincidence. So somethig makes egoism systematically coincide with c-ism. What? I really have no idea.
What is the point of that comment?
That is not obvious.
That is incomplete.
Oh, sorry. I was jumping from place to place. I’ve edited the comment, what I meant to say was:
“To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.”
For me, it’s a good heuristic that judgments and thoughts also count as actions when I’m thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly.
So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they’re better for.
Mu, yes, no, yes.
Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
That isn’t a reduction that can be performed by real-world agents. You are using “reduction” in the peculiar LW sense of “ultimately composed of” rather than the more usual “understandable in terms of”. For real-world agents, morality does not reduce (2) to instrumentality: they may be obliged to overide their instrumental concerns in order to be moral.
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI?
I’m not sure I understand your line of reasoning for that last part of your comment.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than…
“understandable in terms of”? What do you even mean? How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
If I am morally prompted to put money in the collecting tin, I lose its instrumental value As before, I am thinking in “near” (or “real”) mode.
Huh? I don’t think “instrumental” means “actually will work form an omniscicent PoV”. What we think of as instrumental is just an approximation, and so is what we think of as moral.. Given our limitations, “don’t kill unless there are serious extenuating circumsntaces” is both “what is considered moral now” and as instrumental as we can achieve.
I don’t see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices,and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software.
It’s what I say at the top: If I am morally prompted to put money in the collecting tin, I lose its instrumental value
You may have been “using” in the sense of connoting, or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory).
Eg:”All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance”.
That needs tabooing. It explains “reduction” in terms of “reducing”.
“In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states.”
Says who? if the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirabililty. (What a world of heroin addicts desire is not necessaruly what is good).
Or an algorithm that can be understood and written down, like the “description” you mention above? That is a rather important distinction.
How does that ground out? The whole point of instrumental values is that they are instrumental for something.
There’s no strong reason to think that something actually is good just because our genes say so. It’s a form of Euthyphro, as EY has noted.
If I’m parsing that right, you misunderstood my point. Sorry.
I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I’m saying, though, that this is a matter of normative ethics, not metaethics.
As a matter of metaethics, I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”. As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game theory reasons), but this should not leak into metaethics.
Do you understand what I’m getting at better now?
What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing.
Why would you differ? Maybe it’s the “double emphasis on you”. The situations in which I morally ought not do something to my advantage are those where it would affect someone else. Maybe you are an ethical egoist.
Soooo...
Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I’m so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being “Rape people” and “Kill people”.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
Clearly this is not the same as what you ought to do.
(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)
For more exploration into this, suppose I’m always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.
On the other hand, if I’m a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Should I do this, and seek to do it more than the previous one?
How can you hate something yet praise it internally? I’m having trouble coming up with an example.
I know a very good one, very grounded in reality, that millions if not billions of people have done and still do.
Death.
No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don’t change that kind of relationship by re-arranging atoms.
And what’s the rule, the algorithm, then, for deciding which acts should be praised?
The only such algorithm I know of is by looking at their (expected) consequences, and checking whether the resulting possible futures are more desirable for some set of human minds (preferably all of them), which is a very complicated function that so far we don’t have access to and try to estimate using our intuitions.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-“consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself or others, or of possible actions to take for oneself or others.
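To make the shape of that concrete, here’s a minimal sketch in Python. Everything in it is a hypothetical stand-in: “sample_future” for whatever predictive model we’re using, “desirability” for the complicated black-box evaluation we don’t actually have access to. The point is only that “praiseworthy” and “good” come out as the same computation over expected consequences:

```python
# Minimal sketch only: `sample_future` and `desirability` are hypothetical
# stand-ins for things we do not actually have (a predictive model of
# consequences, and the black-box evaluation of world-states by human minds).

def expected_desirability(action, sample_future, desirability, n_samples=1000):
    # Crude Monte Carlo estimate: average the desirability of the possible
    # futures predicted for this action.
    futures = [sample_future(action) for _ in range(n_samples)]
    return sum(desirability(f) for f in futures) / n_samples

def choose_action(actions, sample_future, desirability):
    # On this picture, "good" and "praiseworthy" cash out the same way:
    # pick the action whose expected consequences are most desirable.
    return max(actions, key=lambda a: expected_desirability(a, sample_future, desirability))
```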
Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise, blame and obligation.
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm style relationship.
But that wasn’t what you were saying before. Before, you were saying it was all about JGWeissman.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace “praiseworthy” with “good”, I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can’t implement it into a computer program yet.
I might have let some of that bleed through from other subthreads.
Doesn’t that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is “chocolateworthy”, if chocolate breaks your diet.
No one can do that whatever theory they have. I don’t see how it is relevant.
Which isn’t actually computable.
I’ve never seen any proof of this. It’s also rather easy to approximate to acceptable levels of certainty:
I’ve loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I’m pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I’m rather confident that it is, in the above scenario, instrumentally useful towards bringing about worldstates where I successfully protect lives to practice rather than not practice, since the result will depend on my skills. However, you’d call this “morally neutral”, since there’s no moral good being made by the shooting of glass bottles in itself, and it isn’t exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model of which the accuracy can be evaluated or at least estimated. And given the probability of the model’s accuracy, there is a tractable probability of lives saved.
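To put some purely illustrative numbers on that (all three figures below are made up for the sake of the example, not estimates I actually endorse):

```python
# Hypothetical numbers for the pistol-practice example above.
p_success_if_practiced = 0.9   # assumed chance I protect the lives if I have practiced
p_success_if_untrained = 0.5   # assumed chance if I have not
lives_at_stake = 3             # assumed number of lives involved

expected_lives_gained = (p_success_if_practiced - p_success_if_untrained) * lives_at_stake
print(expected_lives_gained)   # ≈ 1.2 expected lives saved by practicing
```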
I’m having a hard time seeing what else could be missing.
I mean there is no runnable algorithm; I can’t see how “approximations” could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
Neither is half of math. Many differential equations are uncomputable, and yet they are very useful. Why should a moral theory be computable?
(and “maximize expected utility” can be approximated computably, like most of those uncomputable differential equations)
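As a rough illustration of what I mean (a sketch under my own assumptions, not anyone’s actual moral algorithm): we happily use Euler’s method to approximate differential equations we can’t solve exactly, and in the same spirit an expected utility we can’t compute exactly can be estimated by sampling, with the estimate improving as we spend more compute:

```python
import random

# Euler's method: a useful numerical approximation to ODEs we cannot solve exactly.
def euler(f, y0, t0, t1, steps=1000):
    dt = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += dt * f(t, y)
        t += dt
    return y

# e.g. approximate y(1) for y' = y, y(0) = 1 (exact answer: e ≈ 2.718...)
approx_e = euler(lambda t, y: y, 1.0, 0.0, 1.0)

# In the same spirit, an intractable expected utility can be estimated by
# sampling outcomes. `sample_outcome` and `utility` are hypothetical stand-ins.
def estimate_expected_utility(sample_outcome, utility, n=10000):
    return sum(utility(sample_outcome()) for _ in range(n)) / n

# Toy usage: approximate E[u] where outcomes are fair coin flips worth 0 or 1.
approx_eu = estimate_expected_utility(lambda: random.random() < 0.5,
                                      lambda won: 1.0 if won else 0.0)
```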
I don’t see what you’re getting at. I’ll lay out my full position to see if that helps.
First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I’m asking whether 4 is an integer.
So, given those rigidly separated mental buckets, I claim, as a matter of metaethics, that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is “what should I do?”, because it’s the only one I can act on. I don’t think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.
Then, on the level of normative ethics, i.e. looking from within a moral theory (which I’ve decided answers the question “what ought to be done”), I claim that I ought to act in such a way as achieves the “best” outcome, and if outcomes are morally identical, then their oughtness is identical, and I don’t care which is done. You can call this “consequentialism” if you like. Then, unpacking “best” a bit, we find all the good things like fun, happiness, freedom, life, etc.
Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”, which I claim are not included in what makes an action right or wrong. This terminal punishableness is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you’ve worked out what is terminally valuable.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
What’s wrong with sticking with “what ought to be done” as a formulation?
Meaning others shouldn’t? Your use of the “I” formulation is making your theory unclear.
They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can’t be directly translated into praiseworthiness and blameworthiness because they are too hard to predict.
I don’t see why. Do you think you are much better at making predictions?
A consequentialist considers the moral action to be the one that has good consequences.
But that means moral behaviour is to perform the acts that we anticipate to have good consequences.
And moral blame or praise on people is likewise assigned on the consequences of their actions as they anticipated them...
So the consequentialist assigns moral blame if it was anticipated that the person saved was a mass murderer and was likely to kill multiple times again....
And how do we anticipate or project, save on the basis of relatively tractable rules?
We must indeed use rules as a matter of practical necessity, but it’s just that: a matter of practical necessity. We can’t model the entirety of our future lightcone in sufficient detail, so we make generic rules like “do not lie”, “do not murder”, “don’t violate the rights of others”, which seem more likely to have good consequences than the opposite.
But the good consequences are still the thing we’re striving for; obeying rules is just a means to that end, and therefore can be replaced or overridden in particular contexts where the best consequences are known to be achievable differently...
A consequentialist is perhaps a bit scarier in the sense that you don’t know if they’ll stupidly break some significant rule by using bad judgment. But a deontologist that follows rules can likewise be scary in blindly obeying a rule which you were hoping them to break.
In the case of super-intelligent agents that shared my values, I’d hope for them to be consequentialists. As the intelligence of the agent decreases, there’s assurance in some limited type of deontology… “For the good of the tribe, do not murder even for the good of the tribe...”
That’s the kind of Combination approach I was arguing for.
My understanding of pure Consequentialism is that this is exactly the approach it promotes.
Am I to understand that you’re arguing for consequentialism by rejecting “consequentialism” and calling it a “combination approach”?
That would be why he specified “simpler versions”, yes?
Yes
I agree that if what I want is a framework for assigning blame in a socially useful fashion, consequentialism violates many of our intuitions about reasonableness of such a framework.
So, sure, if the purpose of morality is to guide the apportionment of praise and blame, and we endorse those intuitions, then it follows that consequentialism is flawed relative to other models.
It’s not clear to me that either of those premises is necessary.
There’s a confusion here between consequentialistically good acts (ones that have good consequences) and consequentialistically good behaviour (acting according to your beliefs of what acts have good consequences).
People can only act according to their model of the consequences, not according to the consequences themselves.
I find your terms confusing, but yes, I agree that classifying acts is one thing and making decisions is something else, and that a consequentialist does the latter based on their expectations about the consequences, and these often get confused.
That’s not a flaw in consequentialism. It’s a flaw in judging other people’s morality.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
Judging the moral worth of others’ actions is something a moral theory should enable one to do. It’s not something you can just give up on.
So two consequentialists would decide that each of them has moral responsibility and the other doesn’t? Does that make sense? It is intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten.
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question “what is B morally responsible for” does not answer the question “what should A do”, which is the only question A is interested in.
A would agree that for B, B is morally responsible for everything, but would comment that that’s not very interesting (to A) as a moral question.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
By extension, however, in case this corollary was lost in inferential distance:
For A, “What should A do?” may include making moral evaluations of B’s possible actions within A’s model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important.
Thus, by instrumental utility, A often should make a model of B in order to influence B’s actions on the world as much as possible, since this influence is one possible action A can take that influences A’s own moral responsibility towards the world.
Indeed. I would consider it a given that you should model the objects in your world if you want to predict and influence the world.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”.
I don’t see how that follows from consequentialism or anything else.
Then it is limited.
I get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don’t see things this way.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
It doesn’t follow from that that you have no interest in praise and blame.
Isn’t A interested in the actions of B and C that impinge on A?
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
“actions of B and C that impinge on A” is a subset of 1) and “giving praise and blame” is a subset of 2). “Influencing the actions of B and C” is also a subset of 2).
1) The state of the world. This is important information for deciding anything. 2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
Or, briefly “The Union of A and not-A”
or, more briefly still:
“Everything”.
Yes, and it doesn’t follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard, it’s just not the same as I use for myself.
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
Your metaethics treats everyone as acting but not acted on?