I can only interpret a statement like this as “they are exactly like you would be if you were exactly like them”, which is of course a tautology.
No. If they were, say, psychopaths, or babyeater aliens in human skins, then living their life—holding the same beliefs, experiencing the same problems—would not make you act the same way. It’s a question of terminal value differences and instrumental value differences. The former must be fought (or at most bargained with), but the latter can be persuaded.
If you first accept a definition of what is good and what is bad, then certainly there are bad people. A bad person is someone who does bad things. This is still relative to some morality, presumably that of the speaker.
So anyone whose actions have negative consequences “deserves” Bad Things to happen to them?
My point is that the distinction between “Bad Person” and “Good Person” seems … well, arbitrary. Anyone’s actions can have Bad Consequences. I guess that didn’t come across so well, huh?
This is a flaw with (ETA: simpler versions of) consequentialism: no one can accurately predict the long-range consequences of their actions. But it is unreasonable to hold someone culpable, to blame them, for what they cannot predict. So the consequentialist notion of good and bad actions doesn’t translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise. This line of thinking can lead to a kind of fusion of deontology and consequentialism: we praise someone for following the rules (“as a rule, try to save a life where you can”) even if the consequences were unwelcome (“The person you saved was a mass murderer”).
So the consequentialist notion of good and bad actions doesn’t translate directly into what we want from a practical moral theory: guidance as to how to apportion blame and praise.
What I want out of a moral theory is to know what I ought to do.
As far as blame and praise go, consequentialism with game theory tells you how to use a system of blame and praise to provide good incentives for desired behavior.
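A minimal sketch of what that kind of incentive calculation might look like (the policies, probabilities, and payoffs below are all invented for illustration, not a worked-out theory): the consequentialist treats “praise good acts” as just another policy and picks whichever policy has the better expected outcome.

```python
# Toy incentive calculation: choose a praise/blame policy by expected
# consequences, not by desert. All numbers are hypothetical.

VALUE_OF_COOPERATION = 10   # value of one cooperative act to everyone affected
COST_OF_FEEDBACK = 1        # effort spent praising/blaming

# Assumed behavioral model: visible praise makes repeat cooperation more likely.
P_COOPERATE = {"praise_good_acts": 0.8, "give_no_feedback": 0.5}

def expected_value(policy: str) -> float:
    cost = COST_OF_FEEDBACK if policy == "praise_good_acts" else 0.0
    return P_COOPERATE[policy] * VALUE_OF_COOPERATION - cost

print({p: expected_value(p) for p in P_COOPERATE})
print("chosen policy:", max(P_COOPERATE, key=expected_value))
```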
It seems to me that judging people and sending them to jail is on the level of actions, like whether you should donate to charity. Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
I don’t think a moral theory has to have special cases built in for judging other people’s actions, and then prescribing rewards/punishments. It should describe constraints on what is right, and then let you derive individual cases, like the righteousness of jail, from what is right in general.
Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
I don’t think a moral theory has to have special cases built in for judging other people’s actions, and then prescribing rewards/punishments
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful. In general, I don’t want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go to jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well-defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don’t see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right, judging other people’s actions is just another sort of action you can choose, it is not fundamentally a special case.
The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They’re either in jail or they are not.
Nyan is exactly right, judging other people’s actions is just another sort of action you can choose, it is not fundamentally a special case.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. This holds even if the action doesn’t directly impact those futures, or impacts them in a non-obvious way.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less. Since lying is much more likely to be hurtful than beneficial and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
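To make the shape of that argument concrete, here is a toy calculation (every payoff and probability below is invented purely for illustration): an act is scored by its immediate payoff plus the expected desirability of the futures whose odds it shifts.

```python
# Toy expected-value comparison for the lying example above.
# All payoffs and probabilities are made up for illustration.

futures = {"high_trust": 100, "low_trust": 20}   # desirability of each broad future

def expected_value(immediate_payoff: float, p_high_trust: float) -> float:
    p = {"high_trust": p_high_trust, "low_trust": 1 - p_high_trust}
    return immediate_payoff + sum(p[f] * futures[f] for f in futures)

# Lying spares some pain right now (+5) but makes the low-trust future likelier;
# the no-lying policy costs that +5 but raises the odds of the high-trust future.
lie_now       = expected_value(immediate_payoff=5, p_high_trust=0.3)
no_lying_rule = expected_value(immediate_payoff=0, p_high_trust=0.6)

print(lie_now, no_lying_rule)   # 49.0 vs 68.0: the policy wins despite the local cost
```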
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Your mental judgments are actions, in the useful sense when discussing metaethics.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common-sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone’s wallet although the money is morally neutral.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones.
That is not a fact about morality; it is an implication of the naive consequentialist theory of morality—and one that is often used as an objection against it.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less. Since lying is much more likely to be hurtful than beneficial and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where it will be immediately negative.
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, and between instrumental and moral decisions.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
(...)
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
Did I somehow communicate that something was blocking that off? If you hadn’t said “I don’t know what you think is blocking that off.”, I’d have assumed you were perfectly agreeing with me on those points.
(...)
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
If you want to put your own labels on everything, then yes, that’s exactly what my theory is and that’s exactly how it works.
It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
So yes, by your words, I’m being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.
How incredibly coincidental and curious!
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, and between instrumental and moral decisions.
Your mental judgments are actions, in the useful sense when discussing metaethics
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is doing something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
How incredibly coincidental and curious!
That was meant sarcastically: so it isn’t coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea.
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant.
That is not obvious.
To return to your previous words, I believe you’ll agree that someone who
That is incomplete.
Oh, sorry. I was jumping from place to place. I’ve edited the comment, what I meant to say was:
“To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is doing something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.”
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
For me, it’s a good heuristic that judgments and thoughts also count as actions when I’m thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly.
So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they’re better for.
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
Mu, yes, no, yes.
Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
(...)
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
That isn’t a reduction that can be performed by real-world agents. You are using “reduction” in the peculiar LW sense of “ultimately composed of” rather than the more usual “understandable in terms of”. For real-world agents, morality does not reduce to instrumentality: they may be obliged to override their instrumental concerns in order to be moral.
they may be obliged to override their instrumental concerns in order to be moral.
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI?
I’m not sure I understand your line of reasoning for that last part of your comment.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than…
“understandable in terms of”? What do you even mean? How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
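If it helps, here is the rough shape of that loop in code (purely a sketch of what I mean; the real “desirability” function is the opaque one human brains run, and every function below is a named stand-in, not an actual implementation):

```python
# Sketch of the evaluation loop described above. The scoring rule and the
# world-model are placeholders; the point is only the structure of the loop.

def desirability(worldstate: dict) -> float:
    # Black box in reality: the brain's comparison of a worldstate against
    # its preferred worldstates. Here, a trivial stand-in score.
    return worldstate.get("wellbeing", 0) - worldstate.get("suffering", 0)

def predict(worldstate: dict, action: dict) -> dict:
    # Placeholder world-model: expected worldstate after taking the action.
    new_state = dict(worldstate)
    new_state.update(action["expected_effects"])
    return new_state

def choose(worldstate: dict, actions: list) -> dict:
    # "Evaluating morality" as described: pick whatever is most instrumental
    # toward the most desirable predicted worldstate.
    return max(actions, key=lambda a: desirability(predict(worldstate, a)))
```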
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I am morally prompted to put money in the collecting tin, I lose its instrumental value. As before, I am thinking in “near” (or “real”) mode.
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
Huh? I don’t think “instrumental” means “actually will work from an omniscient PoV”. What we think of as instrumental is just an approximation, and so is what we think of as moral. Given our limitations, “don’t kill unless there are serious extenuating circumstances” is both “what is considered moral now” and as instrumental as we can achieve.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people?
I don’t see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices, and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software.
I’m not sure I understand your line of reasoning for that last part of your comment.
It’s what I say at the top: If I am morally prompted to put money in the collecting tin, I lose its instrumental value
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than...
You may have been “using” in the sense of connoting, or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory).
“understandable in terms of”? What do you even mean?
Eg:”All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance”.
How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
That needs tabooing. It explains “reduction” in terms of “reducing”.
“In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states.”
Says who? If the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirability. (What a world of heroin addicts desire is not necessarily what is good.)
The desirability of a world-state is a black-box process
Or an algorithm that can be understood and written down, like the “description” you mention above? That is a rather important distinction.
that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates,
How does that ground out? The whole point of instrumental values is that they are instrumental for something.
the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
There’s no strong reason to think that something actually is good just because our genes say so. It’s a form of the Euthyphro dilemma, as EY has noted.
If I’m parsing that right, you misunderstood my point. Sorry.
I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I’m saying, though, that this is a matter of normative ethics, not metaethics.
As a matter of metaethics, I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”. As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game theory reasons), but this should not leak into metaethics.
I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”
What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing.
Why would you differ? Maybe it’s the “double emphasis on you”. The situations in which I morally ought not to do something to my advantage are the ones where it would affect someone else. Maybe you are an ethical egoist.
Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I’m so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being “Rape people” and “Kill people”.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
Clearly this is not the same as what you ought to do.
(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)
For more exploration into this, suppose I’m always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.
On the other hand, if I’m a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I to do this, and seek to do it more than the previous option?
However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally.
How can you hate something yet praise it internally? I’m having trouble coming up with an example.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don’t change that kind of relationship by re-arranging atoms.
And what’s the rule, the algorithm, then, for deciding which acts should be praised?
The only such algorithm I know of is by looking at their (expected) consequences, and checking whether the resulting possible-futures are more desirable for some set of human minds (preferably all of them) - which is a very complicated function that so far we don’t have access to and try to estimate using our intuitions.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-”consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others
Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise and blame and obligation.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions,
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm style relationship.
and points towards some form of something-close-to-what-I-would-call-”consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others
But that wasn’t what you were saying before. Before you were saying it was all about JGWeissman.
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm style relationship.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace “praiseworthy” with “good”, I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can’t implement it into a computer program yet.
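Put as code, the point I’m gesturing at is something like the sketch below. Nothing here is a real value function; the toy evaluator is invented just to show that “praiseworthy” defined as “good” adds no computational step:

```python
# If "praiseworthy" just means "good", a program computing it delegates to the
# same (still unknown) evaluation of expected consequences. The evaluator here
# is a made-up stand-in, not a real value function.

def goodness(action: str, expected_desirability) -> float:
    # The hard, unsolved part: scoring the expected consequences of an action.
    # In practice this is approximated by human intuition, not by code we have.
    return expected_desirability(action)

def praiseworthiness(action: str, expected_desirability) -> float:
    # Tautology-style relationship: identical output, no extra information.
    return goodness(action, expected_desirability)

toy_evaluator = lambda a: {"save a life": 100, "tell a hurtful lie": -5}.get(a, 0)
print(praiseworthiness("save a life", toy_evaluator) == goodness("save a life", toy_evaluator))  # True
```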
But that wasn’t what you were saying before. Before you were saying it was all about JGWeissman.
I might have let some of that bleed through from other subthreads.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Doesn’t that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is “chocolateworthy”, if chocolate breaks your diet.
I’ve never seen any proof of this. It’s also rather easy to approximate to acceptable levels of certainty:
I’ve loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I’m pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I’m rather confident that it is, in the above scenario, instrumentally useful towards bringing about worldstates where I successfully protect lives to practice rather than not practice, since the result will depend on my skills. However, you’d call this “morally neutral”, since there’s no moral good being made by the shooting of glass bottles in itself, and it isn’t exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model of which the accuracy can be evaluated or at least estimated. And given the probability of the model’s accuracy, there is a tractable probability of lives saved.
I’m having a hard time seeing what else could be missing.
I mean there is no runnable algorithm; I can’t see how “approximations” could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
I don’t see what you’re getting at. I’ll lay out my full position to see if that helps.
First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I’m asking about whether 4 is an integer.
So, given those rigidly separated mental buckets, I claim as a matter of metaethics that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is “what should I do?”, because it’s the only one I can act on. I don’t think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.
Then, on the level of normative ethics, i.e. looking from within a moral theory (which I’ve decided answers the question “what ought to be done”), I claim that I ought to act in such a way as to achieve the “best” outcome, and if outcomes are morally identical, then the oughtness of them is identical, and I don’t care which is done. You can call this “consequentialism” if you like. Then, unpacking “best” a bit, we find all the good things like fun, happiness, freedom, life, etc.
Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”, which I claim are not included in what makes an action right or wrong. This terminal punishableness thing is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you’ve worked out what is terminally valuable.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
What’s wrong with sticking with “what ought to be done” as formulation?
I claim that I ought to act in such a way as to achieve the “best” outcome,
Meaning others shouldn’t? Your use of the “I” formulation is making your theory unclear.
I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”,
They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can’t be directly translated into praiseworthiness and blameworthiness because they are too hard to predict.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
I don’t see why. Do you think you are much better at making predictions?
A consequentialist considers the moral action to be the one that has good consequences. But that means moral behaviour is to perform the acts that we anticipate to have good consequences. And moral blame or praise on people is likewise assigned based on the consequences of their actions as they anticipated them...
So the consequentialist assigns moral blame if it was anticipated that the person saved was a mass murderer and was likely to kill multiple times again....
We must indeed use rules as a matter of practical necessity, but it’s just that: a matter of practical necessity. We can’t model the entirety of our future lightcone in sufficient detail, so we make generic rules like “do not lie”, “do not murder”, and “don’t violate the rights of others”, which seem to be more likely to have good consequences than the opposite.
But the good consequences are still the thing we’re striving for—obeying rules is just a means to that end, and therefore can be replaced or overridden in particular contexts where the best consequences are known to be achievable differently...
A consequentialist is perhaps a bit scarier in the sense that you don’t know if they’ll stupidly break some significant rule by using bad judgment. But a deontologist that follows rules can likewise be scary in blindly obeying a rule which you were hoping them to break.
I agree that if what I want is a framework for assigning blame in a socially useful fashion, consequentialism violates many of our intuitions about reasonableness of such a framework.
So, sure, if the purpose of morality is to guide the apportionment of praise and blame, and we endorse those intuitions, then it follows that consequentialism is flawed relative to other models.
It’s not clear to me that either of those premises is necessary.
There’s a confusion here between consequentialistically good acts (ones that have good consequences) and consequentialistically good behaviour (acting according to your beliefs of what acts have good consequences).
People can only act according to their model of the consequences, not according to the consequences themselves.
I find your terms confusing, but yes, I agree that classifying acts is one thing and making decisions is something else, and that a consequentialist does the latter based on their expectations about the consequences, and these often get confused.
judging the moral worth of others’ actions is something a moral theory should enable one to do. It’s not something you can just give up on.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
So two consequentialists would decide that each of them has moral responsibility and the other doesn’t? Does that make sense? Is it intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten?
judging the moral worth of others’ actions is something a moral theory should enable one to do.
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
So two consequentialists would decide that each of them has moral responsibility and the other doesn’t? Does that make sense?
They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question “what is B morally responsible for” does not answer the question “what should A do”, which is the only question A is interested in.
A would agree that for B, B is morally responsible for everything, but would comment that that’s not very interesting (to A) as a moral question.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
By extension, however, in case this corollary was lost in inferential distance:
For A, “What should A do?” may include making moral evaluations of B’s possible actions within A’s model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important.
Thus, by instrumental utility, A often should make a model of B in order to influence B’s actions on the world as much as possible, since this influence is one possible action A can take that influences A’s own moral responsibility towards the world.
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”
“what should A do”, which is the only question A is interested in.
I don’t see how that follows from consequentialism or anything else.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”
I get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don’t see things this way.
I don’t see how that follows from consequentialism or anything else.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort.
It doesn’t follow from that that you have no interest in praise and blame.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
Isn’t A interested in the actions of B and C that impinge on A?
Isn’t A interested in the actions of B and C that impinge on A?
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
“actions of B and C that impinge on A” is a subset of 1) and “giving praise and blame” is a subset of 2). “Influencing the actions of B and C” is also a subset of 2).
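A crude sketch of how that framing cashes out (the action list and “expected good” numbers are entirely invented; the point is only that judging or influencing B sits inside A’s ordinary action set):

```python
# Judging, praising, blaming, or influencing B are just candidate actions in
# A's own decision problem, scored like any other. Numbers are hypothetical.

candidate_actions = {
    "donate to charity":        8,
    "publicly blame B's fraud": 6,   # expected to deter repeat behavior
    "praise B's honesty":       4,   # expected to reinforce it
    "do nothing":               0,
}

def score(action: str) -> float:
    # Stand-in for A's world-model: expected desirability of the future state
    # brought about by this action.
    return candidate_actions[action]

print(max(candidate_actions, key=score))   # whichever action A expects to do the most good
```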
It doesn’t follow from that that you have no interest in praise and blame.
Yes, and it doesn’t follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard, it’s just not the same as I use for myself.
Isn’t A interested in the actions of B and C that impinge on A?
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
Yes, but even that is subject to counter-arguments and further debate, so I think the point is in trying to find something that more appropriately describes exactly what we’re looking for.
After all, proportionality and other factors have to be taken into account. If Einstein takes more actions with Good Consequences and fewer actions with Bad Consequences than John Q. Eggfart, I don’t anticipate this to be solely because John Q. Eggfart is a Bad Person with a broken morality system. I suspect Mr. Eggfart’s IQ of 75 to have something to do with it.
I wonder if 1,000 people upvoted this comment, in series with 1,000 people voting it down. I’d like to know 1/(# of reads) or 1/(number of votes). Can we use network theory to assume that people here conform to the first-mover theory? (i.e., “If a post starts getting upvoted, it then continues to be upvoted, whereas if a post starts getting downvoted or ignored, it continues to get downvoted or ignored, or at least has a greater probability of being so.”)
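One rough way to probe that intuition would be a simulation along these lines (the cascade model and every parameter in it are invented, not measured from this site): let each voter’s chance of upvoting be nudged by the comment’s current score and see whether early votes dominate the final result.

```python
# Toy "first-mover" cascade model: each voter's upvote probability is nudged
# by the comment's current score. All parameters are made up for illustration.
import random

def simulate(n_voters=2000, base_p=0.5, herd_strength=0.05, seed=0):
    random.seed(seed)
    score = 0
    for _ in range(n_voters):
        p_up = min(max(base_p + herd_strength * score, 0.0), 1.0)
        score += 1 if random.random() < p_up else -1
    return score

# Under this model, final scores cluster near the extremes rather than near
# zero: whichever direction the early votes break tends to win.
print([simulate(seed=s) for s in range(5)])
```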
I suspect Mr. Eggfart’s IQ of 75 to have something to do with it.
He also might be a sociopath with an IQ superior to Einstein’s. He also might be a John von Neumann, (successfully?) arguing in favor of nuking Russia, because he thinks that Russia is evil (correct) and that Russia is full of scientists who are almost as smart as himself (maybe correct), and because it’s logical to do so (possibly correct, but seemingly not, based on the outcome), or he might think that everyone is as logical as possible (incorrect), or he might not have empathy for those who don’t take the opportunities they’re given (who’s to say if he’s right?). In hindsight, I’m really glad the USA didn’t nuke Russia. In hindsight, I’m very glad that Von Neumann wasn’t killed in order to minimize his destructiveness, but that democracy managed to mitigate his (and Goldwater’s) destructiveness. (Goldwater was the better candidate overall, on all subjects, but his willingness to use the bomb was a fatal, grotesque, and unacceptable flaw in that otherwise “better overall.” Goldwater’s attitude towards the bomb was similar to, and seemingly informed by, von Neumann.)
I do support punishing sociopaths legally, even if they didn’t think it was wrong when they raped and murdered your wife. What the sociopath thinks doesn’t diminish the harm they knowingly caused. The legal system should be a disincentive toward actual wrong. When the legal system operates properly, it is a blessing that allows the emergence of market-based civilization. The idea of a “right” is not necessarily a deontological philosophical claim, but a legal one.
As a consequentialist, I don’t necessarily hate sociopaths. I understand why they exist, from an evolutionary perspective. …But I might still kill one if I had to, in order to serve what I anticipated to be the optimal good. I might also kill one in retaliation, because they had taken something valuable from me (such as the life of a loved one), and I wished to make it clear to them that their choice to steal from me rightfully enraged me (vengeance, punishment).
While I don’t think that (even righteous) punishment is the grandest motive, I also don’t deny others their (rightful) desires for punishment. There is a “right” and a “wrong” external to outcomes, based on philosophy that is mutually-compatible with consequentialism. If we were all submissive slaves, there would be a lot of “peace,” but I still wouldn’t likely choose such an existence over a violent but possibly more free existence.
If you mean that some people choose poorly or are simply unlucky, yes.
If you mean that some people are Evil and so take Evil actions, then … well, yes, I suppose, psychopaths. But most Bad Consequences do not reflect some inherent deformity of the soul, which is all I’m saying.
Classifying people as Bad is not helpful. Classifying people as Dangerous … is. My only objection is turning people into Evil Mutants—which the comment I originally replied to was doing. (“Bad Things are done by Bad People who deserve to be punished.”)
If you mean that some people are Evil and so take Evil actions, then … well, yes, I suppose, psychopaths. But most Bad Consequences do not reflect some inherent deformity of the soul, which is all I’m saying.
I’d prefer to leave “the soul” out of this.
How do you know that most bad consequences don’t involve sociopaths or their influence? It seems unlikely to me that that’s not the case.
Also, don’t forget conformists who obey sociopaths. Franz Stangl said he felt “weak in the knees” when he was pushing gas chamber doors shut on a group of women and kids. …But he did it anyway.
Wagner gleefully killed women and kids.
Yet, we also rightfully call Stangl an evil person, and rightfully punish him, even though he was “Just following orders.” In hindsight, even his claims that the democide of over 6 million Jews and 10 million German dissidents and dissenters was solely for theft and without racist motivations, doesn’t make me want to punish him less.
I’m aware many people who believe this don’t literally think of it in terms of the soul (if only because they don’t think about it at all), but I think it’s a good shorthand for the ideas involved.
How do you know that most bad consequences don’t involve sociopaths or their influence?
Observing simple incompetence in the environment.
Franz Stangl [...] Wagner
I should probably note I’m not familiar with these individuals, although the names do ring a faint bell.
Franz Stangl said he felt “weak in the knees” when he was pushing gas chamber doors shut on a group of women and kids. …But he did it anyway.
Seems like evidence for my previous statements. No?
Wagner gleefully killed women and kids.
These are Nazis, yes? I wouldn’t be that surprised if some of them were “gleeful” even if they had literally no psychopaths among their ranks—unlikely from a purely statistical standpoint.
Yet, we also rightfully call Stangl an evil person, and rightfully punish him, even though he was “Just following orders.”
While my contrarian tendencies are screaming at me to argue this was, in fact, completely unjust … I can see some neat arguments for that …
We punished Nazis who were “just obeying orders”—and now nobody can use that excuse. Seems like a pretty classic example of punishment setting an example for others. No “they’re monsters and must suffer” required.
In hindsight, even his claims that the democide of over 6 million Jews and 10 million German dissidents and dissenters was solely for theft and without racist motivations, doesn’t make me want to punish him less.
I’m probably more practiced at empathising with racists, and specifically Nazis—just based on your being drawn from our culture—but surely racist beliefs are a more sympathetic motivation than greed?
(At least, if we ignore the idea of bias possibly leading to racist beliefs that justify benefiting ourselves at their expense, which you are, right?)
In fact, there is a blind spot in most people’s realities that’s filled by their evolutionarily-determined blindness to sociopaths. This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons, total lack of empathy, as described by Robert Hare in “without conscience”) with modern technology and a support network of other sociopaths.
In fact, virtually everyone who hasn’t read Stanley Milgram’s book about it, and put in a lot of thought about its implications is in this category. I’m not suggesting that you or anyone else in this conversation is “bad” or “ignorant,” but just that you might not be referencing an accurate picture of political thought, political reality, political networks.
The world still doesn’t have much of a problem with the “initiation of force” or “aggression.” (Minus a minority of enlightened libertarian dissenters.) …Especially not when it’s labeled as “majoritarian government.” ie: “Legitimized by a vote.” However, a large and growing number of people who see reality accurately (small-L libertarians) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people.
Stangl could have recognized that the murder of women and children was “too wrong to tolerate.” In fact, he did recognize this, by his comment that he felt “weak in the knees” while pushing women and children into the gas chamber. That he chose to follow “the path of compliance” “the path of obedience” and “the path of nonresistance” (all those prior paths are different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).
The reason I still judge the Nazis (and their modern equivalents) harshly is because they faced significant opposition, but it was almost as wrong as they were. The Levellers innovated proper jury trials in the 1600s, and restored them by 1670, in the trial of William Penn. It wasn’t as if Austria was without its “Golden Bull” either. Instead, they chose a mindless interpretation of “the will to power.”
The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler’s rise to power. Adam Smith had written “The Wealth of Nations” over a century earlier. The Federalist and Anti-Federalists were right in incredible detail again, over a century earlier.
So, is this trolling? You cite the Milgram experiment, in which the authorities did not pretend to represent the government. The prevalence and importance of non-governmental authority in real life is one of the main objections to libertarianism, especially the version you seem to promote here (right-wing libertarianism as moral principle).
I’m on a mobile device right now—I’ll go over your arguments, links, and videos in more detail later, so here are my immediate responses, nothing more.
In fact, there is a blind spot in most people’s realities that’s filled by their evolutionarily-determined blindness to sociopaths.
Wait, why would evolution make us vulnerable to sociopaths? Wouldn’t patching such a weakness be an evolutionary advantage?
This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons...
Wouldn’t a total lack of mirror neurons make people much harder to predict, crippling social skills?
I’m not suggesting that you or anyone else in this conversation is “bad” or “ignorant,” but just that you might not be referencing an accurate picture of political thought, political reality, political networks.
“Ignorant” is not, and should not be, a synonym for “bad”. If you have valuable information for me, I’ll own up to it.
The world still doesn’t have much of a problem with the “initiation of force” or “aggression.”
Those strike me as near-meaningless terms, with connotations chosen specifically so people will have a problem with them despite their vagueness.
That he chose to follow “the path of compliance” “the path of obedience” and “the path of nonresistance” (all those prior paths are different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).
Did you accidentally a word there? I don’t follow your point.
The reason I still judge the Nazis … they chose a mindless interpretation of “the will to power.” The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler’s rise to power.
And clearly, they all deliberately chose the suboptimal choice, in full knowledge of their mistake.
Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.
You’re joking, right?
Statistical likelihood of being murdered by your own government, during peacetime, worldwide.
i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)
i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn’t give so much of our wealth to it. Yet social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path (to the greatest extent allowed by the nation’s many “law students” who become “licensed lawyers.” What if all those law students had become STEM majors, and built better machines and technologies?) I dare say that that simple desire for an easier paycheck might be the cause of sociopathy on a grand scale. I have my own theories about this, but for a moment, never mind why.
If societies typically fall to over-parasitism, (too many looters, too few producers), we should ask ourselves what part we’re playing in that fall. If societies don’t fall entirely to over-parasitism, then what forces ameliorate parasitism?
And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn’t take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time.
But I think R. J. Rummel’s graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we’re not on the same course. Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? …Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.
But sure, the graph doesn’t mean anything if technology makes us smart enough to break free from past cycles. In that case, the warning didn’t need to be sounded as loudly as Rummel has sounded it.
...And I don’t care if the graph looks “skeevy.” That’s an ad-hominem attack that ignores the substance of the warning. I encourage you to familiarize yourself with his entire site. It contains a lot of valuable information. The more you rebel against the look and feel of the site, the more I encourage you to investigate it, and consider that you might be rebelling against the inconsequential and ignoring the substance.
Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.
There are a lot of people who really don’t understand the structure of reality, or how prevalent and how destructive sociopaths (and the conformists that they influence) are.
You know, this raises an interesting question: what would actually motivate a clinical psychopath in a position of power? Well, self-interest, right? I can see how there might be a lot of environmental disasters, defective products, poor working conditions as a result … probably also a certain amount of skullduggery would be related to this as well.
Of course, this is an example of society/economics leading a psychopath astray, rather than the other way around. Still, it might be worth pushing to have politicians etc. tested and found unfit if they’re psychopathic.
In fact, there is a blind spot in most people’s realities that’s filled by their evolutionarily-determined blindness to sociopaths.
I remain deeply suspicious of this sentence.
In fact, virtually everyone who hasn’t read Stanley Milgram’s book about it, and put in a lot of thought about its implications is in this category [...] you might not be referencing an accurate picture of political thought, political reality, political networks.
This seems reasonable, actually. I’m unclear why I should believe you know better, but we are on LessWrong.
The world still doesn’t have much of a problem with the “initiation of force” or “aggression.” (Minus a minority of enlightened libertarian dissenters.) …Especially not when it’s labeled as “majoritarian government.” ie: “Legitimized by a vote.” However, a large and growing number of people who see reality accurately (small-L libertarians) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people.
I … words fail me. I seriously cannot respond to this. Please, explain yourself, with actual reference to this supposed reality you perceive, and with the term “initiation of force” tabooed.
Talk about the prison industrial complex with anyone, and talk with someone who has family members imprisoned for a victimless crime offense.
And this is the result of … psychopaths? Human psychological blindspots evolved in response to psychopaths?
Talk with someone who knows Schaeffer Cox, (one of the many political prisoners in the USA).
Well, that’s … legitimately disturbing. Of course, it may be inaccurate, or even accurate but justified … still cause for concern.
Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.
You know, my government could be taken down with a few month’s terrorism, and has been. There are actual murderers in power here, from the ahem glorious revolution. I actually think someone who faced this sort of thing here might have a real chance of winning that fight, if they were smart.
This contributes to my vague like of american-style maintenance-of-a-well-organized-militia gun ownership, despite the immediate downsides.
And, of course, no other government is operating such attacks in Ireland, to my knowledge. I think I have a lot more to fear from organized crime than organized law, and I have a lot more unpopular political opinions than money.
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
The site appears to be explicitly talking about genocide etc. in third-world countries.
If sociopath-driven collectivism was easy for social systems to detect and neutralize, we probably wouldn’t give so much of our wealth to it. Yet, social systems repeatedly, and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path [...] societies typically fall to over-parasitism, (too many looters, too few producers), we should ask ourselves what part we’re playing in that fall.
Citation very much needed, I’m afraid. You are skirting the edge of assuming your own conclusion, which suggests it’s a large part of your worldview; am I right?
What if all those law students had become STEM majors, and built better machines and technologies?
I’m going to say “surprisingly little”. Eh, it’s worth a shot in at least a state-level trial.
If societies don’t fall entirely to over-parasitism, then what forces ameliorate parasitism?
And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn’t take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time.
Assuming “past” and “future” here are metaphorically referring to more/less advanced societies, absolutely.
But I think R. J. Rummel’s graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we’re not on the same course.
This doesn’t seem likely to fall into even the same order of magnitude as X-risks. In fact, I think the main effect would be the possible impact on reducing existential threats.
Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? …Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.
And you blame these on … psychopaths?
Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.
Hmm. Have you considered dressing better? Because those youtube documentaries are borderline unwatchable, and I am right only barely motivated enough to watch them because I would feel bad at potentially neglecting a source of info. (If they continue to consist of facts I already know and raw, unsupported declarations I will, in fact, stop watching them.)
Getting maths right is useless when youmhave got concpets wrong. Your graph throws
Liberal democracies in with authoritarian and totalitarianism regimes. From which you derive that mugasofer is AA likely to be killed by Michael Higgins as he is by Pol Pot.
Your first link (https://www.youtube.com/watch?v=MgGyvxqYSbE) both appears to be, and is, a farly typical YouTube conspiracy theory documentary that merely happens to focus on psychopaths. It was so bad I seriously considered giving up on reviewing your stuff. I strongly recommend that, whatever you do, you cease using this as your introductory point.
“The Psychology of Evil” was mildly interesting; although it didn’t contain much in the way of new data for me, it contained much that is relatively obscure. I did notice, however, that he appears to be not only anthropomorphizing but demonizingformless things. Not only are most bad things accomplished by large social forces, most things period are. It is easier for a “freethinker” to do damage than good, although obviously, considering we are on LW, I consider this a relatively minor point.
I find the identification of “people who see reality accurately” with “small-l libertarians” extremely dubious, especially when it goes completely unsupported, as if this were a background feature of reality barely worth remarking on.
Prison industrial complex link is meh; this, on the other hand, is excellent, and I may use it myself.
Schaeffer Cox is a fraud, although I can’t blame him for trying and I remain concerned about the general problem even if he is not an instance of it.
The chart remains utterly unrelated to anything you mentioned or seem particularly concerned about here.
One paper examining a sizable sample of business folk found that the percentage of sociopaths in the corporate world is 3.5 times higher than in the general population. Another study of 346 white-collar workers found that the percentage of corporate sociopaths increased as you go up the corporate ladder. That’s consistent with the reasons why politicians tend to be sociopaths: corporate leaders have lots of power over others and arguably even less need for empathy and conscience than politicians.
No. If they were, say, psycopaths, or babyeater aliens in human skins, then living their life—holding the same beliefs, experienceing the same problems—would not make you act the same way. It’s a question of terminal value differences and instrumental value differences. The former must be fought, (or at most bargained with,) but the latter can be persuaded.
So anyone who’s actions have negative consequences “deserves” Bad Things to happen to them?
I am not saying that. I was only replying to the part ”… is fundamentally flawed because it assumes that there is such a thing as a bad person”.
My point is that the distinction between “Bad Person” and “Good Person” seems … well, arbitrary. Anyone’s actions can have Bad Consequences. I guess that didn’t come across so well, huh?
This is a flaw with (ETA: simpler versions of) consequentialism: no one can accurately predict the long range consequences of their actions. But it is unreasonable to hold someone culpable, to blame them, for what they cannot predict. So the consequentialist notion of good and bad actions doesn’t translate directly into what we want from a pratical moral theory, guidance as to apportion blame and praise. This line of thinking can lead to a kind of fusion of deontology and consequentialism: we praise someone for following the rules (“as a rule, try to save a life where you can”) even if the consequences were unwelcome (“The person you saved was a mass murderer”);
What I want out of a moral theory is to know what I ought to do.
As far as blame and praise go, consequentialism with game theory tells you how to use a system of blame and praise provide good incentives for desired behavior.
Knowledge without motivation may lend itself to akrasia. It would also be useful for a moral theory to motivate us to do what we ought to do.
So you don’t want to be able to understand how punishments and rewards are morally justified—why someone ought, or not, be sent to jail?
It seems to me that judging people and sending them to jail is on the level of actions, like whether you should donate to charity. Whether someone ought to be jailed should be judged like other moral questions; does it produce good consequences or follow good rules or whatever.
I don’t think a moral theory has to have special cases built in for judging other people’s actions, and then prescribing rewards/punishments. It should describe constriants on what is right, and then let you derive individual cases like the righteusness of jail from what is right in general.
But, unless JGWeissman is a judge, the question of whether someone should go to jail is a moral question (as you seem to accept) that is not concerned with what JGWeissman ought to do.
Universalisability rides again.
The question of whether or not someone ought to go jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful. In general, I don’t want people to go to jail because jail is unpleasant, it prevents people from doing many useful things, and its dehumanizing nature can lead to people becoming more criminal. I want specific people to go jail because it prevents them from repeating their bad actions, and having jail as a predictable consequence for a well defined set of bad behaviors is an incentive for people not to execute those bad behaviors. (And I want our criminal justice system to be more efficient about this.) I don’t see why it has to be more complicated, or more fundamental, than that. Nyan is exactly right, judging other people’s actions is just another sort of action you can choose, it is not fundamentally a special case.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They’re either in jail or they are not.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
This is technically unknown, unverifiable, and seems very dubious and unlikely and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. Even if the action doesn’t directly impact them, or impacts them only in a non-obvious way.
For example, a policy of not lying, even if in this case it would save some pain, could be much more useful for increasing the odds of possible futures where yourself and people you care about lie to each other a lot less, and since lying is much more likely to be hurtful than beneficial and economies of scale apply, you might consequentially do better to prescribe yourself a no-lying policy even in this particular instance where it will be immediately negative.
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Your mental judgments are actions, in the useful sense when discussing metaethics.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone’s wallet although the money is morally neutral.
That is not a fact about morality; that is an implication of the naive consequentialist theory of morality, and one that is often used as an objection against it.
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
Did I somehow communicate that something was blocking that off? If you hadn’t said “I don’t know what you think is blocking that off.”, I’d have assumed you were perfectly agreeing with me on those points.
If you want to put your own labels on everything, then yes, that’s exactly what my theory is and that’s exactly how it works.
It just also happens to coincide that the values I happen to have include a strong component for what other people value, and the expected consequences of my actions whether I will know the consequences or not, and for the well-being of others whether I will be aware of it or not.
So yes, by your words, I’m being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.
How incredibly coincidental and curious!
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
That was meant sarcastically: so it isn’t coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea.
What is the point of that comment?
That is not obvious.
That is incomplete.
Oh, sorry. I was jumping from place to place. I’ve edited the comment, what I meant to say was:
“To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions is something that attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.”
For me, it’s a good heuristic that judgments and thoughts also count as actions when I’m thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly.
So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they’re better for.
Mu, yes, no, yes.
Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
That isn’t a reduction that can be performed by real-world agents. You are using “reduction” in the peculiar LW sense of “ultimately composed of” rather than the more usual “understandable in terms of”. For real-world agents, morality does not reduce (2) to instrumentality: they may be obliged to override their instrumental concerns in order to be moral.
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI?
I’m not sure I understand your line of reasoning for that last part of your comment.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than…
“understandable in terms of”? What do you even mean? How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
If I am morally prompted to put money in the collecting tin, I lose its instrumental value. As before, I am thinking in “near” (or “real”) mode.
Huh? I don’t think “instrumental” means “actually will work from an omniscient PoV”. What we think of as instrumental is just an approximation, and so is what we think of as moral. Given our limitations, “don’t kill unless there are serious extenuating circumstances” is both “what is considered moral now” and as instrumental as we can achieve.
I don’t see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices, and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software.
It’s what I say at the top: If I am morally prompted to put money in the collecting tin, I lose its instrumental value.
You may have been “using” in the sense of connoting, or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory).
Eg:”All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance”.
That needs tabooing. It explains “reduction” in terms of “reducing”.
“In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states.”
Says who? If the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirability. (What a world of heroin addicts desire is not necessarily what is good).
Or an algorithm that can be understood and written down, like the “description” you mention above? That is a rather important distinction.
How does that ground out? The whole point of instrumental values is that they are instrumental for something.
There’s no strong reason to think that something actually is good just because our genes say so. It’s a form of Euthyphro, as EY has noted.
If I’m parsing that right, you misunderstood my point. Sorry.
I am not trying to lose information by applying a universalizing instinct. It is fully OK, on the level of a particular moral theory, to make such judgements and prescriptions. I’m saying, though, that this is a matter of normative ethics, not metaethics.
As a matter of metaethics, I don’t think moral theories are about judging the actions of other people, or even yourself. I think they are about what you ought to do, with double emphasis on “you”. As a matter of normative ethics, I think it is terminally good to punish the evil and reward the just (though it is also instrumentally a good idea for game theory reasons), but this should not leak into metaethics.
Do you understand what I’m getting at better now?
What I ought to do is the kind of actions that attract praise. The kind of actions that attract praise are the kind that ought to be done. Those are surely different ways of saying the same thing.
Why would you differ? Maybe it’s the “double emphasis on you”. The situations in which I morally ought not do something to my advantage are where it would affect someone else. Maybe you are an ethical egoist.
Soooo...
Suppose I hypnotize all humans. All of them! And I give them all the inviolable command to always praise murder and genocide. I’m so good at hypnosis that it overrides everything else and this Law becomes a tightly-entangled part of their entire consciousnesses. However, they still hate murder and genocide, are still unhappy about their effects, etc. They just praise it, both vocally and internally and mentally. Somewhat like how many used to praise Zeus, despite most of his interactions with the world being “Rape people” and “Kill people”.
By the argument you’re giving, this would effectively hack and reprogram morality itself (gasp!) such that you should always do murder and genocide as much as possible (since they “always” praise it, without diminishing returns or habituation effects or desensitization).
Clearly this is not the same as what you ought to do.
(In this case, my first guess would be that you should revert my hypnosis and prevent me and anyone else from ever doing that again.)
For more exploration into this, suppose I’m always optimally good. Always. A perfectly optimally-morally-good human. What praise do I get? Well, some for that, some once in a while when I do something particularly heroic. Otherwise, various effects make the praise rather rare.
On the other hand, if I’m a super-sucky bad human that kills people by accident all the time (say, ten every hour on average), then each time I manage to prevent one such accident I get praise. I could optimize this and generate a much larger amount of praise with this strategy. Clearly this set of actions attracts more praise. Ought I to do this, and to seek to do it more than the previous one?
How can you hate something yet praise it internally? I’m having trouble coming up with an example.
I know a very good one, very grounded in reality, that millions if not billions of people have and do this.
Death.
No. Good acts are acts that should be praised, not acts that happen to be. I said the relationship between ought/good/praise was analytical, i.e. semantic. You don’t change that kind of relationship by re-arranging atoms.
And what’s the rule, the algorithm, then, for deciding which acts should be praised?
The only such algorithm I know of is by looking at their (expected) consequences, and checking whether the resulting possible-futures are more desirable for some set of human minds (preferably all of them) - which is a very complicated function that so far we don’t have access to and try to estimate using our intuitions.
Which seems, to me, isomorphic to praiseworthiness being an irrelevant intermediary step that just helps you form your intuitions, and points towards some form of something-close-to-what-I-would-call-“consequentialism” as the best method of judging Good and Bad, whether of past actions of oneself, or others, or of possible actions to take for oneself, or others.
Moral acts and decisions are a special category of acts and decisions, and what makes them special is the way they conceptually relate to praise and blame and obligation.
Where did I differ? I said there was a tautology-style relationship between Good and Praiseworthy, not a step-in-an-algorithm-style relationship.
But that wasn’t what you were saying before. Before, you were saying it was all about JGWeissman.
Yes. There’s a tautology-style relationship between Good and Praiseworthy. That’s almost tautological. If it’s good, it’s “worthy of praise”, because we want what’s good.
Now that we agree, how do you determine, exactly, with detailed instructions I could feed into my computer, what is “praiseworthy”?
I notice that when I ask myself this, I return to consequentialism and my own intuitions as to what I would prefer the world to be like. When I replace “praiseworthy” with “good”, I get the same output. Unfortunately, the output is rather incomplete and not fully transparent to me, so I can’t implement it into a computer program yet.
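A minimal sketch of what the skeleton of such a program might look like, assuming “praiseworthy” is cashed out as “best expected consequences among the available alternatives”; the desirability function, the part described above as not yet transparent, is left as an explicit stub, and every name here is illustrative rather than anyone’s actual proposal:

```python
from typing import Callable, Dict, List

Outcome = str  # a crude stand-in for "a possible future world-state"
Action = str

def desirability(outcome: Outcome) -> float:
    """The black box: how strongly the relevant set of human minds prefers
    this world-state. This is exactly the part that is not yet transparent
    enough to write down, so it is left unimplemented."""
    raise NotImplementedError("the unreduced intuition goes here")

def expected_value(action: Action,
                   outcome_probs: Dict[Outcome, float]) -> float:
    # Weight each possible outcome of the action by its probability.
    return sum(p * desirability(o) for o, p in outcome_probs.items())

def is_praiseworthy(action: Action,
                    model: Callable[[Action], Dict[Outcome, float]],
                    alternatives: List[Action]) -> bool:
    # Here an action counts as "praiseworthy" iff it does at least as well,
    # in expectation, as every alternative the agent could have taken.
    best_alternative = max(expected_value(a, model(a)) for a in alternatives)
    return expected_value(action, model(action)) >= best_alternative
```

Replacing “praiseworthy” with “good” changes nothing in this skeleton, which seems to be the point made above about getting the same output either way.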
I might have let some of that bleed through from other subthreads.
Doesn’t that depend on whether praise actually accomplishes getting more of the good?
Praising someone is an action, just as giving someone chocolate or money is. It would be silly to say that dieting is “chocolateworthy”, if chocolate breaks your diet.
No one can do that whatever theory they have. I don’t see how it is relevant.
Which isn’t actually computable.
I’ve never seen any proof of this. It’s also rather easy to approximate to acceptable levels of certainty:
I’ve loaded a pistol, read a manual on pistol operation that I purchased in a big bookstore that lots of people recommend, made sure myself that the pistol was in working order according to what I learned in that manual, and now I’m pointing that pistol at a glass bottle according to the instructions in the manual, and I start pulling the trigger. I expect that soon I will have to use this pistol to defend the lives of many people.
I’m rather confident that it is, in the above scenario, instrumentally useful towards bringing about worldstates where I successfully protect lives to practice rather than not practice, since the result will depend on my skills. However, you’d call this “morally neutral”, since there’s no moral good being made by the shooting of glass bottles in itself, and it isn’t exactly praiseworthy.
However, its expected consequence is that once I later decide to take an action to save lives, I will be more likely to succeed. Whether this practice is praiseworthy or not is irrelevant to me. It increases the chances of saving lives, therefore it is morally good, for me. This is according to a model of which the accuracy can be evaluated or at least estimated. And given the probability of the model’s accuracy, there is a tractable probability of lives saved.
I’m having a hard time seeing what else could be missing.
I mean there is no runnable algorithm; I can’t see how “approximations” could work, because of divergences. Any life you save could be the future killer of 10 people, one of whom is the future saviour of 100 people, one of whom is the future killer of 1000 people. Well, I do see how approximations could work: deontologically.
Neither is half of math. Many differential equations are uncomputable, and yet they are very useful. Why should a moral theory be computable?
(and “maximize expected utility” can be approximated computably, like most of those uncomputable differential equations)
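As a toy illustration of that parenthetical (the “futures” and utility numbers below are invented, not anyone’s actual model): an expectation that cannot be evaluated exactly can still be estimated by sampling, and the estimate improves as the number of samples grows.

```python
import random

def sample_future(action: str) -> float:
    # Stand-in for a world-model: draw the utility of one sampled future.
    if action == "keep the promise":
        return random.gauss(1.0, 2.0)   # usually good, occasionally costly
    return random.gauss(0.5, 3.0)       # sometimes better, higher variance

def estimated_expected_utility(action: str, n_samples: int = 10_000) -> float:
    # Monte Carlo estimate: average utility over sampled futures.
    return sum(sample_future(action) for _ in range(n_samples)) / n_samples

actions = ["keep the promise", "break the promise"]
print(max(actions, key=estimated_expected_utility))
# With enough samples this reliably picks the higher-expected-utility action,
# even though the exact expectation was never computed.
```

The analogy to numerically approximating differential equations is the same: the exact object is out of reach, but a controlled approximation is not.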
I don’t see what you’re getting at. I’ll lay out my full position to see if that helps.
First of all, there are separate concepts for metaethics and normative ethics. They are a meta-level apart, and mixing them up is like telling me that 2+2=4 when I’m asking about whether 4 is an integer.
So, given those rigidly separated mental buckets, I claim, as a matter of metaethics, that moral theories solve the problem of what ought to be done. Then, as a practical concern, the only question interesting to me is “what should I do?”, because it’s the only one I can act on. I don’t think this makes me an egoist, or in fact is any evidence at all about what I think ought to be done, because what ought to be done is a question for moral theories, not metaethics.
Then, on the level of normative ethics, i.e. looking from within a moral theory (which I’ve decided answers the question “what ought to be done”), I claim that I ought to act in such a way as achieves the “best” outcome, and if outcomes are morally identical, then the oughtness of them is identical, and I don’t care which is done. You can call this “consequentialism” if you like. Then, unpacking “best” a bit, we find all the good things like fun, happiness, freedom, life, etc.
Among the good things, we may or may not find punishing the unjust and rewarding the just. I suspect we do find it. I claim that this punishableness is not the same as the rightness that the actions of moral agents have, because it includes things like “he didn’t know any better” and “can we really expect people to...”, which I claim are not included in what makes an action right or wrong. This terminal punishableness thing is also mixed up with the instrumental concerns of incentives and game theory, which I claim are a separate problem to be solved once you’ve worked out what is terminally valuable.
So, anyways, this is all a long-winded way of saying that when deciding what to do, I hold myself to a much more demanding standard than I use when judging the actions of others.
What’s wrong with sticking with “what ought to be done” as formulation?
Meaning others shouldn’t? Your use of the “I” formulation is making your theory unclear.
They seem different to you because you are a consequentialist. Consequentialist good and bad outcomes can’t be directly translated into praiseworthiness and blameworthiness because they are too hard to predict.
I don’t see why. Do you think you are much better at making predictions?
A consequentialist considers the moral action to be the one that has good consequences.
But that means moral behaviour is to perform the acts that we anticipate to have good consequences.
And moral blame or praise is likewise assigned to people based on the consequences of their actions as they anticipated them...
So the consequentialist assigns moral blame if it was anticipated that the person saved was a mass murderer and was likely to kill multiple times again....
And how do we anticipate or project, save on the basis of relatively tractable rules?
We must indeed use rules as a matter of practical necessity, but it’s just that: a matter of practical necessity. We can’t model the entirety of our future lightcone in sufficient detail so we make generic rules like “do not lie” “do not murder” “don’t violate the rights of others” which seem to be more likely to have good consequences than the opposite.
But the good consequences are still the thing we’re striving for; obeying rules is just a means to that end, and therefore can be replaced or overridden in particular contexts where the best consequences are known to be achievable differently...
A consequentialist is perhaps a bit scarier in the sense that you don’t know if they’ll stupidly break some significant rule by using bad judgment. But a deontologist that follows rules can likewise be scary in blindly obeying a rule which you were hoping them to break.
In the case of super-intelligent agents that shared my values, I’d hope them to be consequentialists. As the agent’s intelligence decreases, there’s assurance in some limited type of deontology… “For the good of the tribe, do not murder even for the good of the tribe...”
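A toy sketch of that two-level structure, purely illustrative (the numbers and the threshold rule are invented): follow the rule by default, and override it only when a consequence estimate is both better and held with enough confidence; a sufficiently limited agent then effectively never overrides.

```python
def decide(rule_action: str,
           override_action: str,
           estimated_gain: float,       # expected utility of overriding minus following the rule
           estimate_confidence: float,  # 0.0 .. 1.0: how much the agent trusts that estimate
           agent_reliability: float) -> str:  # 0.0 .. 1.0: how good a modeller the agent is
    # A weaker modeller must be far more certain before breaking the rule.
    required_confidence = 1.0 - 0.5 * agent_reliability
    if estimated_gain > 0 and estimate_confidence >= required_confidence:
        return override_action
    return rule_action

# A highly capable agent with a confident, large-gain estimate overrides the rule:
print(decide("do not lie", "lie", estimated_gain=10.0,
             estimate_confidence=0.9, agent_reliability=0.99))  # -> lie
# A limited agent with the very same estimate sticks to the rule:
print(decide("do not lie", "lie", estimated_gain=10.0,
             estimate_confidence=0.9, agent_reliability=0.1))   # -> do not lie
```

“For the good of the tribe, do not murder even for the good of the tribe” corresponds to the low-reliability case, where the override branch is effectively unreachable.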
That’s the kind of Combination approach I was arguing for.
My understanding of pure Consequentialism is that this is exactly the approach it promotes.
Am I to understand that you’re arguing for consequentialism by rejecting “consequentialism” and calling it a “combination approach”?
That would be why he specified “simpler versions”, yes?
Yes
I agree that if what I want is a framework for assigning blame in a socially useful fashion, consequentialism violates many of our intuitions about reasonableness of such a framework.
So, sure, if the purpose of morality is to guide the apportionment of praise and blame, and we endorse those intuitions, then it follows that consequentialism is flawed relative to other models.
It’s not clear to me that either of those premises is necessary.
There’s a confusion here between consequentialistically good acts (ones that have good consequences) and consequentialistically good behaviour (acting according to your beliefs of what acts have good consequences).
People can only act according to their model of the consequences, not according to the consequences themselves.
I find your terms confusing, but yes, I agree that classifying acts is one thing and making decisions is something else, and that a consequentialist does the latter based on their expectations about the consequences, and these often get confused.
That’s not a flaw in consequentialism. It’s a flaw in judging other people’s morality.
Consequentialists (should) generally reject the idea that anyone but themselves has moral responsibility.
Judging the moral worth of others’ actions is something a moral theory should enable one to do. It’s not something you can just give up on.
So two consequentialists would decide that each of them has moral responsibility and the other doesn’t? Does that make sense? Is it intended as a reductio ad absurdum of consequentialism, or as a bullet to be bitten?
What for? It doesn’t help me achieve good things to know whether you are morally good, except to the extent that “you are morally good” makes useful predictions about your behaviour that I can use to achieve more good. And that’s a question for epistemology, not morality.
They would see it as a two-place concept instead of a one-place concept. Call them A and B. For A, A is morally responsible for everything that goes on in the world. Likewise for B. For A, the question “what is B morally responsible for” does not answer the question “what should A do”, which is the only question A is interested in.
A would agree that for B, B is morally responsible for everything, but would comment that that’s not very interesting (to A) as a moral question.
So another way of looking at it is that for this sort of consequentialist, morality is purely personal.
By extension, however, in case this corollary was lost in inferential distance:
For A, “What should A do?” may include making moral evaluations of B’s possible actions within A’s model of the world and attempting to influence them, such that A-actions that affect the actions of B can become very important.
Thus, by instrumental utility, A often should make a model of B in order to influence B’s actions on the world as much as possible, since this influence is one possible action A can take that influences A’s own moral responsibility towards the world.
Indeed. I would consider it a given that you should model the objects in your world if you want to predict and influence the world.
Because then you apportion reward and punishment where they are deserved. That is itself a Good, called “justice”.
I don’t see how that follows from consequentialism or anything else.
Then it is limited.
I get it now. I think I ought to hold myself to a higher standard than I hold other people, because it would be ridiculous to judge everyone in the world for failing to try as hard as they can to improve it, and ridiculous to let myself off with anything less than that full effort. And I take it you don’t see things this way.
It follows from the practical concern that A only gets to control the actions of A, so any question not in some way useful for determining A’s actions isn’t interesting to A.
It doesn’t follow from that that you have no interest in praise and blame.
Isn’t A interested in the actions of B and C that impinge on A?
A is interested in:
1) The state of the world. This is important information for deciding anything.
2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
“actions of B and C that impinge on A” is a subset of 1) and “giving praise and blame” is a subset of 2). “Influencing the actions of B and C” is also a subset of 2).
1) The state of the world. This is important information for deciding anything. 2) A’s possible actions, and their consequences. “Their consequences” == expected future state of the world for each action.
Or, briefly “The Union of A and not-A”
or, more briefly still:
“Everything”.
Yes, and it doesn’t follow that because I am interested in praise and blame, I must hold other people to the same standard I hold myself. I said right there in the passage you quoted that I do in fact hold other people to some standard, it’s just not the same as I use for myself.
Yes as a matter of epistemology and normative ethics, but not as a matter of metaethics.
Your metaethics treats everyone as acting but not acted on?
But some people take more actions that have Bad Consequences than others, don’t they?
Yes, but even that is subject to counter-arguments and further debate, so I think the point is in trying to find something that more appropriately describes exactly what we’re looking for.
After all, proportionality and other factors have to be taken into account. If Einstein takes more actions with Good Consequences and fewer actions with Bad Consequences than John Q. Eggfart, I don’t anticipate this to be solely because John Q. Eggfart is a Bad Person with a broken morality system. I suspect Mr. Eggfart’s IQ of 75 to have something to do with it.
I wonder if 1,000 people upvoted this comment, in series with 1,000 people voting it down. I’d like to know 1/(# of reads) or 1/(number of votes). Can we use network theory to assume that people here conform to the first-mover theory? (ie: “If a post starts getting upvoted, it then continues to be upvoted, whereas if a post starts getting downvoted or ignored, it continues to get downvoted or ignored, or at least has a greater probability of being so.”)
He also might be a sociopath with an IQ superior to Einstein’s. He also might be a John von Neumann, (successfully?) arguing in favor of nuking Russia, because he thinks that Russia is evil (correct) and that Russia is full of scientists who are almost as smart as himself (maybe correct), and because it’s logical to do so (possibly correct, but seemingly not, based on the outcome), or he might think that everyone is as logical as possible (incorrect), or he might not have empathy for those who don’t take the opportunities they’re given (who’s to say if he’s right?). In hindsight, I’m really glad the USA didn’t nuke Russia. In hindsight, I’m very glad that Von Neumann wasn’t killed in order to minimize his destructiveness, but that democracy managed to mitigate his (and Goldwater’s) destructiveness. (Goldwater was the better candidate overall, on all subjects, but his willingness to use the bomb was a fatal, grotesque, and unacceptable flaw in that otherwise “better overall.” Goldwater’s attitude towards the bomb was similar to, and seemingly informed by, von Neumann.)
I do support punishing sociopaths legally, even if they didn’t think it was wrong when they raped and murdered your wife. What the sociopath thinks doesn’t diminish the harm they knowingly caused. The legal system should be a disincentive toward actual wrong. When the legal system operates properly, it is a blessing that allows the emergence of market-based civilization. The idea of a “right” is not necessarily a deontological philosophical claim, but a legal one.
As a consequentialist, I don’t necessarily hate sociopaths. I understand why they exist, from an evolutionary perspective. …But I might still kill one if I had to, in order to serve what I anticipated to be the optimal good. I might also kill one in retaliation, because they had taken something valuable from me (such as the life of a loved one), and I wished to make it clear to them that their choice to steal from me rightfully enraged me (vengeance, punishment).
While I don’t think that (even righteous) punishment is the grandest motive, I also don’t deny others their (rightful) desires for punishment. There is a “right” and a “wrong” external to outcomes, based on philosophy that is mutually-compatible with consequentialism. If we were all submissive slaves, there would be a lot of “peace,” but I still wouldn’t likely choose such an existence over a violent but possibly more free existence.
If you mean that some people choose poorly or are simply unlucky, yes.
If you mean that some people are Evil and so take Evil actions, then … well, yes, I suppose, psychopaths. But most Bad Consequences do not reflect some inherent deformity of the soul, which is all I’m saying.
Classifying people as Bad is not helpful. Classifying people as Dangerous … is. My only objection is turning people into Evil Mutants—which the comment I originally replied to was doing. (“Bad Things are done by Bad People who deserve to be punished.”)
I’d prefer to leave “the soul” out of this.
How do you know that most bad consequences don’t involve sociopaths or their influence? It seems unlikely that that’s not the case, to me.
Also, don’t forget conformists who obey sociopaths. Franz Stangl said he felt “weak in the knees” when he was pushing gas chamber doors shut on a group of women and kids. …But he did it anyway.
Wagner gleefully killed women and kids.
Yet, we also rightfully call Stangl an evil person, and rightfully punish him, even though he was “Just following orders.” In hindsight, even his claims that the democide of over 6 million Jews and 10 million German dissidents and dissenters was solely for theft and without racist motivations, doesn’t make me want to punish him less.
In before this is downvoted to the point where discussion is curtailed.
And yet here you are arguing for Evil Mutants.
I’m aware many people who believe this don’t literally think of it in terms of the soul (if only because they don’t think about it at all), but I think it’s a good shorthand for the ideas involved.
Observing simple incompetence in the environment.
I should probably note I’m not familiar with these individuals, although the names do ring a faint bell.
Seems like evidence for my previous statements. No?
These are Nazis, yes? I wouldn’t be that surprised if some of them were “gleeful” even if they had literally no psychopaths among their ranks—unlikely from a purely statistical standpoint.
While my contrarian tendencies are screaming at me to argue this was, in fact, completely unjust … I can see some neat arguments for that …
We punished Nazis who were “just obeying orders”—and now nobody can use that excuse. Seems like a pretty classic example of punishment setting an example for others. No “they’re monsters and must suffer” required.
I’m probably more practiced at empathising with racists, and specifically Nazis (just based on your being drawn from our culture), but surely racist beliefs are a more sympathetic motivation than greed?
(At least, if we ignore the idea of bias possibly leading to racist beliefs that justify benefiting ourselves at their expense, which you are, right?)
There are a lot of people who really don’t understand the structure of reality, or how prevalent and how destructive sociopaths (and the conformists that they influence) are.
In fact, there is a blind spot in most people’s realities that’s filled by their evolutionarily-determined blindness to sociopaths. This makes them easy prey for sociopaths, especially intelligent, extreme sociopaths (total sociopathy, lack of mirror neurons, total lack of empathy, as described by Robert Hare in “Without Conscience”) with modern technology and a support network of other sociopaths.
In fact, virtually everyone who hasn’t read Stanley Milgram’s book about it, and put in a lot of thought about its implications is in this category. I’m not suggesting that you or anyone else in this conversation is “bad” or “ignorant,” but just that you might not be referencing an accurate picture of political thought, political reality, political networks.
The world still doesn’t have much of a problem with the “initiation of force” or “aggression.” (Minus a minority of enlightened libertarian dissenters.) …Especially not when it’s labeled as “majoritarian government.” ie: “Legitimized by a vote.” However, a large and growing number of people who see reality accurately (small-L libertarians) consistently denounce the initiated use of force as grossly sub-optimal, immoral, and wrong. It is immoral because it causes suffering to innocent people.
Stangl could have recognized that the murder of women and children was “too wrong to tolerate.” In fact, he did recognize this, by his comment that he felt “weak in the knees” while pushing women and children into the gas chamber. That he chose to follow “the path of compliance” “the path of obedience” and “the path of nonresistance” (all those prior paths are different ways of saying the same thing, with different emphasis on personal onus, and on the extent to which fear plays a defensible part in his decision-making).
The reason I still judge the Nazis (and their modern equivalents) harshly is because they faced significant opposition, but it was almost as wrong as they were. The Levellers innovated proper jury trials in the 1600s, and restored them by 1670, in the trial of William Penn. It wasn’t as if Austria was without its “Golden Bull” either. Instead, they chose a mindless interpretation of “the will to power.”
The rest of the world viewed Hitler as a raving madman. There were plenty of criticisms of Nazism in existence at the time of Hitler’s rise to power. Adam Smith had written “The Wealth of Nations” over a century earlier. The Federalist and Anti-Federalists were right in incredible detail again, over a century earlier.
Talk about the prison industrial complex with anyone, and talk with someone who has family members imprisoned for a victimless crime offense. Talk with someone who knows Schaeffer Cox, (one of the many political prisoners in the USA). Most people will choose not to talk to these people (to remain ignorant) because knowledge imparts onus to act morally, and stop supporting immoral systems. To meet the Jews is to activate your mirror neurons, is to empathize with them, …a dangerous thing to do when you’re meeting them standing outside of a cattle car. Your statistical likelihood of being murdered by your own government, during peacetime, worldwide.
So, is this trolling? You cite the Milgram experiment, in which the authorities did not pretend to represent the government. The prevalence and importance of non-governmental authority in real life is one of the main objections to libertarianism, especially the version you seem to promote here (right-wing libertarianism as moral principle).
I’m on a mobile device right now—I’ll go over your arguments, links, and videos in more detail later, so here are my immediate responses, nothing more.
Wait, why would evolution make us vulnerable to sociopaths? Wouldn’t patching such a weakness be an evolutionary advantage?
Wouldn’t a total lack of mirror neurons make people much harder to predict, crippling social skills?
“Ignorant” is not, and should not be, a synonym for “bad”. If you have valuable information for me, I’ll own up to it.
Those strike me as near-meaningless terms, with connotations chosen specifically so people will have a problem with them despite their vagueness.
Did you accidentally a word there? I don’t follow your point.
And clearly, they all deliberately chose the suboptimal choice, in full knowledge of their mistake.
You’re joking, right?
Statistical likelihood of being murdered by your own government, during peacetime, worldwide.
i.e. not my statistical likelihood, i.e. nice try, but no-one is going to have a visceral fear reaction and skip past their well-practiced justification (or much reaction at all, unless you can do better than that skeevy-looking graph.)
I suggest asking yourself whether the math that created that graph was correctly calculated. A bias against badly illustrated truths may be pushing you toward the embrace of falsehood.
If sociopath-driven collectivism were easy for social systems to detect and neutralize, we probably wouldn’t give so much of our wealth to it. Yet social systems repeatedly and cyclically fail for this reason, just as the USA is now, once again, proceeding down this well-worn path (to the greatest extent allowed by the nation’s many “law students” who become “licensed lawyers.” What if all those law students had become STEM majors, and built better machines and technologies?) I dare say that that simple desire for an easier paycheck might be the cause of sociopathy on a grand scale. I have my own theories about this, but for a moment, never mind why.
If societies typically fall to over-parasitism, (too many looters, too few producers), we should ask ourselves what part we’re playing in that fall. If societies don’t fall entirely to over-parasitism, then what forces ameliorate parasitism?
And, how would you know how likely you are to be killed by a system in transition? You may be right: maybe the graph doesn’t take into account changes in the future that make societies less violent and more democratic. It just averages the past results over time.
But I think R. J. Rummel’s graph makes a good point: we should look at the potential harm caused by near-existential (extreme) threats, and ask ourselves if we’re not on the same course. Have we truly eliminated the variables of over-legislation, destruction or elimination of legal protections, and consolidation of political power? …Because those things have killed a lot of people in the past, and where those things have been prevented, a lot of wealth and relative peace has been generated.
But sure, the graph doesn’t mean anything if technology makes us smart enough to break free from past cycles. In that case, the warning didn’t need to be sounded as loudly as Rummel has sounded it.
...And I don’t care if the graph looks “skeevy.” That’s an ad-hominem attack that ignores the substance of the warning. I encourage you to familiarize yourself with his entire site. It contains a lot of valuable information. The more you rebel against the look and feel of the site, the more I encourage you to investigate it, and consider that you might be rebelling against the inconsequential and ignoring the substance.
Truth can come from a poorly-dressed source, and lies can (and often do) come in slick packages.
You know, this raises an interesting question: what would actually motivate a clinical psychopath in a position of power? Well, self-interest, right? I can see how there might be a lot of environmental disasters, defective products, poor working conditions as a result … probably also a certain amount of skullduggery would be related to this as well.
Of course, this is an example of society/economics leading a psychopath astray, rather than the other way around. Still, it might be worth pushing to have politicians etc. tested and found unfit if they’re psychopathic.
I remain deeply suspicious of this sentence.
This seems reasonable, actually. I’m unclear why I should believe you know better, but we are on LessWrong.
I … words fail me. I seriously cannot respond to this. Please, explain yourself, with actual reference to this supposed reality you perceive, and with the term “initiation of force” tabooed.
And this is the result of … psychopaths? Human psychological blindspots evolved in response to psychopaths?
Well, that’s … legitimately disturbing. Of course, it may be inaccurate, or even accurate but justified … still cause for concern.
You know, my government could be taken down with a few months’ terrorism, and has been. There are actual murderers in power here, from the, ahem, glorious revolution. I actually think someone who faced this sort of thing here might have a real chance of winning that fight, if they were smart.
This contributes to my vague liking of American-style maintenance-of-a-well-organized-militia gun ownership, despite the immediate downsides.
And, of course, no other government is operating such attacks in Ireland, to my knowledge. I think I have a lot more to fear from organized crime than organized law, and I have a lot more unpopular political opinions than money.
The site appears to be explicitly talking about genocide etc. in third-world countries.
Citation very much needed, I’m afraid. You are skirting the edge of assuming your own conclusion, which suggests it’s a large part of your worldview; am I right?
I’m going to say “surprisingly little”. Eh, it’s worth a shot in at least a state-level trial.
Assuming “past” and “future” here are metaphorically referring to more/less advanced societies, absolutely.
This doesn’t seem likely to fall into even the same order of magnitude as X-risks. In fact, I think the main effect would be the possible impact on reducing existential threats.
And you blame these on … psychopaths?
Hmm. Have you considered dressing better? Because those YouTube documentaries are borderline unwatchable, and right now I am only barely motivated enough to watch them, because I would feel bad about potentially neglecting a source of info. (If they continue to consist of facts I already know and raw, unsupported declarations I will, in fact, stop watching them.)
Getting maths right is useless when you have got the concepts wrong. Your graph throws liberal democracies in with authoritarian and totalitarian regimes, from which you derive that mugasofer is as likely to be killed by Michael Higgins as he is by Pol Pot.
You’re making lots of typos these days; is there something wrong with your keyboard or something?
Having reviewed your links:
Your first link (https://www.youtube.com/watch?v=MgGyvxqYSbE) both appears to be, and is, a fairly typical YouTube conspiracy theory documentary that merely happens to focus on psychopaths. It was so bad I seriously considered giving up on reviewing your stuff. I strongly recommend that, whatever you do, you cease using this as your introductory point.
“The Psychology of Evil” was mildly interesting; although it didn’t contain much in the way of new data for me, it contained much that is relatively obscure. I did notice, however, that he appears to be not only anthropomorphizing but demonizing formless things. Not only are most bad things accomplished by large social forces, most things period are. It is easier for a “freethinker” to do damage than good, although obviously, considering we are on LW, I consider this a relatively minor point.
I find the identification of “people who see reality accurately” with “small-l libertarians” extremely dubious, especially when it goes completely unsupported, as if this were a background feature of reality barely worth remarking on.
The prison-industrial-complex link is meh; this, on the other hand, is excellent, and I may use it myself.
Schaeffer Cox is a fraud, although I can’t blame him for trying and I remain concerned about the general problem even if he is not an instance of it.
The chart remains utterly unrelated to anything you mentioned or seem particularly concerned about here.
The non-aggression principle is horribly broken.
Concern about sociopaths applies to both business and government:
http://thinkprogress.org/justice/2014/01/09/3140081/bridge-sociopathy/
double-posted