The question of whether or not someone ought to go to jail, independent of whether or not any agent ought to put them in jail, doesn’t seem very meaningful.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do. ETA: And what if agent A would jail them, and agent B would free them? They’re either in jail or they are not.
Nyan is exactly right: judging other people’s actions is just another sort of action you can choose; it is not fundamentally a special case.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways. Morality is not just decision theory. Morality is about what people ought to do. What people ought to do is the good. When something is judged good, praise and reward are given; when something is judged wrong, blame and punishment are given.
So when you said morality was about what you ought to do, you meant it was about what people in general ought to do.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious, unlikely, and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones. Even if the action doesn’t directly impact a given future, or impacts it in a non-obvious way.
For example, a policy of not lying, even if lying in this case would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less; and since lying is much more likely to be hurtful than beneficial, and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where its immediate effect is negative.
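Here is a minimal sketch, in Python, of the kind of comparison I mean. All of the numbers and names are made up for illustration; the point is only the shape of the calculation, not the particular values:

```python
# Hypothetical numbers: one lie that spares a little pain now, versus a
# known no-lying policy that shifts many future interactions.
pain_saved_by_this_lie = 1.0

# Assumed effects of the policy on future interactions (all illustrative).
n_future_interactions = 200
p_lie_would_hurt = 0.7        # lies are assumed more often hurtful than helpful
avg_harm_per_lie = 0.5
avg_benefit_per_lie = 0.2
lies_avoided_fraction = 0.3   # how much the policy reduces lying around me

# Option 1: lie this once.
expected_value_lie_now = pain_saved_by_this_lie

# Option 2: adopt the no-lying policy, paying the immediate cost.
expected_value_per_avoided_lie = (
    p_lie_would_hurt * avg_harm_per_lie
    - (1 - p_lie_would_hurt) * avg_benefit_per_lie
)
expected_value_policy = (
    -pain_saved_by_this_lie
    + n_future_interactions * lies_avoided_fraction * expected_value_per_avoided_lie
)

print(expected_value_lie_now)  # 1.0
print(expected_value_policy)   # 16.4: the policy wins despite the local loss
```

With those made-up numbers the policy dominates even though it loses in this instance; change the numbers and the conclusion can flip, which is exactly why the calculation is worth doing.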
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Your mental judgments are actions, in the useful sense when discussing metaethics.
No. It’s about what JGWeissman in general ought to do, including “JGWeissman encourages and/or forces everyone else to do X, and convinces everyone to be consequentialist and follow the same principles as JGWeissman”.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
Does that make it clearer? Prescription is just an action to take like any other. Take another step back into meta and higher-order. These discussions we’re having, convincing people, thinking in certain ways that promote certain general behaviors, are all things we individually are doing, actions that one individual consequentialist agent will evaluate in the same manner as they would evaluate “Give fish or not?”
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
But morality is not about deciding what to do next, because many actions are morally neutral, and many actions that are morally wrong are justifiable in other ways.
This is technically unknown, unverifiable, and seems very dubious, unlikely, and irrelevant to me. Unless you completely exclude transitivity and instrumentality from your entire model of the world.
Discussions of metaethics are typically pinned to sets of common-sense intuitions. It is a common-sense intuition that choosing vanilla instead of chocolate is morally neutral. It is common sense that I should not steal someone’s wallet although the money is morally neutral.
Basically, most actions I can think of will either increase or decrease the probability of a ton of possible-futures at the same time, so one would want to take actions which increase the odds of the more desirable possible futures at the expense of less desirable ones.
That is not a fact about morality; it is an implication of the naive consequentialist theory of morality, and one that is often used as an objection against it.
For example, a policy of not lying, even if lying in this case would save some pain, could be much more useful for increasing the odds of possible futures where you and the people you care about lie to each other a lot less; and since lying is much more likely to be hurtful than beneficial, and economies of scale apply, it might be consequentially better to prescribe yourself the no-lying policy even in this particular instance where its immediate effect is negative.
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
Also note that “judging something good” and “giving praise and rewards”, as well as “judging something bad” and “attributing blame and giving punishment”, are also actions to decide upon. So deciding whether to blame or to praise is a set of actions where, yes, morality is about deciding which one to do.
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Is it? That isn’t relevant to me. It isn’t relevant to interaction between people, it isn’t relevant to society as a whole, and it isn’t relevant to criminal justice. I don’t see why I should call anything so jejune “morality”.
(...)
Standard consequentialists can and do judge the actions of others to be right or wrong according to their consequences. I don’t know what you think is blocking that off.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
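To make that concrete, here is a rough sketch of how a consequentialist can fold “judge / praise / blame” into the same expected-value bookkeeping as any other action. Everything here (the clusters, the numbers, the deterrence factor) is hypothetical, just to show the mechanics:

```python
# Illustrative only: judging and blaming are just more actions to score.
past_outcomes = {
    # action cluster -> observed consequences of past examples (positive = good)
    "lying_to_spare_feelings": [-0.5, -0.8, 0.2, -0.4],
    "warning_others_of_defectors": [0.6, 0.3, 0.5],
}

def estimated_cluster_value(cluster: str) -> float:
    """Use past examples of a cluster to estimate the value of more of it."""
    outcomes = past_outcomes[cluster]
    return sum(outcomes) / len(outcomes)

def value_of_blaming(cluster: str, deterrence: float = 0.5) -> float:
    """Blaming is itself an action, scored by its assumed effect:
    blame discourages the blamed cluster by some (hypothetical) factor."""
    return -estimated_cluster_value(cluster) * deterrence

print(estimated_cluster_value("lying_to_spare_feelings"))  # negative: the cluster looks bad
print(value_of_blaming("lying_to_spare_feelings"))         # positive: blaming it looks good
```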
Did I somehow communicate that something was blocking that off? If you hadn’t said “I don’t know what you think is blocking that off.”, I’d have assumed you were perfectly agreeing with me on those points.
(...)
Or I might be able to prudently predate. Although you are using the language of consequentialism, your theory is actually egoism: you are saying that there is no sense in which I should care about people unknown to me, but instead I should just maximise the values I happen to have (thereby collapsing ethics into instrumental rationality).
If you want to put your own labels on everything, then yes, that’s exactly what my theory is and that’s exactly how it works.
It just also happens that the values I have include a strong component for what other people value, for the expected consequences of my actions whether I will know those consequences or not, and for the well-being of others whether I will be aware of it or not.
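As a hedged sketch of what I mean by that (the world-states, weights, and function names below are all hypothetical), compare an “egoist” utility function that contains a strong term for others with a straightforward aggregate-welfare function:

```python
# Toy world-states: my own welfare plus everyone else's, in arbitrary units.
world_states = [
    {"me": 5.0, "others": [1.0, 1.0, 1.0]},
    {"me": 4.0, "others": [3.0, 3.0, 3.0]},
    {"me": 6.0, "others": [0.0, 0.0, 0.0]},
]

def my_utility(state, care_for_others=0.8):
    """'Egoist' utility that happens to weight others' welfare heavily."""
    return state["me"] + care_for_others * sum(state["others"])

def aggregate_welfare(state):
    """The 'virtual' utility function: total well-being of everyone."""
    return state["me"] + sum(state["others"])

# Because my utility contains a large term for others, ranking world-states
# by my_utility mostly agrees with ranking them by aggregate_welfare.
print(max(world_states, key=my_utility))         # {'me': 4.0, 'others': [3.0, 3.0, 3.0]}
print(max(world_states, key=aggregate_welfare))  # the same state
```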
So yes, by your words, I’m being extremely egoist and just trying to maximize my own utility function alone by evaluating and calculating the consequences of my actions. It just so happens, by some incredible coincidence, that maximizing my own utility function mostly correlates with maximizing some virtual utility function that maximizes the well-being of all humans.
How incredibly coincidental and curious!
Morality is a particular kind of deciding and acting. You cannot eliminate the difference between ethics and instrumental decision theory by noting that they are both to do with acts and decisions. There is still the distinction between instrumental and moral acts, instrumental and moral decisions.
Your mental judgments are actions, in the useful sense when discussing metaethics
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant. To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.
Indeed. “Judge actions of Person X” leads to better consequences than not doing it as far as they can predict. “Judging past actions of others” is an action that can be taken. “Judging actions of empirical cluster Y” is also an action, and using past examples of actions within this cluster that were done by others as a reference for judging the overall value of actions of this cluster is an extremely useful method of determining what to do in the future (which may include “punish the idiot who did that” and “blame the person” and whatever other moral judgments are appropriate).
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
It just also happens that the values I have include a strong component for what other people value, for the expected consequences of my actions whether I will know those consequences or not, and for the well-being of others whether I will be aware of it or not.
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
How incredibly coincidental and curious!
That was meant sarcastically: so it isn’t coincidence. So something makes egoism systematically coincide with c-ism. What? I really have no idea.
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
Indeed. And when you take a step back, it is more moral to act instrumentally than to act as if the instrumental value of actions were irrelevant.
That is not obvious.
To return to your previous words, I believe you’ll agree that someone who
That is incomplete.
Oh, sorry. I was jumping from place to place. I’ve edited the comment, what I meant to say was:
“To return to your previous words, I believe you’ll agree that someone who acts in a manner that instrumentally encourages others to take morally good actions attracts praise, and I think this also means it’s more moral.
I would extend this such that all instrumentally-useful-towards-moral-things actions (that are also expected to give this result and done for this reason) be called “morally good” themselves.”
Your mental judgments are actions, in the useful sense when discussing metaethics
What is the point of that comment?
For me, it’s a good heuristic that judgments and thoughts also count as actions when I’m thinking of metaethics, because thinking that something is good or judging an action as bad will influence how I act in the future indirectly.
So a good metaethics has to also be able to tell which kinds of thoughts and judgments are good or bad, and what methods and algorithms of making judgments are better, and what / who they’re better for.
The point being what? That moral judgments have an instrumental value? That they don’t have a moral value? That morality collapses into instrumentality?
Mu, yes, no, yes.
Moral judgments are instrumentally valuable for bringing about more morally-good behavior. Therefore they have moral value in that they bring about more expected moral good. Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
(...)
Yes, but the idiosyncratic disposition of your values doesn’t make egoism into standard c-ism.
I suppose. The wikipedia page for Consequentialism seems to suggest that a significant portion of consequentialism takes a view very similar to this.
Moral good can be reduced to instrumental things that bring about worldstates that are considered better, and the “considered better” is a function executed by human brains, a function that is how it is because it was more instrumental than other functions (i.e. by selection effects).
That isn’t a reduction that can be performed by real-world agents. You are using “reduction” in the peculiar LW sense of “ultimately composed of” rather than the more usual “understandable in terms of”. For real-world agents, morality does not reduce (in the second sense) to instrumentality: they may be obliged to override their instrumental concerns in order to be moral.
they may be obliged to override their instrumental concerns in order to be moral.
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people? Should this override the instrumental part where the computer program in question is an unsafe paperclip-maximizing AGI?
I’m not sure I understand your line of reasoning for that last part of your comment.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than…
“understandable in terms of”? What do you even mean? How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states. The desirability of a world-state is a black-box process that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates, the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
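In code, the shape of that algorithm might look something like the sketch below. The `desirability` function is a stand-in for the black-box comparison I describe, and the actions, world-states, and probabilities are entirely hypothetical:

```python
# Sketch: pick the action whose expected world-state desirability is highest.
def desirability(world_state: str) -> float:
    """Stand-in for the opaque 'compare to ideal world-states' process."""
    scores = {"friend_helped": 2.0, "status_quo": 0.0, "friend_harmed": -3.0}
    return scores[world_state]

# Each candidate action maps to a distribution over resulting world-states.
actions = {
    "tell_hard_truth": {"friend_helped": 0.6, "friend_harmed": 0.3, "status_quo": 0.1},
    "say_nothing": {"status_quo": 0.9, "friend_harmed": 0.1},
}

def expected_desirability(outcome_probs: dict) -> float:
    return sum(p * desirability(state) for state, p in outcome_probs.items())

best_action = max(actions, key=lambda a: expected_desirability(actions[a]))
print(best_action)  # "tell_hard_truth": the most instrumental, hence "most moral", option here
```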
Errh, could you reduce/taboo/refactor “instrumental concerns” here?
If I am morally prompted to put money in the collecting tin, I lose its instrumental value. As before, I am thinking in “near” (or “real”) mode.
If I act in an instrumentally-moral manner, I bring about more total moral good than if I act in a manner that is just “considered moral now” but would result in lots of moral bad later.
Huh? I don’t think “instrumental” means “actually will work from an omniscient PoV”. What we think of as instrumental is just an approximation, and so is what we think of as moral. Given our limitations, “don’t kill unless there are serious extenuating circumstances” is both “what is considered moral now” and as instrumental as we can achieve.
One weird example here is making computer programs. Isn’t it rather a moral good to make computer programs that are useful to at least some people?
I don’t see why. Is it moral for trees to grow fruit that people can eat? Morality involves choices, and it involves ends. You can choose to drive a nail in with a hammer, or to kill someone with it. Likewise software.
I’m not sure I understand your line of reasoning for that last part of your comment.
It’s what I say at the top: if I am morally prompted to put money in the collecting tin, I lose its instrumental value.
On another note, I agree that I was using “reduction” in the sense of describing a system according to its ultimate elements and rules, rather than...
You may have been “using” it in the sense of connoting or intending that, but you cannot have been using it in the sense of denoting or referencing that, since no such reduction exists (in the sense that a reduction of heat to molecular motion exists as a theory).
“understandable in terms of”? What do you even mean?
E.g.: “All the phenomena associated with heat are understandable in terms of the disorganised motion of the molecules making up a substance”.
How is this substantially different? The wikipedia article’s “an approach to understanding the nature of complex things by reducing them to the interactions of their parts” definition seems close to the sense LW uses.
That needs tabooing. It explains “reduction” in terms of “reducing”.
“In the real world, my only algorithm for evaluating morality is the instrumentality of something towards bringing about more desirable world-states.”
Says who? If the non-cognitivists are right, you have an inaccessible black-box source of moral insights. If the opponents of hedonism are right, morality cannot be conceptually equated with desirability. (What a world of heroin addicts desire is not necessarily what is good.)
The desirability of a world-state is a black-box process
Or an algorithm that can be understood and written down, like the “description” you mention above? That is a rather important distinction.
that compares the world-state to “ideal” world-states in an abstract manner, where the ideal worldstates are those most instrumental towards having more instrumental worldstates,
How does that ground out? The whole point of instrumental values is that they are instrumental for something.
the recursive stack being most easily described as “worldstates that these genetics prefer, given that these genetics prefer worldstates where more of these genetics exist, given that these genetics have (historically) caused worldstates that these genetics preferred”, etc. etc. and then you get the standard Evolution Theory statements.
There’s no strong reason to think that something actually is good just because our genes say so. It’s a form of Euthyphro, as EY has noted.