Possibly because your moral values arose from a process that was almost exactly the same for other individuals.
Why describe them as subjective when they are intersubjective?
I’m not sure why it should be necessary for a moral value to regulate behaviour across individuals in order to be valid.
It would be necessary for them to be moral values and not something else, like aesthetic values. Because morality is largely to regulate interactions between individuals. That’s its job. Aesthetics is there to make things beautiful, logic is there to work things out...
morality is largely to regulate interactions between individuals. That’s its job.
I don’t want to get into a discussion of this, but if there’s an essay-length-or-less explanation you can point to somewhere of why I ought to believe this, I’d be interested.
But, as I say, I don’t want to get into a discussion of this.
I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it’s important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.
I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it’s important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.
Well, the law compels those who aren’t compelled by exhortation. But laws need justification.
Nope. An agent without a value system would have no purpose in creating a moral system. An agent with one might find it intrinsically valuable, but I personally don’t. I do find it instrumentally valuable.
Laws are justified because subjective desires are inherently justified because they’re inherently motivational. Many people reverse the burden of proof, but in the real world it’s your logic that has to justify itself to your values rather than your values that have to justify themselves to your logic. That’s the way we’re designed and there’s no getting around it. I prefer it that way and that’s its own justification. Abstract lies which make me happy are better than truths that make me sad because the concept of better itself mandates that it be so.
Clarification: From the perspective of a minority, the laws are unjustified. Or, they’re justified, but still undesirable. I’m not sure which. Justification is an awkward paradigm to work within because you haven’t proven that the concept makes sense and you haven’t precisely defined the concept.
They’re justified in that there are no arguments which adequately refute them, and that they’re motivated to take those actions. There are no arguments which can refute one’s motivations because facts can only influence values via values. Motivations are what determine actions taken, not facts. That is why perfectly rational agents with identical knowledge but different values would respond differently to certain data. If a babykiller learned about a baby they would eat it, if I learned about a baby I would give it a hug.
In terms of framing, it might help you understand my perspective if you try not to think of it in terms of past atrocities. Think in terms of something more neutral. The majority wants to make a giant statue out of purple bubblegum, but the minority wants to make a statue out of blue cotton candy, for example.
Lack of counterargument is not justification, nor is motivation from some possibly irrational source.
In utilitarian terms, motivation is not “I’m motivated today!”. The utilitarian meaning of motivation is that a program which displays “Hello World!” on a computer screen has as its (exclusive) motivation the exact process which makes it display those words. The motivation of this program is imperative and ridiculously simple and very blunt—it’s the pattern we’ve built into the computer to do certain things when it gets certain electronic inputs.
Motivations are those core things which literally cause actions, whether it’s a simple reflex built into the nervous system which always causes some jolt of movement whenever a certain thing happens (such as being hit on the knee) or a very complex value system sending interfering signals within trillions of cells causing a giant animal to move one way or another depending on the resulting outcome.
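To make that stripped-down sense of “motivation” concrete, here is a toy sketch; the function names and the reflex mapping are only illustrative, not anything from the comment above:

```python
# A toy illustration of "motivation" in the stripped-down sense described above:
# whatever built-in pattern causes the action is the motivation, nothing more.

def hello_world_program():
    # This program's entire "motivation": the process that prints these words.
    print("Hello World!")

def knee_reflex(stimulus):
    # A hard-wired stimulus-response mapping, like the built-in reflex above:
    # the mapping itself is all that is needed to cause the movement.
    return "jerk leg" if stimulus == "tap on knee" else "do nothing"

hello_world_program()
print(knee_reflex("tap on knee"))   # jerk leg
```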
Motivation is the only thing that causes actions, it’s the only thing that it makes sense to talk about in reference to prescriptive statements. Why do you define motivation as irrational? At worst, it should be arational. Even then, I see motivation as its own justification and indeed the ultimate source of all justifications for belief in truth, etc. Until you can solve every paradox ever, you need to either embrace nihilism or embrace subjective value as the foundation of justification.
The majority verdict isn’t moral justification because morality is subjective. But for people within the majority, their decision makes sense. If I were in the community, I would do what they do. I believe that it would be morally right for me to do so. Values are the only source of morality that there is.
Motivation is the only thing that causes actions, it’s the only thing that it makes sense to talk about in reference to prescriptive statements.
That doesn’t follow. If it is the only thing that causes actions, then it is relevant to why, as a matter of fact, people do what they do—but that is description, not prescription. Prescription requires extra ingredients.
Why do you define motivation as irrational?
I said that as a matter of fact it is not necessarily rational. My grounds are that you can’t always explain your motivations on a rational basis.
Even then, I see motivation as its own justification and indeed the ultimate source of all justifications for belief in truth, etc.
It may be the source of caring about truth and rationality. That does not make it the source of truth and rationality.
Until you can solve every paradox ever, you need to either embrace nihilism or embrace subjective value as the foundation of justification.
That doesn’t follow. I could embrace non-evaluative intuitions, for instance.
The majority verdict isn’t moral justification because morality is subjective.
Subjective morality cannot justify laws that apply to everybody.
But for people within the majority, their decision makes sense.
It may make sense as a set of personal preferences, but that doesn’t justify it being binding on others.
If I were in the community, I would do what they do.
Then you would have colluded with atrocities in other historical societies.
Values are the only source of morality that there is.
That doesn’t follow. If it is the only thing that causes actions, then it is relevant to why, as a matter of fact, people do what they do—but that is description, not prescription. Prescription requires extra ingredients.
In that case, prescription is impossible. Your system can’t handle the is-ought problem.
I said that as a matter of fact it is not necessarily rational. My grounds are that you can’t always explain your motivations on a rational basis.
Values are rationality-neutral. If you don’t view motivations and values as identical, explain why not.
That doesn’t follow. I could embrace non-evaluative intuitions, for instance.
These intuitions are violated by paradoxes such as the problem of induction or the fact that logical justification is infinitely regressive (turtles all the way down). Your choice is nihilism or an arbitrary starting point, but logic isn’t a valid option.
Subjective morality cannot justify laws that apply to everybody.
Sure. Technically this is false if everyone is the same or very similar but I’ll let that slide. Why does this invalidate subjective morality?
It may make sense as a set of personal preferences, but that doesn’t justify it being binding on others.
Why would I be motivated by someone else’s preferences? The only thing relevant to my decision is me and my preferences. The fact that this decision affects other people is irrelevant, literally every decision affects other people.
Then you would have colluded with atrocities in other historical societies.
In that case, prescription is impossible. Your system can’t handle the is-ought problem.
Something is not impossible just because it requires extra ingredients.
Values are rationality-neutral. If you don’t view motivations and values as identical, explain why not.
I don’t care about the difference between irrational and arational, they’re both non-rational.
These intuitions are violated by paradoxes such as the problem of induction or the fact that logical justification is infinitely regressive (turtles all the way down).
Grounding out in an intuition that can’t be justified is no worse than grounding out in a value that can’t be justified.
Your choice is nihilism or an arbitrary starting point, but logic isn’t a valid option.
You are (trying to) use logic right now. How come it works for you?
Why does this invalidate subjective morality?
Because morality needs to be able to tell people why they should not always act on their first-order impulses.
Why would I be motivated by someone else’s preferences?
I didn’t say you should. If you have morality as a higher-order preference, you can be persuaded to override some of your first-order preferences in favour of morality, which is not subjective, and therefore not just someone else’s values.
The only thing relevant to my decision is me and my preferences.
You’ve admitted that preferences can include empathy. They can include respect for universalisable moral principles too. “My preferences” does not have to equate to “selfish preferences”.
The fact that this decision affects other people is irrelevant, literally every decision affects other people.
How does choosing vanilla over chocolate chip affect other people?
I value human life, you are wrong.
You need to make up your mind whether you value human life more or less than going along with the majority.
Something is not impossible just because it requires extra ingredients.
How do you generate moral principles that conflict with desire? How do you justify moral principles that don’t spring from desire? Why would anyone adopt these moral principles or care what they have to say? How do you overcome the is-ought gap?
Give me a specific example of an objective system that you think is valid and that overcomes the is-ought gap.
Because morality needs to be able to tell people why they should not always act on their first-order impulses.
Mine can do that. Some impulses contradict other values. Some values outweigh others. Sometimes you make sacrifices now for later gains.
I don’t know why you believe morality needs to be able to restrict impulses, either. Morality is a guide to action. If that guide to action is identical to your inherent first-order impulses, all the better for you.
I didn’t say you should. If you have morality as a higher-order preference, you can be persuaded to override some of your first-order preferences in favour of morality, which is not subjective, and therefore not just someone else’s values.
Let me rephrase. How can you generate motivational force from abstract principles? Why does morality matter if it has nothing to do with our values?
You’ve admitted that preferences can include empathy. They can include respect for universalisable moral principles too. “My preferences” does not have to equate to “selfish preferences”.
Your preferences might include this, yes. I think that would be a weird thing to have built in your preferences and that you should consider self-modifying it out. Regardless, that would be justifying a belief in a universalisable moral principle through subjective principles. You’re trying to justify that belief through nothing but logic, because that is the only way you can characterize your system as truly objective.
How does choosing vanilla over chocolate chip affect other people?
There are fewer vanilla chips for other people. It affects your diet, which affects the way you will behave. It will increase your happiness if you value vanilla chips more than chocolate ones. If someone values your happiness, they will be happy you ate vanilla chips. If someone hates when you’re happy, they will be sad.
You need to make up your mind whether you value human life more or less than going along with the majority.
I don’t value going along with the majority in and of itself. If I’m a member of the majority and I have certain values then I would act on those values, but my status as a member of the majority wouldn’t be relevant to morality.
That claim needs justification.
Sure. Pain and pleasure and value are the roots of morality. They exist only in internal experiences. My pain and your pleasure are not interchangeable because there is no big Calculating utility god in the sky to aggregate the content of our experiences. Experience is always individual and internal and value can’t exist outside of experience and morality can’t exist outside of value. The parts of your brain that make you value certain experiences are not connected to the parts of my brain that make me value certain experiences, which means the fact that your experiences aren’t mine is sufficient to refute the idea that your experiences would or should somehow motivate me in and of themselves.
How do you generate moral principles that conflict with desire?
Did you notice my references to “first order” and “higher order”?
How do you overcome the is-ought gap?
By using rational-should as an intermediate.
Mine can do that. Some impulses contradict other values. Some values outweigh others. Sometimes you make sacrifices now for later gains.
Sometimes you need to follow impersonal, universalisable,...maybe even objective...moral reasoning?
I don’t know why you believe morality needs to be able to restrict impulses,
I don’t know why you think “do what thou wilt” is morality. It would be like having a system of logic that can prove any claim.
either. Morality is a guide to action. If that guide to action is identical to your inherent first-order impulses, all the better for you.
“All the better for me” does not mean “optimal morality”. The job of logic is not to prove everything I happen to believe, and the job of morality is not to confirm all my impulses.
Let me rephrase. How can you generate motivational force from abstract principles?
Some people value reason, and the rest have value systems tweaked by the threat of punishment.
Why does morality matter if it has nothing to do with our values?
You think no one values morality?
Your preferences might include this, yes. I think that would be a weird thing to have built in your preferences and that you should consider self-modifying it out.
What’s weird? Empathy? Morality? Rationality?
You’re trying to justify that belief through nothing but logic, because that is the only way you can characterize your system as truly objective.
You say that like it’s a bad thing.
There are fewer vanilla chips for other people
Not necessarily. There might be a surplus.
But if you want to say that everything affects others, albeit to a tiny extent, then it follows that everything is a tiny bit moral.
I don’t value going along with the majority in and of itself.
You previously made some statements that sounded a lot like that.
Sure. Pain and pleasure and value are the roots of morality.
That statement needs some justification. Is it better to do good things voluntarily, or because you are forced to?
Experience is always individual and internal and value can’t exist outside of experience and morality can’t exist outside of value.
OK, I thought it was something like that. The thing is that subjects can have values which are inherently interpersonal and even objective...things like empathy and rationality. So “value held by a subject” does not imply “selfish value”.
The parts of your brain that make you value certain experiences are not connected to the parts of my brain that make me value certain experiences, which means the fact that your experiences aren’t mine is sufficient to refute the idea that your experiences would or should somehow motivate me in and of themselves.
Yet again, objective morality is not a case of one subject being motivated by another subject’s values. Objectivity is not achieved by swapping subjects.
Did you notice my references to “first order” and “higher order”?
This is a black box. Explain what they mean and how you generate the connection between the two.
By using rational-should as an intermediate.
You claim that a rational-should exists. Prove it.
Sometimes you need to follow impersonal, universalisable,...maybe even objective...moral reasoning?
Using objective principles as a tool to evaluate tradeoffs between subjective values is not the same as using objective principles to produce moral truths.
I don’t know why you think “do what thou wilt” is morality. It would be like having a system of logic that can prove any claim.
That’s not my definition of morality, it’s the conclusion I end up with. Your analogy doesn’t seem valid to me because I don’t conclude that all moral claims are equal but that all desires are good. Repressing desires or failing to achieve desires is bad. Additionally, it’s clear to me why a logical system that can prove anything is bad, but why would a moral system that did the same be invalid?
“All the better for me” does not mean “optimal morality”. The job of logic is not to prove everything I happen to believe, and the job of morality is not to confirm all my impulses.
I agree. I didn’t claim either of those things. Morality doesn’t have a job outside of distinguishing between right and wrong.
What’s weird? Empathy? Morality? Rationality?
The idea that all principles you act upon must be universalizable. It’s bad because individuals are different and should act differently. The principle I defend is a universalizable one, that individuals should do what they want. The difference between mine and yours is that mine is broad and all people are happy when it’s applied to their case, but yours is narrow and exclusive and egocentric because it neglects differences in individual values, or holds those differences to be morally irrelevant.
Not necessarily. There might be a surplus.
But if you want to say that everything affects others, albeit to a tiny extent, then it follows that everything is a tiny bit moral.
Subtraction, have you heard of it?
Some things are neutral even though they effect others.
That statement needs some justification. Is it better to do good things voluntarily, or because you are forced to?
Voluntarily, because that means you’re acting on your values.
OK, I thought it was something like that. The thing is that subjects can have values which are inherently interpersonal and even objective...things like empathy and rationality. So “value held by a subject” does not imply “selfish value”.
If I valued rationality, why would that result in specific moral decrees? Value held by a subject doesn’t imply selfish value, but it does imply that the values of others are only relevant to my morality insofar as I empathize with those others.
Yet again, objective morality is not a case of one subject being motivated by another subject’s values. Objectivity is not achieved by swapping subjects.
“Objectivity” in ethics is achieved by abandoning individual values and beliefs and trying to produce statements which would be valued and believed by everyone. That’s stupid because we can never escape the locus of the self and because morality emerges from internal processes and neglecting those internal processes means that there is zero foundation for any sort of morality. I’m saying that morality is only accessible internally, and that the things which produce morality are internal subjective beliefs.
If you continue to disagree, I suggest we start over. Let me know and I’ll post an argument that I used last year in debate. I feel like starting over would clarify things a lot because we’re getting muddled down in a line-by-line back-and-forth hyperspecific conversation here.
This is a black box. Explain what [first order and higher order] mean and how you generate the connection between the two.
Usual meaning in this type of discussion.
You claim that a rational-should exists. Prove it.
If I can prove anything to you, you are already running on rational-should.
Using objective principles as a tool to evaluate tradeoffs between subjective values is not the same as using objective principles to produce moral truths.
Why not?
That’s not my definition of morality, it’s the conclusion I end up with.
That doesn’t help. It’s not morality whether it’s assumed or concluded.
The idea that all principles you act upon must be universalizable [is weird]
It’s bad because individuals are different and should act differently.
Individuals are different and would act differently. You are arguing as though people should never do anything unless it is morally obligated, as though moral rules are all-encompassing. I never said that. Morality does not need to determine every action any more than civil law does.
The principle I defend is a universalizable one, that individuals should do what they want.
That isn’t universalisable because you don’t want to be murdered.
The correct form is “individuals should do what they want unless it harms another”.
The difference between mine and yours is that mine is broad and all people are happy when it’s applied to their case,
We don’t have it.
If people wanted your principle, they would abolish all laws.
but yours is narrow and exclusive and egocentric
!!!
If I valued rationality, why would that result in specific moral decrees?
Look at examples of people arguing about morality.
ETA: Better restrict that to liberals.
There’s plenty about, even on this site.
Value held by a subject doesn’t imply selfish value, but it does imply that the values of others are only relevant to my morality insofar as I empathize with those others.
Nope. Rationality too.
“Objectivity” in ethics is achieved by abandoning individual values and beliefs
Of course not. It is a perfectly acceptable principle that people should be allowed to realise their values so long as they do not harm others. Where do you get these ideas?
and trying to produce statements which would be valued and believed by everyone.
Just everyone rational. The police are there for a reason.
That’s stupid because we can never escape the locus of the self and because morality emerges from internal processes
Yet again: we can internally value what is objective and impartial. “In me” doesn’t imply “for me”.
and neglecting those internal processes means that there is zero foundation for any sort of morality.
“Neglect” is your straw man.
I’m saying that morality is only accessible internally, and that the things which produce morality are internal subjective beliefs.
Yet again: “In me” doesn’t imply “for me”.
If you continue to disagree, I suggest we start over. Let me know and I’ll post an argument that I used last year in debate. I feel like starting over would clarify things a lot because we’re getting muddled down in a line-by-line back-and-forth hyperspecific conversation here.
Laws are justified because subjective desires are inherently justified because they’re inherently motivational.
What you need to justify is imprisoning someone for offending against values they don’t necessarily subscribe to. That you are motivated by your values, and the criminal by theirs, doesn’t give you the right to jail them.
“Morality” is indeed being used to regulate individuals by some individuals or groups. When I think of morality, however, I think “greater total utility over multiple agents, whose value systems (utility functions) may vary”. Morality seems largely about taking actions and making decisions which achieve greater utility.
I do this, except I only use my own utility and not other agents. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don’t inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.
I think most people become utilitarians instead of egoists because they empathize with other people, while never seeing the fact that to the extent that this empathy moves them it is their own value and within their own utility function. They then build the abstract moral theory of utilitarianism to formalize their intuitions about this, but because they’ve overlooked the egoist intermediary step the model is slightly off and sometimes leads to conclusions which contradict egoist impulses or egoist conclusions.
Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system).
In my view there’s a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn’t relevant because humans tend to empathize with each other and humans have a very close cluster of values so there are lots of common interests.
My computer won’t load the website because it’s apparently having issues with flash, can you please summarize? If you’re just making a distinction between yourself and your beliefs, sure, I’ll concede that. I was a bit sloppy with my terminology there.
It’s not “My beliefs” either. “Justification is the reason why someone (properly) holds the belief, the explanation as to why the belief is a true one, or an account of how one knows what one knows.”
Okay. I think I’ve explained the justification then. Specific moral systems aren’t necessarily interchangeable from person to person, but they can still be explained and justified in a general sense. “My values tell me X, therefore X is moral” is the form of justification that I’ve been defending.
No I don’t. I need to be stronger than the people who want to murder me, or to live in a society that deters murder. If someone wants to murder me, it’s probably not the best strategy to start trying to convince them that they’re being immoral.
You’re making an argumentum ad consequentiam. You don’t decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards. Just because you don’t like the type of system that morality leads to overall doesn’t mean that you’re justified in ignoring other moral arguments.
The benefit of my system is that it’s right for me to murder people if I want to murder them. This means I can do things like self defense or killing Nazis and pedophiles with minimal moral damage. This isn’t a reason to support my system, but it is kind of neat.
No I don’t. I need to be stronger than the people who want to murder me,
That’s giving up on morality, not defending subjective morality.
or to live in a society that deters murder.
Same problem. That’s either group morality or non morality.
If someone wants to murder me, it’s probably not the best strategy to start trying to convince them that they’re being immoral.
I didn’t say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer reasons for people not to act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.
You’re making an argumentum ad consequentiam. You don’t decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards.
I am starting with what kind of morality it would be adequate to have. If you can’t bang in a nail with it, it isn’t a hammer.
Just because you don’t like the type of system that morality leads to overall
Where on earth did I say that?
The benefit of my system is that it’s right for me to murder people if I want to murder them.
That’s not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for “I can do whatever I want regardless”.
This means I can do things like self defense
Justifiable self defense is not murder. You seem to have confused ethical objectivism (morality is not just personal preference) with ethical absolutism (moral principles have no exceptions). Read yer wikipedia!
That’s giving up on morality, not defending subjective morality.
Morality is a guide for your own actions, not a guide for getting people to do what you want.
Same problem. That’s either group morality or non morality.
Rational self interested individuals decide to create a police force.
Arguments ad consequentiam are still invalid.
I didn’t say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer reasons for people not to act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.
Sure, but morality needs to have motivational force or it’s useless and stupid. Why should I care? Why should the burglar? If you’re going to keep insisting that morality is what’s preventing people from doing evil things, you need to explain how your account of morality overrules inherent motivation and desire, and why it’s justified in doing that.
I am starting with what kind of morality it would be adequate to have. If you can’t bang in a nail with it, it isn’t a hammer.
This is not how metaethics works. You don’t get to start with a predefined notion of adequate. That’s the opposite of objectivity. By neglecting metaethics, you’re defending a model that’s just as subjective as mine, except that you don’t acknowledge that and you seek to vilify those who don’t share your preferences.
Where on earth did I say that?
You’re arguing that subjective morality can’t be right because it would lead to conclusions you find undesirable, like random murders.
That’s not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for “I can do whatever I want regardless”.
Stop muddling the debate with unjustified assumptions about what morality is for. If you want to talk about something else, fine. My definition of morality is that morality is what tells individuals what they should and should not do. That’s all I intend to talk about.
You’ve conceded numerous things in this conversation, also. I’m done arguing with you because you’re ignoring any point that you find inconvenient to your position and because you haven’t shown that you’re rational enough to escape your dogma.
No, it is largely about regulating interactions such as rape, theft and murder.
not a guide for getting people to do what you want.
I never said morality is to make others do what I want. That is a persistent straw man on your part.
Rational self interested individuals decide to create a police force.
So?
Arguments ad consequentiam are still invalid.
“It’s not a hammer if it can’t bang in a nail” isn’t invalid.
Sure, but morality needs to have motivational force or it’s useless and stupid. Why should I care?
If you are rational you will care about rationality-based morality. If you are not...what are you doing on LW?
Why should the burglar? If you’re going to keep insisting that morality is what’s preventing people from doing evil things, you need to explain how your account of morality overrules inherent motivation and desire, and why it’s justified in doing that.
The motivation to be rational is a motivation. I didn’t say non-motivations override motivations. Higher order and lower order, remember.
This is not how metaethics works. You don’t get to start with a predefined notion of adequate.
Why not? I can see a priori what would make a hammer adequate.
You’re arguing that subjective morality can’t be right because it would lead to conclusions you find undesirable, like random murders.
Conclusions that just about anyone would find undesirable. Objection to random murder is not some weird peccadillo of mine.
Stop muddling the debate with unjustified assumptions about what morality is for.
My definition of morality is that morality is what tells individuals what they should and should not do.
What’s the difference? If you should not commit a murder (your definition), then a potential interaction has been regulated (my version).
You’ve conceded numerous things in this conversation, also. I’m done arguing with you because you’re ignoring any point that you find inconvenient to your position
Please list them.
and because you haven’t shown that you’re rational enough to escape your dogma.
No, it is largely about regulating interactions such as rape, theft and murder.
This is a subset of my possible individual actions. Every interaction is an action.
Morality is not political, which is what you’re making it into. Morality is about right and wrong, and that’s all.
I never said morality is to make others do what I want. That is a persistent straw man on your part.
You’re using morality for more than individual actions. Therefore, you’re using it for other people’s actions, for persuading them to do what you want to do. Otherwise, your attempt to distinguish your view from mine fails.
“It’s not a hammer if it can’t bang in a nail” isn’t invalid.
Then you’re using a different definition of morality which has more constraints than my definition. My definition is that morality is anything that tells an individual which actions should or should not be taken, and that no other requirements are necessary for morality to exist. If your conception of morality guides individual actions as well, but also has additional requirements, I’m contending that your additional requirements have no valid metaphysical foundation.
The motivation to be rational is a motivation. I didn’t say non-motivations override motivations. Higher order and lower order, remember.
Rationality is not a motivation, it is value-neutral.
Why not? I can see a priori what would make a hammer adequate.
You can start with a predefined notion of adequate, but only if you justify it explicitly.
What moral system do you defend? How does rationality result in moral principles? Can you give me an example?
Conclusions that just about anyone would find undesirable. Objection to random murder is not some weird peccadillo of mine.
Not relevant. People are stupid. Arguments ad consequentiam are logically invalid. Use Wikipedia if you doubt this.
If your assumptions were justified, I missed it. Please justify them for me.
What’s the difference? If you should not commit a murder (your definition), then a potential interaction has been regulated (my version).
Our definitions overlap in some instances but aren’t identical. You add constraints, such as the idea that any moral system which justifies murder is not a valid moral system. Yours is also narrower than mine because mine holds that morality exists even in the context of wholly isolated individuals, whereas yours says morality is about interpersonal interactions.
Please list them.
I was mistaken because I hadn’t seen your other comment. I read the comments out of order. My apologies.
What dogma?
You’re arguing from definitions instead of showing the reasoning process which starts with rational principles and ends up with moral principles.
It is not rational to decide actions which are interactions on the basis of the preferences of one party alone.
Morality is not political
Weren’t you saying that the majority decide what is moral?
You’re using morality for more than individual actions.
Aren’t you?
Therefore, you’re using it for other people’s actions, for persuading them to do what you want to do.
Everybody is using it for their and everybody else’s actions. I play no central role.
If your conception of morality guides individual actions as well, but also has additional requirements, I’m contending that your additional requirements have no valid metaphysical foundation.
That depends on whether or not your “individual actions” include interactions. If they do, the interests of the other parties need to be taken into account.
Rationality is not a motivation, it is value-neutral.
How does anyone end up rational if no one is motivated to be? Are you quite sure you haven’t confused
“rationality is value-neutral because you don’t get any values out of it that you don’t put into it”
with
“No one would ever value rationality”?
You can start with a predefined notion of adequate, but only if you justify it explicitly.
I don’t have to justify common definitions.
What moral system do you defend?
Where did I say I was defending one? I said subjectivism doesn’t work.
Arguments ad consequentiam are logically invalid.
You cannot logically conclude that something exists in objective reality because you like its consequences.
But morality doesn’t exist in objective reality. It is a human creation, and humans are entitled to reject versions of it that don’t work because they don’t work.
If your assumptions were justified, I missed it. Please justify them for me.
The burden is on you to explain how your definition “morality is about right and wrong” is different from mine: “morality is about the regulation of conduct”.
Our definitions overlap in some instances but aren’t identical. You add constraints, such as the idea that any moral system which justifies murder is not a valid moral system.
It obviously isn’t. If our definitions differ, mine is right.
Yours is also narrower than mine because mine holds that morality exists even in the context of wholly isolated individuals, whereas yours says morality is about interpersonal interactions.
I said “largely”.
You’re arguing from definitions
You say that like it’s a bad thing.
instead of showing the reasoning process which starts with rational principles and ends up with moral principles.
Why would I need to do that to show that subjectivism is wrong?
Your usage of the words “subjective” and “objective” is confusing.
Utilitarianism doesn’t forbid that each individual person (agent) have different things they value (utility functions). As such, there is no universal specific simple rule that can apply to all possible agents to maximize “morality” (total sum utility).
It is “objective” in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also “objective” in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.
However, it is also “subjective” in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don’t, but that’s a theoretical nitpick).
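As a minimal sketch of that “objective” aggregation over subjective utility functions, reusing the statue example from earlier in the thread; the agents and numbers are invented for illustration:

```python
# Every agent has its own (subjective) utility function over outcomes; the
# total-utilitarian choice is whichever outcome maximizes the sum.

utility_functions = {
    "alice": lambda o: {"bubblegum_statue": 10, "cotton_candy_statue": 2}[o],
    "bob":   lambda o: {"bubblegum_statue": 1,  "cotton_candy_statue": 9}[o],
    "carol": lambda o: {"bubblegum_statue": 6,  "cotton_candy_statue": 5}[o],
}

outcomes = ["bubblegum_statue", "cotton_candy_statue"]

def total_utility(outcome):
    # The same sum no matter who computes it, even though each term is subjective.
    return sum(u(outcome) for u in utility_functions.values())

best = max(outcomes, key=total_utility)
print(best, total_utility(best))   # bubblegum_statue 17
```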
Utilitarianism alone doesn’t apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that’s not what it’s there for, AFAIK.
I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.
Things start getting interesting when not only are some values implemented as variable-weight within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.
I’m currently convinced that there’s at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions currently built into the human brain, and that clever hacking of this module might be very beneficial in the long run.
Utilitarianism alone doesn’t apply to, persuade, or justify any action that affects values to anyone else. It can be abused as such, but that’s not what it’s there for,
Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?
No form of the official theory in the papers I read, at the very least.
Many applications or implementations of utilitarianism or utilitarian (-like) systems do, however, enforce rules that if one agent’s weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is what is right to do. The margin’s size and specific numbers and uncertainty values will vary by system.
I’ve never seen a system that would enforce such rules without a weighting function of some kind for the utilities, to correct for limited information, uncertainty and diminishing-returns-like problems.
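A hedged sketch of what such a rule might look like; the weights and the margin are free parameters a real system would have to choose, not anything a specific published theory prescribes:

```python
# One agent's weighted utility loss is accepted only when the weighted gain to
# the other agents exceeds it by a significant margin.

def sacrifice_is_right(loss, gains, loss_weight=1.0, gain_weight=1.0, margin=1.5):
    """loss: utility lost by the one agent; gains: utility gained by each other agent."""
    weighted_loss = loss_weight * loss
    weighted_gain = gain_weight * sum(gains)
    return weighted_gain >= margin * weighted_loss

print(sacrifice_is_right(loss=10, gains=[4, 4, 4]))   # False: 12 < 1.5 * 10
print(sacrifice_is_right(loss=10, gains=[8, 8, 8]))   # True: 24 >= 15
```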
No form of the official theory in the papers I read, at the very least.
Many applications or implementations of utilitarianism or utilitarian (-like) systems do, however, enforce rules that if one agent’s weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is what is right to do. The margin’s size and specific numbers and uncertainty values will vary by system.
It seems to me that these two paragraphs contradict each other. Do you think the “he should” means something different to “it is right for him to do so”?
No, they don’t have any major differences in utilitarian systems.
It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.
Certain ways to apply those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of one individual.
If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I’ve said in that second paragraph in the grandparent (or something similar). This latter (the philosophy) is historically what appeared first, and is also what’s presented on Wikipedia’s page on utilitarianism.
I share this view. When I appear to forfeit some utility in favor of someone else, it’s because I’m actually maximizing my own utility by deriving some from the knowledge that I’m improving the utility of other agents.
Other agents’ utility functions and values are not directly valued, at least not among humans. Some (most?) of us just do indirectly value improving the value and utility of other agents, either as an instrumental step or a terminal value. Because of this, I believe most people who have/profess the belief of an “innate goodness of humanity” are mind-projecting their own value-of-others’-utility.
Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It’s also possible that this is a purely environment-learned value that becomes “terminal” in the process of being trained into the brain’s reward centers due to its instrumental value in many situations.
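A small sketch of this view, formalizing apparent altruism as an empathy term inside one’s own utility function; the weight and the payoffs are invented numbers, not anything specified above:

```python
# Others' utility only enters my decision through my own valuation of it.

def my_utility(outcome, empathy_weight=0.5):
    return outcome["my_payoff"] + empathy_weight * sum(outcome["others_payoffs"])

keep  = {"my_payoff": 10, "others_payoffs": [0, 0]}
share = {"my_payoff": 6,  "others_payoffs": [5, 5]}

print(my_utility(keep))    # 10.0
print(my_utility(share))   # 11.0 -> "forfeiting" payoff still maximizes my own utility
```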
Because morality is largely to regulate interactions between individuals. That’s its job.
You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.
Morality is a useful tool to regulate interactions between individuals. There are efforts to make it a better tool for that purpose. That does not mean that morality should be used to regulate interactions.
If I put a hammer under a table to keep the table from wobbling, am I using a tool or not? If the hammer is the only object within range that is the right size for the table, and there is no task which requires a weighted lever, is the hammer intended to balance the table simply by virtue of being the best tool for the job?
Fit-for-task is a different quality than purpose. Hammers are useful tools to drive nails, but poor tools for determining what nails should be driven. There are many nails that should not be driven, despite the presence of hammers.
If I put a hammer under a table to keep the table from wobbling, am I using a tool or not?
If you can’t bang in nails with it, it isn’t a hammer. What else you can do with it isn’t relevant.
There are many nails that should not be driven, despite the presence of hammers.
???
So we can judge things morally wrong, because we have a tool to do the job, but we shouldn’t in many cases, because...? (And what kind of “shouldn’t” is that?)
If you can’t bang in nails with it, it isn’t a hammer. What else you can do with it isn’t relevant.
By that, the absence of nails makes the weighted lever not a hammer. I think that hammerness is intrinsic and not based on the presence of nails; likewise morality can exist when there is only one active moral agent.
The metaphor was that you could, in principle, drive nails literally everywhere you can see, including in your brain. Will you agree that one should not drive nails literally everywhere, but only in select locations, using the right type of nail for the right location? If you don’t, this part of the conversation is not salvageable.
What is that supposed to be analogous to? If you have a workable system of ethics, then it doesn’t make judgments willy-nilly, any more than a workable system of logic allows quodlibet.
The metaphor was that you could, in principle, make rules and laws for literally any possible action, including living. Will you agree that one should not make fixed rules for literally all actions, but only for select high-negative-impact ones, using the right type of rule for the right action?
(Edited for explicit analogy.)
Basically, the fact that you have a morality (hammer) that happens to be convenient for making laws and rules of interactions (balancing the table) doesn’t mean that morality is necessarily the best and intended tool for making rules, or that morality itself tells you what you should make laws about, or even that you should make laws in the first place.
Moral rules and legal laws aren’t the same thing. Modern societies don’t legislate against adultery, although they may consider it against the moral rules.
If you are going to override a moral rule (i.e. neither punish nor even disapprove of an action), what would you override it in favour of? What would count more?
I would refuse to allow moral judgement on things which lie outside of the realm of appropriate morality. Modern societies don’t legislate against adultery because consensual sex is amoral. Using moral guidelines to determine which people are allowed to have consensual sex is like using a hammer to open a window.
I don’t see where I’ve implied that one would override a moral rule. What I’m saying is that most current moral systems are not good enough to even make rational rules about some types of actions in the first place, and that in the long run we would regret doing so after doing some metaethics.
Uncertainty and the lack of reliability of our own minds and decision systems are key points of the above.
Why describe them as subjective when they are intersubjective?
Because they’re not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans? And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?
I have warning lights that there’s an argument about definitions here.
Because they’re not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans?
That would make them not-objective. Subjective and intersubjective remain as options.
And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?
Then, again, why would anyone else be beholden to my values?
Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.
If one posits that by working together we can achieve a utopia where each individual’s values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others’ values, would it not follow that it’s in everyone’s best interests for everyone to build and follow such models?
The free-loader problem is an obvious downside of the above simplification, but that and other issues don’t seem to be part of the present discussion.
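As a minimal illustration of the game-theoretic claim, here is a toy iterated prisoner’s dilemma tournament in which a conditionally cooperative strategy (tit-for-tat) accumulates more payoff than unconditional defection; the strategies and payoff numbers are standard textbook assumptions rather than anything specified in this thread:

```python
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's previous move.
    return their_history[-1] if their_history else "C"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a = [], [], 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFFS[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for strat_b in strategies.values():
        totals[name_a] += play(strat_a, strat_b)

print(totals)   # tit_for_tat: 799, always_defect: 404
```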
Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.
That doesn’t make them beholden (obligated). They can opt not to play that game. They can opt not to value winning.
If one posits that by working together we can achieve a utopia where each individual’s values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others’ values, would it not follow that it’s in everyone’s best interests for everyone to build and follow such models?
Only if they achieve satisfaction for individuals better than their behaving selfishly. A utopia that is better on average or in total need not be better for everyone individually.
Could you taboo “beholden” in that first sentence? I’m not sure the “feeling of moral duty born from guilt” I associate with the word “obligated” is quite what you have in mind.
They can opt not to play that game. They can opt not to value winning.
Within context, you cannot opt to not value winning. If you wanted to “not win”, and the preferred course of action is to “not win”, this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.
In other words, you just didn’t truly value what you thought you valued, but some other thing instead, and you end up having in fact won at your objective of not winning that sub-game within your overarching game of opting to play the game or not (the decision to opt to play the game or not is itself a separate higher-tier game, which you have won by deciding to not-win the lower-tier game).
A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.
(sorry if I’m arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent’s utility to its maximum possible value if there exists a maximum for that agent’s function)
I wouldn’t let my values be changed if doing so would thwart my current values. I think you’re contending that the utopia would satisfy my current values better than the status quo would, though.
In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don’t have very strong ones but I think they’re in here somewhere and for some things). You would call this a hidden utility function; I don’t think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior, in that sense I think it can be a much stronger argument.
Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.
Within context, you cannot opt to not value winning. If you wanted to “not win”, and the preferred course of action is to “not win”, this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.
Games emerge where people have things other people value. If someone doesn’t value those sorts of things, they are not going to game-play.
A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.
I don’t see where higher-tier functions come in.
You are assuming that a utopia will maximise everyone’s value individually AND that values diverge. That’s a tall order.
Why describe them as subjective when they are intersubjective?
It would be necessary for them to be moral values and not something else, like aesthetic values. Because morality is largely to regulate interactions between individuals. That’s its job. Aesthetics is there to make things beautiful, logic is there to work things out...
I don’t want to get into a discussion of this, but if there’s an essay-length-or-less explanation you can point to somewhere of why I ought to believe this, I’d be interested.
I don’t see that “morality is largely to regulate interactions between individuals” is contentious. Did you have another job in mind for it?
Well, since you ask: identifying right actions.
But, as I say, I don’t want to get into a discussion of this.
I certainly agree with you that if there exists some thing whose purpose is to regulate interactions between individuals, then it’s important that that thing be compelling to all (or at least most) of the individuals whose interactions it is intended to regulate.
Is that an end in itself?
Well, the law compels those who aren’t compelled by exhortation. But laws need justification.
Not for me, no.
Is regulating interactions between individuals an end in itself?
Do you think it is pointless? Do you think it is a prelude to something else?
I think identifying right actions can be, among other things, a prelude to acting rightly.
Is regulating interactions between individuals an end in itself?
What does that concept even mean? Are you asking if there’s a moral obligation to improve one’s own understanding of morality?
The justification for laws can be a combination of pragmatism and the values of the majority.
Does it serve a purpose by itself? Judging actions to be right or wrong is usually the prelude to handing out praise and blame, reward and punishment.
If the values of the majority aren’t justified, how does that justify laws?
Also, sometimes it’s a prelude to acting rightly and not acting wrongly.
Nope. An agent without a value system would have no purpose in creating a moral system. An agent with one might find it intrinsically valuable, but I personally don’t. I do find it instrumentally valuable.
Laws are justified because subjective desires are inherently justified because they’re inherently motivational. Many people reverse the burden of proof, but in the real world it’s your logic that has to justify itself to your values rather than your values that have to justify themselves to your logic. That’s the way we’re designed and there’s no getting around it. I prefer it that way and that’s its own justification. Abstract lies which make me happy are better than truths that make me sad because the concept of better itself mandates that it be so.
Clarification: From the perspective of a minority, the laws are unjustified. Or, they’re justified, but still undesirable. I’m not sure which. Justification is an awkward paradigm to work within because you haven’t proven that the concept makes sense and you haven’t precisely defined the concept.
Proof is a strong form of justification. If I don’t have justification, you don’t have proof.
Why would the majority regard them as justified just because they happen to have them?
They’re justified in that there are no arguments which adequately refute them, and that they’re motivated to take those actions. There are no arguments which can refute one’s motivations because facts can only influence values via values. Motivations are what determine actions taken, not facts. That is why perfectly rational agents with identical knowledge but different values would respond differently to certain data. If a babykiller learned about a baby they would eat it, if I learned about a baby I would give it a hug.
In terms of framing, it might help you understand my perspective if you try not to think of it in terms of past atrocities. Think in terms of something more neutral. The majority wants to make a giant statue out of purple bubblegum, but the minority wants to make a statue out of blue cotton candy, for example.
Well, it’s like proof, but weaker.
Lack of counterargument is not justification, nor is motivation from some possible irraitonal source.
Or the majority want to shoot all left-handed people, for example. Majority verdict isn't even close to moral justification.
In utilitarian terms, motivation is not “I’m motivated today!”. The utilitarian meaning of motivation is that a program which displays “Hello World!” on a computer screen has for (exclusive) motivation to do the exact process which makes it display those words. The motivation of this program is imperative and ridiculously simple and very blunt—it’s the pattern we’ve built into the computer to do certain things when it gets certain electronic inputs.
Motivations are those core things which literally cause actions, whether it’s a simple reflex built into the nervous system which always causes some jolt of movement whenever a certain thing happens (such as being hit on the knee) or a very complex value system sending interfering signals within trillions of cells causing a giant animal to move one way or another depending on the resulting outcome.
I know.
Motivation is the only thing that causes actions; it's the only thing that it makes sense to talk about in reference to prescriptive statements. Why do you define motivation as irrational? At worst, it should be arational. Even then, I see motivation as its own justification and indeed the ultimate source of all justifications for belief in truth, etc. Until you can solve every paradox ever, you need to either embrace nihilism or embrace subjective value as the foundation of justification.
The majority verdict isn’t moral justification because morality is subjective. But for people within the majority, their decision makes sense. If I were in the community, I would do what they do. I believe that it would be morally right for me to do so. Values are the only source of morality that there is.
That doesn’t follow. If it is the only thing that causes actions, then it is relevant to why, as a matter of fact, people do what they do—but that is description, not prescription. Prescription requires extra ingredients.
I said that as a matter of fact it is not necessarily rational. My grounds are that you can't always explain your motivations on a rational basis.
It may be the source of caring about truth and rationality. That does not make it the source of truth and rationality.
That doesn't follow. I could embrace non-evaluative intuitions, for instance.
Subjective morality cannot justify laws that apply to everybody.
It may make sense as a set of personal preferences, but that doesn't justify it being binding on others.
Then you would have colluded with atrocities in other historical societies.
Individual values do not sum to group morality.
In that case, prescription is impossible. Your system can’t handle the is-ought problem.
Values are rationality-neutral. If you don't view motivations and values as identical, explain why.
These intuitions are violated by paradoxes such as the problem of induction or the fact that logical justification is infinitely regressive (turtles all the way down). Your choice is nihilism or an arbitrary starting point, but logic isn’t a valid option.
Sure. Technically this is false if everyone is the same or very similar but I’ll let that slide. Why does this invalidate subjective morality?
Why would I be motivated by someone else's preferences? The only thing relevant to my decision is me and my preferences. The fact that this decision affects other people is irrelevant; literally every decision affects other people.
I value human life, you are wrong.
Group morality does not exist.
Something is not impossible just because it requires extra ingredients.
I don't care about the difference between irrational and arational; they're both non-rational.
Grounding out in an intuition that can't be justified is no worse than grounding out in a value that can't be justified.
You are (trying to) use logic right now. How come it works for you?
Because morality needs to be able to tell people why they should not always act on their first-order impulses.
I didn't say you should. If you have morality as a higher-order preference, you can be persuaded to override some of your first-order preferences in favour of morality. Which is not subjective, and therefore not just someone else's values.
You've admitted that preferences can include empathy. They can include respect for universalisable moral principles too. "My preferences" does not have to equate to "selfish preferences".
How does choosing vanilla over chocolate chip affect other people?
You need to make up your mind whether you value human life more or less than going along with the majority.
That claim needs justification.
How do you generate moral principles that conflict with desire? How do you justify moral principles that don’t spring from desire? Why would anyone adopt these moral principles or care what they have to say? How do you overcome the is-ought gap?
Give me a specific example of an objective system that you think is valid and that overcomes the is-ought gap.
Mine can do that. Some impulses contradict other values. Some values outweigh others. Sometimes you make sacrifices now for later gains.
I don’t know why you believe morality needs to be able to restrict impulses, either. Morality is a guide to action. If that guide to action is identical to your inherent first-order impulses, all the better for you.
Let me rephrase. How can you generate motivational force from abstract principles? Why does morality matter if it has nothing to do with our values?
Your preferences might include this, yes. I think that would be a weird thing to have built in your preferences and that you should consider self-modifying it out. Regardless, that would be justifying a belief in a universalisable moral principle through subjective principles. You’re trying to justify that belief through nothing but logic, because that is the only way you can characterize your system as truly objective.
There are fewer vanilla chips for other people. It affects your diet, which affects the way you will behave. It will increase your happiness if you value vanilla chips more than chocolate ones. If someone values your happiness, they will be happy you ate vanilla chips. If someone hates when you're happy, they will be sad.
I don’t value going along with the majority in and of itself. If I’m a member of the majority and I have certain values then I would act on those values, but my status as a member of the majority wouldn’t be relevant to morality.
Sure. Pain and pleasure and value are the roots of morality. They exist only in internal experiences. My pain and your pleasure are not interchangeable because there is no big Calculating utility god in the sky to aggregate the content of our experiences. Experience is always individual and internal and value can’t exist outside of experience and morality can’t exist outside of value. The parts of your brain that make you value certain experiences are not connected to the parts of my brain that make me value certain experiences, which means the fact that your experiences aren’t mine is sufficient to refute the idea that your experiences would or should somehow motivate me in and of themselves.
Did you notice my references to "first order" and "higher order"?
By using rational-should as an intermediate.
Sometimes you need to follow impersonal, universalisable... maybe even objective... moral reasoning?
I don't know why you think "do what thou wilt" is morality. It would be like having a system of logic that can prove any claim.
“All the better for me” does not mean “optimal morality”. The job of logic is not to prove everything I happen to believe, and the job of morality is not to confirm all my impulses.
Some people value reason, and the rest have value systems tweaked by the threat of punishment.
You think no one values morality?
What's weird? Empathy? Morality? Rationality?
You say that like it's a bad thing.
Not necessarily. There might be a surplus.
But if you want to say that everything affects others, albeit to a tiny extent, then it follows that everything is a tiny bit moral.
You previously made some statements that sounded a lot like that.
That statement needs some justification. Is it better to do good things voluntarily, or because you are forced to?
OK, I thought it was something like that. The thing is that subjects can have values which are inherently interpersonal and even objective... things like empathy and rationality. So "value held by a subject" does not imply "selfish value".
Yet again, objective morality is not a case of one subject being motivated by another subject's values. Objectivity is not achieved by swapping subjects.
This is a black box. Explain what they mean and how you generate the connection between the two.
You claim that a rational-should exists. Prove it.
Using objective principles as a tool to evaluate tradeoffs between subjective values is not the same as using objective principles to produce moral truths.
That's not my definition of morality, it's the conclusion I end up with. Your analogy doesn't seem valid to me because I don't conclude that all moral claims are equal but that all desires are good. Repressing desires or failing to achieve desires is bad. Additionally, it's clear to me why a logical system that proves everything is bad, but why would a moral system that did the same be invalid?
I agree. I didn’t claim either of those things. Morality doesn’t have a job outside of distinguishing between right and wrong.
The idea that all principles you act upon must be universalizable. It's bad because individuals are different and should act differently. The principle I defend is a universalizable one, that individuals should do what they want. The difference between mine and yours is that mine is broad and all people are happy when it's applied to their case, but yours is narrow and exclusive and egocentric because it neglects differences in individual values, or holds those differences to be morally irrelevant.
Subtraction, have you heard of it?
Some things are neutral even though they affect others.
Voluntarily, because that means you’re acting on your values.
If I valued rationality, why would that result in specific moral decrees? Value held by a subject doesn’t imply selfish value, but it does imply that the values of others are only relevant to my morality insofar as I empathize with those others.
“Objectivity” in ethics is achieved by abandoning individual values and beliefs and trying to produce statements which would be valued and believed by everyone. That’s stupid because we can never escape the locus of the self and because morality emerges from internal processes and neglecting those internal processes means that there is zero foundation for any sort of morality. I’m saying that morality is only accessible internally, and that the things which produce morality are internal subjective beliefs.
If you continue to disagree, I suggest we start over. Let me know and I'll post an argument that I used last year in debate. I feel like starting over would clarify things a lot because we're getting bogged down in a hyperspecific line-by-line back-and-forth here.
Usual meaning in this type of discussion.
If I can prove anything to you, you are already running on rational_should.
Why not?
That doesn't help. It's not morality whether it's assumed or concluded.
Individuals are different and would act differently. You are arguing as though people should never do anything unless it is morally obligated, as though moral rules are all-encompassing. I never said that. Morality does not need to determine every action any more than civil law does.
That isn't universalisable because you don't want to be murdered. The correct form is "individuals should do what they want unless it harms another".
We don't have it. If people wanted your principle, they would abolish all laws.
!!!
Look at examples of people arguing about morality.
ETA: Better restrict that to liberals.
There’s plenty about, even on this site.
Nope. Rationality too.
Of course not. It is a perfectly acceptable principle that people should be allowed to realise their values so long as they do not harm others. Where do you get these ideas?
Just everyone rational. The police are there for a reason.
Yet again: we can internally value what is objective and impartial. “In me” doesn’t imply “for me”.
“Neglect” is your straw man.
Yet again: “In me” doesn’t imply “for me”.
If you like.
I don’t want to spend any more time on this. I’m done.
What you need to justify is imprisoning someone for offending against values they don't necessarily subscribe to. That you are motivated by your values, and the criminal by theirs, doesn't give you the right to jail them.
I actually see that as counter-intuitive.
“Morality” is indeed being used to regulate individuals by some individuals or groups. When I think of morality, however, I think “greater total utility over multiple agents, whose value systems (utility functions) may vary”. Morality seems largely about taking actions and making decisions which achieve greater utility.
I do this, except I only use my own utility and not other agents. For me, outside of empathy, I have no more reason to help other people achieve their values than I do to help the Babyeaters eat babies. The utility functions of others don’t inherently connect to my motivational states, and grafting the values of others onto my decision calculus seems weird.
I think most people become utilitarians instead of egoists because they empathize with other people, while never seeing the fact that to the extent that this empathy moves them it is their own value and within their own utility function. They then build the abstract moral theory of utilitarianism to formalize their intuitions about this, but because they’ve overlooked the egoist intermediary step the model is slightly off and sometimes leads to conclusions which contradict egoist impulses or egoist conclusions.
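To make that last point concrete, here is a minimal sketch in Python (the action names, numbers, and empathy weight are illustrative assumptions, not anything stated above) of how straight utilitarian aggregation and the "egoist with empathy" model can come apart:

    ACTIONS = {
        # action: (my_raw_utility, other_person_raw_utility)
        "help_other": (1, 10),
        "help_self":  (4, 0),
    }

    def utilitarian_score(action):
        mine, theirs = ACTIONS[action]
        return mine + theirs                      # sum over both agents

    def egoist_with_empathy_score(action, empathy=0.2):
        mine, theirs = ACTIONS[action]
        return mine + empathy * theirs            # others count only via my empathy term

    print(max(ACTIONS, key=utilitarian_score))          # help_other: 1 + 10 beats 4 + 0
    print(max(ACTIONS, key=egoist_with_empathy_score))  # help_self: 4.0 beats 1 + 0.2 * 10 = 3.0

When the empathy weight is below 1, the two models can recommend different actions, which is the kind of mismatch being described.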
Or they adopt utilitarianism, or some other non-subjective system, because they value having a moral system that can apply to, persuade, and justify itself to others. (Or in short: they value having a moral system).
In my view there’s a difference between having a moral system (defined as something that tells you what is right and what is wrong) and having a system that you use to justify yourself to others. That difference generally isn’t relevant because humans tend to empathize with each other and humans have a very close cluster of values so there are lots of common interests.
It's nothing about justifying "myself".
My computer won’t load the website because it’s apparently having issues with flash, can you please summarize? If you’re just making a distinction between yourself and your beliefs, sure, I’ll concede that. I was a bit sloppy with my terminology there.
It's not "my beliefs" either. "Justification is the reason why someone (properly) holds the belief, the explanation as to why the belief is a true one, or an account of how one knows what one knows."
Okay. I think I’ve explained the justification then. Specific moral systems aren’t necessarily interchangeable from person to person, but they can still be explained and justified in a general sense. “My values tell me X, therefore X is moral” is the form of justification that I’ve been defending.
Yet again, you run into the problem that you need it to be wrong for other people to murder you, which you can’t justify with your values alone.
No I don’t. I need to be stronger than the people who want to murder me, or to live in a society that deters murder. If someone wants to murder me, it’s probably not the best strategy to start trying to convince them that they’re being immoral.
You're making an argumentum ad consequentiam. You don't decide metaethical issues by deciding what kind of morality it would be ideal to have and then working backwards. Just because you don't like the type of system that morality leads to overall doesn't mean that you're justified in ignoring other moral arguments.
The benefit of my system is that it’s right for me to murder people if I want to murder them. This means I can do things like self defense or killing Nazis and pedophiles with minimal moral damage. This isn’t a reason to support my system, but it is kind of neat.
That’s giving up on morality not defending subjective morality.
Same problem. That’s either group morality or non morality.
I didn't say it was the best practical strategy. The moral and the practical are different things. I am saying that for morality to be what it is, it needs to offer reasons for people to not act on some of their first-order values. That morality is not legality or brute force or a magic spell is not relevant.
I am starting with what kind of morality it would be adequate to have. If you can't bang in a nail with it, it isn't a hammer.
Where on earth did I say that?
That's not a benefit, because murder is just the sort of thing morality is supposed to condemn. Hammers are for nails, not screws, and morality is not for "I can do whatever I want regardless".
Justifiable self-defense is not murder. You seem to have confused ethical objectivism (morality is not just personal preference) with ethical absolutism (moral principles have no exceptions). Read yer Wikipedia!
Morality is a guide for your own actions, not a guide for getting people to do what you want.
Rational self interested individuals decide to create a police force.
Arguments ad consequentiam are still invalid.
Sure, but morality needs to have motivational force or it's useless and stupid. Why should I care? Why should the burglar? If you're going to keep insisting that morality is what's preventing people from doing evil things, you need to explain how your accounting of morality overrules inherent motivation and desire, and why it's justified in doing that.
This is not how metaethics works. You don’t get to start with a predefined notion of adequate. That’s the opposite of objectivity. By neglecting metaethics, you’re defending a model that’s just as subjective as mine, except that you don’t acknowledge that and you seek to vilify those who don’t share your preferences.
You’re arguing that subjective morality can’t be right because it would lead to conclusions you find undesirable, like random murders.
Stop muddling the debate with unjustified assumptions about what morality is for. If you want to talk about something else, fine. My definition of morality is that morality is what tells individuals what they should and should not do. That’s all I intend to talk about.
You’ve conceded numerous things in this conversation, also. I’m done arguing with you because you’re ignoring any point that you find inconvenient to your position and because you haven’t shown that you’re rational enough to escape your dogma.
No, it is largely about regulating interactions such as rape, theft, and murder.
I never said morality is to make others do what I want. That is a persistent straw man on your part.
So?
"It's not a hammer if it can't bang in a nail" isn't invalid.
If you are rational you will care about rationality-based morality. If you are not... what are you doing on LW?
The motivation to be rational is a motivation. I didn’t say non-motivations override motivations. Higher order and lower order, remember.
Why not? I can see a priori what would make a hammer adequate.
Conclusions that just about anyone would find undesirable. Objection to random murder is not some weird peccadillo of mine.
Calling something unjustified doesn't prove anything.
What's the difference? If you should not commit a murder (your definition), then a potential interaction has been regulated (my version).
Please list them.
What dogma?
This is a subset of my possible individual actions. Every interaction is an action.
Morality is not political, which is what you’re making it into. Morality is about right and wrong, and that’s all.
You’re using morality for more than individual actions. Therefore, you’re using it for other people’s actions, for persuading them to do what you want to do. Otherwise, your attempt to distinguish your view from mine fails.
Then you’re using a different definition of morality which has more constraints than my definition. My definition is that morality is anything that tells an individual which actions should or should not be taken, and that no other requirements are necessary for morality to exist. If your conception of morality guides individual actions as well, but also has additional requirements, I’m contending that your additional requirements have no valid metaphysical foundation.
Rationality is not a motivation; it is value-neutral.
You can start with a predefined notion of adequate, but only if you justify it explicitly.
What moral system do you defend? How does rationality result in moral principles? Can you give me an example?
Not relevant. People are stupid. Arguments ad consequentiam are logically invalid. Use Wikipedia if you doubt this.
If your assumptions were justified, I missed it. Please justify them for me.
Our definitions overlap in some instances but aren’t identical. You add constraints, such as the idea that any moral system which justifies murder is not a valid moral system. Yours is also narrower than mine because mine holds that morality exists even in the context of wholly isolated individuals, whereas yours says morality is about interpersonal interactions.
I was mistaken because I hadn’t seen your other comment. I read the comments out of order. My apologies.
You’re arguing from definitions instead of showing the reasoning process which starts with rational principles and ends up with moral principles.
It is not rational to decide actions which are interactions on the preferences of one party alone.
Weren’t you saying that the majority decide what is moral?
Aren't you?
Everybody is using it for their own and everybody else's actions. I play no central role.
That depends on whether or not your "individual actions" include interactions. If they do, the interests of the other parties need to be taken into account.
How does anyone end up rational if no one is motivated to be? Are you quite sure you haven't confused
"rationality is value-neutral because you don't get any values out of it that you don't put into it"
with
“No one would ever value rationality”
I don't have to justify common definitions.
Where did I say I was defending one? I said subjectivism doesn't work.
You cannot logically conclude that something exists in objective reality because you like its consequences. But morality doesn't exist in objective reality. It is a human creation, and humans are entitled to reject versions of it that don't work because they don't work.
The burden is on you to explain how your definition "morality is about right and wrong" is different from mine: "morality is about the regulation of conduct".
It obviously isn't. If our definitions differ, mine is right.
I said “largely”.
You say that like it's a bad thing.
Why would I need to do that to show that subjectivism is wrong?
I don’t want to spend any more time on this. I’m done.
Your usage of the words “subjective” and “objective” is confusing.
Utilitarianism doesn’t forbid that each individual person (agent) have different things they value (utility functions). As such, there is no universal specific simple rule that can apply to all possible agents to maximize “morality” (total sum utility).
It is “objective” in the sense that if you know all the utility functions, and try to achieve the maximum possible total utility, this is the best thing to do from an external standpoint. It is also “objective” in the sense that when your own utility is maximized, that is the best possible thing that you could have, regardless of whatever anyone might think about it.
However, it is also “subjective” in the sense that each individual can have their own utility function, and it can be whatever you could imagine. There are no restrictions in utilitarianism itself. My utility is not your utility, unless your utility function has a component that values my utility and you have full knowledge of my utility (or even if you don’t, but that’s a theoretical nitpick).
Utilitarianism alone doesn't apply to, persuade, or justify any action that affects anyone else's values. It can be abused as such, but that's not what it's there for, AFAIK.
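As a rough illustration of the point that my utility is not your utility unless your function explicitly includes a component valuing mine, here is a minimal sketch in Python (all function names, the empathy weight, and the toy "world" are hypothetical):

    def utility_a(world):
        # Agent A only cares about the bubblegum statue.
        return world["bubblegum_statue"]

    def utility_b(world, empathy_weight=0.5):
        # Agent B cares about the cotton candy statue, plus a term for A's utility,
        # but only because B's own function happens to include such a term.
        return world["cotton_candy_statue"] + empathy_weight * utility_a(world)

    def total_utility(world):
        # The "external standpoint": sum the agents' separate functions.
        return utility_a(world) + utility_b(world)

    world = {"bubblegum_statue": 3, "cotton_candy_statue": 1}
    print(utility_a(world), utility_b(world), total_utility(world))  # 3, 2.5, 5.5

Remove the empathy term and nothing in B's function tracks A's utility at all, which is the "subjective" half of the picture described above.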
I think specific applications of utilitarianism might say that modifying the values of yourself or of others would be beneficial even in terms of your current utility function.
Yeah.
Where things start getting interesting is when not only are some values implemented as variable-weight within the function, but the functions themselves become part of the calculation, and utility functions become modular and partially recursive.
I’m currently convinced that there’s at least one (perhaps well-hidden) such recursive module of utility-for-utility-functions currently built into the human brain, and that clever hacking of this module might be very beneficial in the long run.
Are you saying that no form of utilitarianism will ever conclude that one person should sacrifice some value for the benefit of the many?
No form of the official theory in the papers I read, at the very least.
Many applications or implementations of utilitarianism or utilitarian(-like) systems do, however, enforce rules that if one agent's weighted utility loss improves the total weighted utility of multiple other agents by a significant margin, that is what is right to do. The margin's size and specific numbers and uncertainty values will vary by system.
I've never seen a system that would enforce such rules without a weighting function for the utilities of some kind to correct for limited information, uncertainty, and diminishing-returns-like problems.
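A minimal sketch of the kind of rule being described, assuming made-up weights and a made-up safety margin (not any actual system's numbers):

    def endorse_sacrifice(loss, gains, weight_self=1.0, weight_others=1.0, margin=1.2):
        # Endorse the sacrifice only if the weighted total gain to the others beats the
        # weighted loss to the sacrificing agent by at least the given safety margin.
        weighted_loss = weight_self * loss
        weighted_gain = weight_others * sum(gains)
        return weighted_gain >= margin * weighted_loss

    # One agent loses 5 utilons; three others gain 3 each.
    print(endorse_sacrifice(5, [3, 3, 3]))   # True: 9 >= 1.2 * 5
    # One agent loses 5; one other gains 5: not a big enough margin.
    print(endorse_sacrifice(5, [5]))         # False: 5 < 6

The margin here stands in for the uncertainty and diminishing-returns corrections mentioned above.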
It seems to me that these two paragraphs contradict each other. Do you think the "he should" means something different to "it is right for him to do so"?
No, they don’t have any major differences in utilitarian systems.
It seems I was confused when trying to answer your question. Utilitarianism can be seen as an abstract system of rules to compute stuff.
Certain ways to apply those rules to compute stuff are also called utilitarianism, including the philosophy that the maximum total utility of a population should take precedence over the utility of one individual.
If utilitarianism is simply the set of rules you use to compute which things are best for one single purely selfish agent, then no, nothing concludes that the agent should sacrifice anything. If you adhere to the classical philosophy related to those rules, then yes, any human will conclude what I've said in that second paragraph in the grandparent (or something similar). This latter (the philosophy) is historically what appeared first, and is also what's presented on Wikipedia's page on utilitarianism.
Isn’t that decision theory?
I share this view. When I appear to forfeit some utility in favor of someone else, it’s because I’m actually maximizing my own utility by deriving some from the knowledge that I’m improving the utility of other agents.
Other agents' utility functions and values are not directly valued, at least not among humans. Some (most?) of us just do indirectly value improving the value and utility of other agents, either as an instrumental step or a terminal value. Because of this, I believe most people who have/profess the belief of an "innate goodness of humanity" are mind-projecting their own value-of-others'-utility.
Whether this is a true value actually shared by all humans is unknown to me. It is possible that those who appear not to have this value are simply broken in some temporary, environment-based manner. It's also possible that this is a purely environment-learned value that becomes "terminal" in the process of being trained into the brain's reward centers due to its instrumental value in many situations.
You are anthropomorphizing concepts. Morality is a human artifact, and artifacts have no more purpose than natural objects.
Morality is a useful tool to regulate interactions between individuals. There are efforts to make it a better tool for that purpose. That does not mean that morality should be used to regulate interactions.
Human artifacts are generally created to do jobs, e.g. hammers.
Tool. Like I said.
Does that mean you have a better tool in mind, or that interactions don't need regulation?
If I put a hammer under a table to keep the table from wobbling, am I using a tool or not? If the hammer is the only object within range that is the right size for the table, and there is no task which requires a weighted lever, is the hammer intended to balance the table simply by virtue of being the best tool for the job?
Fit-for-task is a different quality than purpose. Hammers are useful tools to drive nails, but poor tools for determining what nails should be driven. There are many nails that should not be driven, despite the presence of hammers.
If you can't bang in nails with it, it isn't a hammer. What else you can do with it isn't relevant.
???
So we can judge things morally wrong, because we have a tool to do the job, but we shouldn’t in many cases, because...? (And what kind of “shouldn’t” is that?)
By that, the absence of nails makes the weighted lever not a hammer. I think that hammerness is intrinsic and not based on the presence of nails; likewise morality can exist when there is only one active moral agent.
The metaphor was that you could, in principle, drive nails literally everywhere you can see, including in your brain. Will you agree that one should not drive nails literally everywhere, but only in select locations, using the right type of nail for the right location? If you don’t, this part of the conversation is not salvageable.
What is that supposed to be analogous to? If you have a workable system of ethics, then it doesn't make judgments willy-nilly, any more than a workable system of logic allows quodlibet.
(Edited for explicit analogy.)
Basically, it's not because you have a morality (hammer) that happens to be convenient for making laws and rules of interaction (balancing the table) that morality is necessarily the best and intended tool for making rules, or that morality itself tells you what you should make laws about, or even that you should make laws in the first place.
Moral rules and legal laws aren't the same thing. Modern societies don't legislate against adultery, although they may consider it against the moral rules.
If you are going to override a moral rule (i.e. neither punish nor even disapprove of an action), what would you override it in favour of? What would count more?
I would refuse to allow moral judgement on things which lie outside of the realm of appropriate morality. Modern societies don’t legislate against adultery because consensual sex is amoral. Using moral guidelines to determine which people are allowed to have consensual sex is like using a hammer to open a window.
Oh, that was your concern. It has no bearing on what I was saying.
Can you provide an example of a moral rule that you believe might be/has been overridden, then?
I don’t see where I’ve implied that one would override a moral rule. What I’m saying is that most current moral systems are not good enough to even make rational rules about some types of actions in the first place, and that in the long run we would regret doing so after doing some metaethics.
Uncertainty and the lack of reliability of our own minds and decision systems are key points of the above.
Because they're not written on a stone tablet handed down to Humanity from God the Holy Creator, or derived from some other verifiable, falsifiable and physical fact of the universe independent of humans? And because there are possible variations within the value systems, rather than them being perfectly uniform and identical across the entire species?
I have warning lights that there’s an argument about definitions here.
That would make them not-objective. Subjective and intersubjective remain as options.
Then, again, why would anyone else be beholden to my values?
Because valuing others’ subjective values, or acting as if one did, is often a winning strategy in game-theoretic terms.
If one posits that by working together we can achieve a utopia where each individual's values are maximized, and that to work together efficiently we need to at least act according to a model that would assign utility to others' values, would it not follow that it's in everyone's best interests for everyone to build and follow such models?
The free-loader problem is an obvious downside of the above simplification, but that and other issues don’t seem to be part of the present discussion.
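For what it's worth, here is a minimal sketch of that game-theoretic point, using a standard one-shot prisoner's dilemma with assumed payoffs (the care parameter is a hypothetical stand-in for "acting as if one valued the other agent's utility"):

    RAW = {  # (my_move, their_move) -> (my_payoff, their_payoff)
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def best_move(their_move, care=0.0):
        # Pick the move maximizing my payoff plus `care` times the other agent's payoff.
        def effective(move):
            mine, theirs = RAW[(move, their_move)]
            return mine + care * theirs
        return max(["C", "D"], key=effective)

    print(best_move("C", care=0.0), best_move("D", care=0.0))  # D D: defection dominates for the purely selfish
    print(best_move("C", care=1.0))  # C: weighting the other's payoff makes (3, 3) beat exploiting at (5, 0)
    print(best_move("D", care=1.0))  # C: which is exactly the free-loader exposure noted above

Mutual cooperation at (3, 3) beats mutual defection at (1, 1) even in raw payoff, which is the sense in which acting as if one valued others' values can be a winning strategy.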
That doesn't make them beholden, i.e. obligated. They can opt not to play that game. They can opt not to value winning.
Only if they achieve satisfaction for individuals better than their behaving selfishly. A utopia that is better on average or in total need not be better for everyone individually.
Could you taboo "beholden" in that first sentence? I'm not sure the "feeling of moral duty born from guilt" I associate with the word "obligated" is quite what you have in mind.
Within context, you cannot opt to not value winning. If you wanted to “not win”, and the preferred course of action is to “not win”, this merely means that you had a hidden function that assigned greater utility to a lower apparent utility within the game.
In other words, you just didn’t truly value what you thought you valued, but some other thing instead, and you end up having in fact won at your objective of not winning that sub-game within your overarching game of opting to play the game or not (the decision to opt to play the game or not is itself a separate higher-tier game, which you have won by deciding to not-win the lower-tier game).
A utopia which purports to maximize utility for each individual but fails to optimize for higher-tier or meta utilities and values is not truly maximizing utility, which violates the premises.
(sorry if I’m arguing a bit by definition with the utopia thing, but my premise was that the utopia brings each individual agent’s utility to its maximum possible value if there exists a maximum for that agent’s function)
I wouldn’t let my values be changed if doing so would thwart my current values. I think you’re contending that the utopia would satisfy my current values better than the status quo would, though.
In that case, I would only resist the utopia if I had a deontic prohibition against changing my values (I don't have very strong ones, but I think they're in here somewhere and for some things). You would call this a hidden utility function; I don't think that adequately models the idea that humans are satisficers and not perfect utilitarians. Deontology is sometimes a way of identifying satisficing conditions for human behavior, and in that sense I think it can be a much stronger argument.
Even supposing that we were perfect utilitarians, if I placed more value on maintaining my current values than I do on anything else, I would still reject modifying myself and moving towards your utopia.
Do you think the utopia is feasible?
Naw. But even if it was, if I placed value on maintaining my current values to a high degree, I wouldn’t modify.
Games emerge where people have things other people value. If someone doesn't value those sorts of things, they are not going to game-play.
I don’t see where higher-tier functions come in.
You are assuming that a utopia will maximise everyone's value individually AND that values diverge. That's a tall order.