There are experts in moral philosophy, and they generally regard the question of realism versus relativism (etc.) to be wide open. The “realism—huh, what, no?!?” response is standard on LW and only on LW. But I don’t see any superior understanding on LW.
Both realism¹ and relativism are false. Unfortunately this comment is too short to contain the proof, but there’s a passable sequence on it.
¹ As you’ve defined it here, anyway. Moral realism as normally defined simply means “moral statements have truth values” and does not imply universal compellingness.
What does it mean for a statement to be true but not universally compelling?
If it isn’t universally compelling for all agents to believe “gravity causes things to fall,” then what do we mean when we say the sentence is true?
Well, there’s the more obvious sense, that there can always exist an “irrational” mind that simply refuses to believe in gravity, regardless of the strength of the evidence. “Gravity makes things fall” is true, because it does indeed make things fall. But not compelling to those types of minds.
But, in a more narrow sense, which we are more interested in when doing metaethics, a sentence of the form “action A is xyzzy” may be a true classification of A, and may be trivial to show, once “xyzzy” is defined. But an agent that did not care about xyzzy would not be moved to act based on that. It could recognise the truth of the statement but would not care.
For a stupid example, I could say to you “if you do 13 push-ups now, you’ll have done a prime number of push-ups”. Well, the statement is true, but the majority of the world’s population would be like “yeah, so what?”.
In contrast, a statement like “if you drink-drive, you could kill someone!” is generally (but sadly not always) compelling to humans. Because humans like to not kill people, they will generally choose not to drink-drive once they are convinced of the truth of the statement.
But isn’t the whole debate about moral realism vs. anti-realism about whether “Don’t murder” is universally compelling to humans? Noticing that pebblesorters aren’t compelled by our values doesn’t explain whether humans should necessarily find “don’t murder” compelling.
I identify as a moral realist, but I don’t believe all moral facts are universally compelling to humans, at least not if “universally compelling” is meant descriptively rather than normatively. I don’t take moral realism to be a psychological thesis about what particular types of intelligences actually find compelling; I take it to be the claim that there are moral obligations and that certain types of agents should adhere to them (all other things being equal), irrespective of their particular desire sets and whether or not they feel any psychological pressure to adhere to these obligations. This is a normative claim, not a descriptive one.
What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that’s it.
When I said universally compelling, I meant universally. To all agents, not just humans. Or any large class. For any true statement, you can probably expect to find a surprisingly large number of agents who just don’t care about it.
Whether “don’t murder” (or rather, “murder is bad” since commands don’t have truth values, and are even less likely to be generally compelling) is compelling to all humans is a question for psychology. As it happens, given the existence of serial killers and sociopaths, probably the answer is no, it isn’t. Though I would hope it to be compelling to most.
I have shown you two true but non-universally-compelling arguments. Surely the difference must be clear now.
What? Moral realism (in the philosophy literature) is about whether moral statements have truth values, that’s it.
This is incorrect, in my experience. Although “moral realism” is a notoriously slippery phrase and gets used in many subtly different ways, I think most philosophers engaged in the moral realism vs. anti-realism debate aren’t merely debating whether moral statements have truth values. The position you’re describing is usually labeled “moral cognitivism”.
Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values (“false” is a truth value, after all). But I don’t think that modification captures the tenor of the debate either. Moral realists are usually defending a whole suite of theses—not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.
I think you guys should taboo “moral realism”. I understand that it’s important to get the terminology right, but IMO debates about nothing but terminology have little value.
Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values (“false” is a truth value, after all).
Err, right, yes, that’s what I meant. Error theorists do of course also claim that moral statements have truth values.
Moral realists are usually defending a whole suite of theses—not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.
True enough, though I guess I’d prefer to talk about a single well-specified claim than a “usually” cluster in philosopher-space.
So, a philosopher who says: [...] is not a moral realist? Because that philosopher does not seem to be a subjectivist, an error theorist, or non-cognitivist.
If that philosopher believes that statements like “murder is wrong” are true, then they are indeed a realist. Did I say something that looked like I would disagree?
You guys are talking past each other, because you mean something different by ‘compelling’. I think Tim means that X is compelling to all human beings if any human being will accept X under ideal epistemic circumstances. You seem to take ‘X is universally compelling’ to mean that all human beings already do accept X, or would on a first hearing.
Would you agree that all human beings would accept all true statements under ideal epistemic circumstances (i.e. having heard all the arguments, seen all the evidence, in the best state of mind)?
I guess I must clarify. When I say ‘compelling’ here I am really talking mainly about motivational compellingness. Saying “if you drink-drive, you could kill someone!” to a human is generally, motivationally compelling as an argument for not drink-driving: because humans don’t like killing people, a human will decide not to drink-drive (one in a rational state of mind, anyway).
This is distinct from accepting statements as true or false! Any rational agent, give or take a few, will presumably believe you about the causal relationship between drink-driving and manslaughter once presented with sufficient evidence. But it is a tiny subset of these who will change their decisions on this basis. A mind that doesn’t care whether it kills people will see this information as an irrelevant curiosity.
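A toy sketch of that distinction, with invented agents and values (nothing here is anyone’s actual model): both agents perform the same epistemic update, but only the one that disvalues killing changes its decision.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    values: set = field(default_factory=set)   # what the agent cares about
    beliefs: set = field(default_factory=set)  # facts the agent accepts as true

    def accept(self, fact):
        # Epistemic step: any sufficiently rational agent updates on the evidence.
        self.beliefs.add(fact)

    def will_drink_drive(self):
        # Motivational step: the accepted fact only changes behaviour if the
        # agent actually disvalues killing people.
        knows_risk = "drink-driving can kill someone" in self.beliefs
        return not (knows_risk and "not killing people" in self.values)

human = Agent("human", values={"not killing people"})
indifferent = Agent("indifferent mind", values={"paperclips"})

for agent in (human, indifferent):
    agent.accept("drink-driving can kill someone")  # both accept the true statement
    print(agent.name, "drink-drives:", agent.will_drink_drive())
# human drink-drives: False
# indifferent mind drink-drives: True
```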
Having looked over that sequence, I haven’t found any proof that moral realism (on either definition) or moral relativism is false. Could you point me more specifically to what you have in mind (or just put the argument in your own words, if you have the time)?
Edit: (Sigh), I appreciate the link, but I can’t make heads or tails of ‘No Universally Compelling Arguments’. I speak from ignorance as to the meaning of the article, but I can’t seem to identify the premises of the argument.
No Universally Compelling Arguments is the argument against universal compellingness, as the name suggests.
Inseparably Right; or Joy in the Merely Good gives part of the argument that humans should be able to agree on ethical values. Another substantial part is in Moral Error and Moral Disagreement.
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn’t buy it.
So, there’s some sort of assumption as to what minds are:
I also wish to establish the notion of a mind as a causal, lawful, physical system… [emphasis original]
and an assumption that a suitably diverse set of minds can be described in less than a trillion bits. Presumably the reason for that upper bound is because there are a few Fermi estimates that the information content of a human brain is in the neighborhood of one trillion bits.
Of course, if you restrict the set of minds to those with special properties (e.g., human minds), then you might find universally compelling arguments on that basis:
Oh, there might be argument sequences that would compel any neurologically intact human...
From which we get Coherent Extrapolated Volition and friends.
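To make the counting intuition above explicit (this is a rough gloss of my own, with an independence assumption the post itself doesn’t make): if a mind is specified by at most $N$ bits, there are at most $2^N$ distinct minds, and a universal claim has to survive every one of them, while an existential claim needs only a single witness. If each mind satisfied $X$ independently with probability $p$, then
$$\Pr\big[\forall m\, X(m)\big] \approx p^{\,2^N}, \qquad \Pr\big[\exists m\, X(m)\big] \approx 1 - (1-p)^{2^N},$$
so with $N = 10^{12}$ the universal claim is astronomically unlikely unless $p$ is essentially 1, and the existential claim is nearly certain unless $p$ is essentially 0. That is the sense in which each universal generalization gets “two to the trillionth chances to be false”.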
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
This doesn’t seem true to me, at least not as a general rule. For example, given every terrestrial DNA sequence describable in a trillion bits or less, it is not the case that every generalization of the form ‘s:X(s)’ has two to the trillionth chances to be false (e.g. ‘have more than one base pair’, ‘involve hydrogen’ etc.). Given that this doesn’t hold true of many other things, is this supposed to be a special fact about minds? Even then, it would seem odd to say that while all generalizations of the form m:X(m) have two to the trillionth chances to be false, nevertheless the generalization ‘for all minds, a generalization of the form m:X(m) has two to the trillionth chances to be false’ (which does seem to be of the form m:X(m)) is somehow more likely.
Also, doesn’t this inference imply that ‘being convinced by an argument’ is a bit that can flip on or off independently of any others? Eliezer doesn’t think that’s true, and I can’t imagine why he would think his (hypothetical) interlocutor would accept it.
I mean to say, I think the argument is something of a paradox:
The claim the argument purports to defeat is something like this: for all minds, A is convincing. Let’s call this m:A(m).
The argument goes like this: for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind. Call this m:U(m), if you grant me that this claim has the form m:X(m).
If we infer from m:U(m) that any claim of the form m:X(m) is unlikely to be true, then to whatever extent I am persuaded that m:A(m) is unlikely to be true, to that extent I ought to be persuaded that m:U(m) is unlikely to be true. You cannot accept the argument, because accepting it as decisive entails accepting decisive reasons for rejecting it.
The argument seems to be fixable at this stage, since there’s a lot of room to generate significant distinctions between m:A(m) and m:U(m). If you were pressed to defend it (presuming you still wish to be generous with your time) how would you fix this? Or am I getting something very wrong?
for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind.
That’s not what it says; compare the emphasis in both quotes.
If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization “All minds m: X(m)” has two to the trillionth chances to be false, while each existential generalization “Exists mind m: X(m)” has two to the trillionth chances to be true.
Sorry, I may have misunderstood and presumed that ‘two to the trillionth chances to be false’ meant ‘one in two to the trillionth chances to be true’. That may be wrong, but it doesn’t affect my argument at all: EY’s argument for the implausibility of m:A(m) is that claims of the form m:X(m) are all implausible. His argument to the effect that all claims of the form m:X(m) are implausible is itself a claim of the form m:X(m).
Sorry, I was speaking ambiguously. I meant ‘rational’ not in the normative sense that distinguishes good agents from bad ones, but ‘rational’ in the broader, descriptive sense that distinguishes anything capable of responding to reasons (even terrible or false ones) from something that isn’t. I assumed that was the sense of ‘rational’ Prawn was using, but that may have been wrong.
Irrelevant. I am talking about rational minds, he is talking about physically possible ones.
UFAI sounds like a counterexample, but I’m not interested in arguing with you about it. I only responded because someone asked for a shortcut in the metaethics sequence.
Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?
Also, moral philosophers mostly regard the question as open in the sense that some of them think that it’s clearly resolved in favor of non-realism, and some philosophers are just not getting it, or that it’s clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don’t know yet.
Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?
What I am seeing is:
* much-repeated confusions (the Standard Muddle), and
* appeals to LW doctrines which aren’t well-founded or well respected outside LW.
If I knew exactly what the superior insight into the problem was, I would write it up and become famous. Insight doesn’t work like that; you don’t know it in advance, you get an “Aha” when you see it.
Also, moral philosophers mostly regard the question as open in the sense that some of them think that it’s clearly resolved in favor of non-realism, and some philosophers are just not getting it, or that it’s clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don’t know yet.
If people can’t agree on how a question is closed, it’s open.
Can you explain what these confusions are, and why they’re confused?
In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this. This is one of the primary reasons I bothered sticking around in the community.
If people can’t agree on how a question is closed, it’s open.
A question can still be “open” in that sense when all the information necessary for a rational person to make a definite judgment is available.
Can you explain what these confusions are, and why they’re confused?
E.g.:
You are trying to impose your morality.
I can think of one model of moral realism, and it doesn’t work, so I will ditch the whole thing.
In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this.
LW doesn’t even claim to have more than about two “dissolutions”. There are probably hundreds of outstanding philosophical problems. Whence the “largely”?
Luke wrote a series of posts on this
Which were shot down by philosophers.
A question can still be “open” in that sense when all the information necessary for a rational person to make a definite judgment is available.
Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.
I can think of one model of moral realism, and it doesn’t work, so I will ditch the whole thing.
This certainly doesn’t describe my reasoning on the matter, and I doubt it describes many others’ here either.
The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, which, having spent a substantial amount of time studying moral philosophy, I have not yet found.
LW doesn’t even claim to have more than about two “dissolutions”. There are probably hundreds of outstanding philosophical problems. Whence the “largely”?
I gave up my study of philosophy because I found such confusions so pervasive. Many “outstanding” philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.
Which were shot down by philosophers.
Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?
Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.
Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent “experts” perpetuating them. This is the conclusion that my experience with the field has wrought.
This certainly doesn’t describe my reasoning on the matter, and I doubt it describes many others’ here either.
I mentioned them because they both came up recently.
The way I consider the issue, if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would, so I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, which, having spent a substantial amount of time studying moral philosophy, I have not yet found.
I have no idea what you mean by that. I don’t think value systems don’t come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from “the ground up”, whether it’s morality or mortgages.
I gave up my study of philosophy because I found such confusions so pervasive. Many “outstanding” philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.
Where is it proven they can be discarded?
Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?
All of them.
Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent “experts” perpetuating them. This is the conclusion that my experience with the field has wrought.
Are you aware that that is basically what every crank says about some other field?
Are you aware that that is basically what every crank says about some other field?
Presumably, if I’m to treat as meaningful evidence about Desrtopa’s crankiness the fact that cranks make statements similar to Desrtopa, I should first confirm that non-cranks don’t make similar statements.
It seems likely to me that for every person P, there exists some field F such that P believes many aspects of F exist only because of incompetent “experts” perpetuating them. (Consider cases like F=astrology, F=phrenology, F=supply-side economics, F= feminism, etc.) And that this is true whether P is a crank or a non-crank.
So it seems this line of reasoning depends on some set F2 of fields such that P believes this of F in F2 only if P is a crank.
I understand that you’re asserting implicitly that moral philosophy is a field in F2, but this seems to be precisely what Desrtopa is disputing.
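Put in Bayesian terms (a sketch of the point only, with S standing for “many of that field’s experts are incompetent”): S is evidence of crankiness only to the extent that cranks assert it more readily than non-cranks do,
$$\frac{\Pr[\text{crank}\mid S]}{\Pr[\text{non-crank}\mid S]} = \frac{\Pr[S\mid \text{crank}]}{\Pr[S\mid \text{non-crank}]}\cdot\frac{\Pr[\text{crank}]}{\Pr[\text{non-crank}]},$$
and if non-cranks assert S about some field (astrology, phrenology, whatever) about as readily as cranks do, the likelihood ratio is close to 1 and S on its own tells us very little.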
Could we reasonably say that an F is in F2 if most of the institutional participants in that F are intelligent, well-educated people? This leaves room for cranks who are right to object to F, of course.
So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.
So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?
No, I don’t think we can reasonably say that. Dan Dennett might be a crank, but it takes more than that argument to demonstrate the fact.
Good point. So how about this: someone is a crank if they object to F, where F is in F2 (by my above standard), and the reasons they have for objecting to F are not recognized as sound by a proportionate number of intelligent and well educated people.
(shrug) I suppose that works well enough, for some values of “proportionate.”
Mostly I consider this a special case of the basic “who do I trust?” social problem, applied to academic disciplines, and I don’t have any real problem saying about an academic discipline “this discipline is fundamentally confused, and the odds of work in it contributing anything valuable to the world is slim.”
Of course, as Prawn has pointed out a few times, there’s also the question of where we draw the lines around a discipline, but I mostly consider that an orthogonal question to how we evaluate the discipline.
I think this question is moot in the case of philosophy in general then; I think any philosopher worth their shirt should tell you that trust is a wholly inappropriate attitude toward philosophers, philosophical institutions and philosophical traditions.
Not in the sense I meant it. If a philosopher makes a claim that seems on the surface to be false or incoherent, I have to decide whether to devote the additional effort to evaluating it to confirm or deny that initial judgment. One of the factors that will feed into that decision will be my estimate of the prior probability that they are saying something false or incoherent. If I should refer to that using a word other than “trust”, that’s fine, tell me what word will refer to that to you and I’ll try to use it instead.
No, that describes what I’m talking about, so long as by trust you mean ‘a reason to hear out an argument that makes reference to the credibility of a field or its professionals’, rather than just ‘a reason to hear out an argument’. If the former, then I do think this is an inappropriate attitude toward philosophy. One reason for this is that such trust seems to depend on having a good standard for the success of a field independently of hearing out an argument. I can trust physicists because they make such good predictions, and because their work leads to such powerful technological advances. I don’t need to be a physicist to observe that. I don’t think philosophy has anything like that to speak for it. The only standards of success are the arguments themselves, and you can only evaluate them by just going ahead and doing some philosophy.
You can find trust in an institution independently of such standards by watching to see whether people you think are otherwise credible take it seriously. That will of course work with philosophy too, but if you trust Tom to be able to judge whether or not a philosophical claim is worth pursuing (and if I’m right about the above), then Tom can only be trustworthy in this regard because he has been doing philosophy (i.e. engaging with the argument). This could get you through the door on some particular philosophical claim, but not into philosophy generally.
so long as by trust you mean ‘a reason to hear out an argument that makes reference to the credibility of a field or its professionals’, rather than just ‘a reason to hear out an argument’.
I mean neither, I mean ‘a reason to devote time and resources to evaluating the evidence for and against a position.’ As you say, I can only evaluate a philosophical argument by ‘going ahead and doing some philosophy,’ (for a sufficiently broad understanding of ‘philosophy’), but my willingness to do, say, 20 hours of philosophy in order to evaluate Philosopher Sam’s position is going to depend on, among other things, my estimate of the prior probability that Sam is saying something false or incoherent. The likelier I think that is, the less willing I am to spend those 20 hours.
I mean neither, I mean ‘a reason to devote time and resources to evaluating the evidence for and against a position.’
That’s fine, that’s not different from ‘hearing out an argument’ in any way important to my point (unless I’m missing something).
EDIT: Sorry, if you don’t want to include ‘that makes some reference to the credibility...etc.’ (or something like that) in what you mean by ‘trust’ then you should use a different term. Curiosity, or money, or romantic interest would all be reasons to devote time...etc. and clearly none of those are rightly called ‘trust’.
my estimate of the prior probability that Sam is saying something false or incoherent.
What do you have in mind as the basis for such a prior? Can you give me an example?
Point taken about other reasons to devote resources other than trust. I think we’re good here.
Re: example… I don’t mean anything deeply clever. E.g., if the last ten superficially-implausible ideas Sam espoused were false or incoherent, my priors for it will be higher than if the last ten such ideas were counterintuitive and brilliant.
Hm. I can’t argue with that, and I suppose it’s trivial to extend that to ‘if the last ten superficially-implausible ideas philosophy professors/books/etc. espoused were false or incoherent...’. So, okay, trust is an appropriate (because necessary) attitude toward philosophers and philosophical institutions. I think it’s right to say that philosophy doesn’t have external indicators in the way physics or medicine does, but the importance of that point seems diminished.
So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.
Dennett only thinks the idea of qualia is confused. He has no problem with his own books on consciousness.
So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?
No. He isn’t dismissing a whole academic subject, or a sub-field. Just one idea.
What is Dennett’s account for why philosophers of consciousness other than himself continue to think that a dismissable idea like qualia is worth continuing to discuss, even though he considers it closed?
While going on tangents is a common and expected occurrence, each such tangent has a chance of steering/commandeering the original conversation. LW has a tendency of going meta too much, when actual object level discourse would have a higher content value.
While you were practically invited to indulge in the death-by-meta with the hook of “Are you aware that that is basically what every crank says about some other field?”, we should be aware when leaving the object-level debating, and the consequences thereof. Especially since the lure can be strong:
When sufficiently meta, object-level disagreements may fizzle into cosmic/abstract insignificance, allowing for a peaceful pseudo-resolution, which ultimately just protects that which should be destroyed by the truth from being destroyed.
Such lures may be interpreted similarly to ad hominems: The latter try to drown out object-level disagreements by flinging shit until everyone’s dirty, the former zoom out until everyone’s dizzy floating in space, with vertigo. Same result to the actual debate. It’s an effective device, and one usually embraced by someone who feels like object-level arguments no longer serve his/her goals.
Ironically, this very comment goes meta lamenting going meta.
I have no idea what you mean by that. I don’t think value systems don’t come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from “the ground up”, whether it’s morality or mortgages.
I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing. We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us. We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.
Create a reasoning engine that doesn’t have those ethical systems built into it, and it would have no reason to care about them.
Where is it proven they can be discarded?
You can’t build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on “this defies our moral intuitions, therefore it’s wrong,” and that was never addressed with “moral intuitions don’t work that way,” then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.
All of them.
That’s not an example. Please provide an actual one.
Are you aware that that is basically what every crank says about some other field?
Sure, but it’s also what philosophers say about each other, all the time. Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy. Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don’t get it. “Most philosophers are incompetent, except for the ones who’re sensible enough to see things my way,” is a perfectly ordinary perspective among philosophers.
I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing.
But I wasn’t saying that. I am arguing that moral claims have truth values that aren’t indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.
We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us.
We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn’t stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.
We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.
We can see, in reductionistic terms, how the entities could converge on a uniform set of truth values. There is nothing non-reductionist about anything I have said. Reductionism does not force one answer to metaethics.
Create a reasoning engine that doesn’t have those ethical systems built into it, and it would have no reason to care about them.
Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.
You can’t build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on “this defies our moral intuitions, therefore it’s wrong,” and that was never addressed with “moral intuitions don’t work that way,” then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.
Please explain why moral intuitions don’t work that way.
Please provide some foundations for something that aren’t unjustified by anything more foundational.
That’s not an example. Please provide an actual one
You can select one at random, obviously.
Sure, but it’s also what philosophers say about each other, all the time.
No, philosophers don’t regularly accuse each other of being incompetent, just of being wrong. There’s a difference.
Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy.
You are inferring a lot from one example.
Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don’t get it. “Most philosophers are incompetent, except for the ones who’re sensible enough to see things my way,” is a perfectly ordinary perspective among philosophers.
But I wasn’t saying that. I am arguing that moral claims have truth values that aren’t indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.
I don’t understand, can you rephrase this?
We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn’t stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.
The standards by which we judge the truth of mathematical claims are not just inside us. One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we’ve created within ourselves, but something we’ve discovered and observed.
If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to doubt that the math inside us is different from the math outside of us.
What evidence do we have that this is the case for morality?
Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.
My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on. You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like “happiness is good” will not itself be able to prove the goodness of happiness.
While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.
Please explain why moral intuitions don’t work that way.
People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.
Please provide some foundations for something that aren’t unjustified by anything more foundational.
I don’t think I understand this, can you rephrase it?
You can select one at random. obviously.
I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them. You’re the one claiming that they’re there at all, that’s why I’m asking you to do it.
No, philosophers don’t regularly accuse each other of being incompetent, just of being wrong. There’s a difference.
Philosophers don’t usually accuse each other of being incompetent in their publications, because it’s not conducive to getting other philosophers to regard their arguments dispassionately, and that sort of open accusation is generally frowned upon in academic circles whether one believes it or not. They do regularly accuse each other of being comprehensively wrong for their entire careers. In my personal conversations with philosophers (and I never considered myself to have really taken a class, or attended a lecture by a visitor, if I didn’t speak with the person teaching it on a personal basis to probe their thoughts beyond the curriculum) I observed a whole lot of frustration with philosophers who they think just don’t get their arguments. It’s unsurprising that people would tend to become so frustrated participating in a field that basically amounts to long running arguments extended over decades or centuries. Imagine the conversation we’re having now going on for eighty years, and neither of us has changed our minds. If you didn’t find my arguments convincing, and I hadn’t budged in all that time, don’t you think you’d start to suspect that I was particularly thick?
You are inferring a lot from one example.
I’m using an example illustrative of my experience.
Sounds to me like PrawnOfFate is saying that any sufficiently rational cognitive system will converge on a certain set of ethical goals as a consequence of its structure, i.e. that (human-style) ethics is a property that reliably emerges in anything capable of reason.
I’d say the existence of sociopathy among humans provides a pretty good counterargument to this (sociopaths can be pretty good at accomplishing their goals, so the pathology doesn’t seem to be indicative of a flawed rationality), but at least the argument doesn’t rely on counting fundamental particles of morality or something.
I would say so also, but PrawnOfFate has already argued that sociopaths are subject to additional egocentric bias relative to normal people and thereby less rational. It seems to me that he’s implicitly judging rationality by how well it leads to a particular body of ethics he already accepts, rather than how well it optimizes for potentially arbitrary values.
Well, I’m not a psychologist, but if someone asked me to name a pathology marked by unusual egocentric bias I’d point to NPD, not sociopathy.
That brings up some interesting questions concerning how we define rationality, though. Pathologies in psychology are defined in terms of interference with daily life, and the personality disorder spectrum in particular usually implies problems interacting with people or societies. That could imply either irreconcilable values or specific flaws in reasoning, but only the latter is irrational in the sense we usually use around here. Unfortunately, people are cognitively messy enough that the two are pretty hard to distinguish, particularly since so many human goals involve interaction with other people.
In any case, this might be a good time to taboo “rational”.
The standards by which we judge the truth of mathematical claims are not just inside us.
How do we judge claims about transfinite numbers?
One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we’ve created within ourselves, but something we’ve discovered and observed.
If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to doubt that the math inside us is different from the math outside of us.
Mathematics isn’t physics. Mathematicians prove theorems from axioms, not from experiments.
Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.
My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on.
Not necessarily. E.g., for utilitarians, values are just facts that are plugged into the metaethics to get concrete actions.
You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like “happiness is good” will not itself be able to prove the goodness of happiness.
Metaethical systems usually have axioms like “Maximising utility is good”.
While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.
I am not sure what you mean by “exist” here. Claims are objectively true if most rational minds converge on them. That doesn’t require Objective Truth to float about in space here.
Please explain why moral intuitions don’t work that way.
People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.
Does that mean we can’t use moral intuitions at all, or that they must be used with caution?
I don’t think I understand this, can you rephrase it?
Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can’t be justified by anything more foundational. LessWrongians don’t like intuitions, but don’t seem to be able to explain how to manage without them.
I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them.
Did you post any comments explaining to the professional philosophers where they had gone wrong?
Imagine the conversation we’re having now going on for eighty years, and neither of us has changed our minds. If you didn’t find my arguments convincing, and I hadn’t budged in all that time, don’t you think you’d start to suspect that I was particularly thick?
I don’t see the problem. Philosophical competence is largely about understanding the problem.
Mathematics isn’t physics. Mathematicians prove theorems from axioms, not from experiments.
Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we’ve discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems.
Metaethical systems usually have axioms like “Maximising utility is good”.
But utility is a function of values. A paperclipper will produce utility according to different values than a human.
I am not sure what you mean by “exist” here. Claims are objectively true if most rational minds converge on them. That doesn’t require Objective Truth to float about in space here.
Why would most rational minds converge on values? Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.
Does that mean we can’t use moral intuitions at all, or that they must be used with caution?
It means we should be aware of what our intuitions are and what they’ve developed to be good for. Intuitions are evolved heuristics, not a priori truth generators.
Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can’t be justified by anything more foundational. LessWrongians don’t like intuitions, but don’t seem to be able to explain how to manage without them.
It seems like you’re equating intuitions with axioms here. We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.
Did you post any comments explaining to the professional philosophers where they had gone wrong?
If I did, I don’t remember them. I may have, I may have felt someone else adequately addressed them, I may not have felt it was worth the bother.
It seems to me that you’re trying to foist onto me the effort of locating something which you were the one to testify was there in the first place.
I don’t see the problem. Philosophical competence is largely about understanding the problem.
And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they’re dealing with.
In any case, I reject the notion that dismissing large contingents of philosophers as lacking in competence is a valuable piece of evidence with respect to crankishness, and if you want to convince me that I am taking a crankish attitude, you’ll need to offer some other evidence.
Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we’ve discovered rather than produced. We judge claims about nonobserved mathematical constructs like transfinites according to those systems.
But claims about transfinities don’t correspond directly to any object. Maths is “spun off” from other facts, on your view. So, by analogy, moral realism could be “spun off” without needing any Form of the Good to correspond to goodness.
Metaethical systems usually have axioms like “Maximising utility is good”.
But utility is a function of values. A paperclipper will produce utility according to different values than a human.
You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn’t care what values are, it just sums or averages them.
Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.
Why would most rational minds converge on values?
a) they don’t have to converge on preferences, since things like utilitarianism are preference-neutral.
b) they already have to some extent because they are rational
Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.
I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on “maximise group utility” whilst what is utilitous varies considerably.
Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can’t be justified by anything more foundational. LessWrongians don’t like intuitions, but don’t seem to be able to explain how to manage without them.
It seems like you’re equating intuitions with axioms here.
Axioms are formal statements, intuitions are gut feelings that are often used to justify axioms.
We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.
There is another sense of “intuition” where someone feels that it’s going to rain tomorrow or something. They’re not the foundational kind.
And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they’re dealing with.
But claims about transfinities don’t correspond directly to any object. Maths is “spun off” from other facts, on your view. So, by analogy, moral realism could be “spun off” without needing any Form of the Good to correspond to goodness.
Spun off from what, and how?
You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn’t care what values are, it just sums or averages them.
Speaking as a utilitarian, yes, utilitarianism does care about what values are. If I value paperclips, I assign utility to paperclips, if I don’t, I don’t.
Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.
Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?
I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on “maximise group utility” whilst what is utilitous varies considerably.
So what if a paperclipper arrives at “maximize group utility,” and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn’t demand any overlap of end-goal with other utility maximizers.
Axioms are formal statements, intuitions are gut feelings that are often used to justify axioms.
But, as I’ve pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.
If our axioms are grounded in our intuitions, then entities which don’t share our intuitions will not share our axioms.
So do they call for them to be fired?
No, but neither do I, so I don’t see why that’s relevant.
Request accepted, I’m not sure if he’s being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it’s unlikely to be a productive use of my time.
What is your basis for the designation? I am not arguing with your suggestion (I was leaning in the same direction myself), I’m just genuinely curious. In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?
In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?
“Troll” is a somewhat fuzzy label. Sometimes when I am wanting to be precise or polite and avoid any hint of Fundamental Attribution Error I will replace it with the rather clumsy or verbose “person who is exhibiting a pattern of behaviour which should not be fed”. The difference between “Person who gets satisfaction from causing disruption” and “Person who is genuinely confused and is displaying an obnoxiously disruptive social attitude” is largely irrelevant (particularly when one has their Hansonian hat on).
If there was a word in popular use that meant “person likely to be disruptive and who should not be fed” that didn’t make any assumptions or implications of the intent of the accused then that word would be preferable.
I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.
Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?
Rationality is not an automatic process, it is a skill that has to be learnt and consciously applied. Individuals will only be rational if their values prompt them to. And rationality itself implies valuing certain things (lack of bias, non-arbitrariness).
So what if a paperclipper arrives at “maximize group utility,” and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn’t demand any overlap of end-goal with other utility maximizers.
Utilitarians want to maximise the utility of their groups, not their own utility. They don’t have to believe the utility of others is utilitous to them, they just need to feed facts about group utility into an aggregation function. And, using the same facts and same function, different utilitarians will converge. That’s kind of the point.
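A minimal sketch of that convergence claim (the agents, options, and numbers are all invented for illustration): two utilitarians with very different personal values, fed the same facts and the same aggregation function, produce the same group-level ranking.

```python
# Invented example: the aggregation step is value-neutral, so agents who
# personally value different things still converge on the same group verdict.

# "Facts": each option's utility for each member of the group.
options = {
    "build_park":      {"human_a": 5, "human_b": 4, "paperclipper": 0},
    "build_clip_mine": {"human_a": 1, "human_b": 1, "paperclipper": 9},
}

def group_utility(option):
    """Shared aggregation function: sum every member's utility for the option."""
    return sum(options[option].values())

# The two utilitarians get very different things out of each option personally...
personal = {
    "human_a":      {opt: utils["human_a"] for opt, utils in options.items()},
    "paperclipper": {opt: utils["paperclipper"] for opt, utils in options.items()},
}

# ...but applying the same aggregation to the same facts, they rank options identically.
for agent in ("human_a", "paperclipper"):
    ranking = sorted(options, key=group_utility, reverse=True)
    print(agent, "ranks:", ranking, "| personal utilities:", personal[agent])
# Both print the ranking ['build_clip_mine', 'build_park'].
```

This only shows that the aggregation step itself is value-neutral; whether an agent has any reason to adopt that shared function in the first place is the separate question raised above.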
But, as I’ve pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.
Compared to what? Remember, I am talking about foundational intuitions, the kind at the bottom of the stack. The empirical method of locating the truth rests on the intuition that the senses reveal a real external world. Which I share. But what proves it? That’s the foundational issue.
A lot of people here would seem to disagree, since I keep hearing the objection that ethics is all about values, and values are nothing to do with rationality.
It feels to me like the Orthogonality Thesis is a fairly precise statement, and moral anti-realism is a harder to make precise but at least well understood statement, and “values are nothing to do with rationality” is something rather vague that could mean either of those things or something else.
There are experts in moral philosophy, and they generally regard the question realism versus relativism (etc) to be wide open. The “realism—huh, what, no?!?” respsonse is standard on LW and only on LW. But I don’t see any superior understanding on LW.
Both realism¹ and relativism are false. Unfortunately this comment is too short to contain the proof, but there’s a passable sequence on it.
¹ As you’ve defined it here, anyway. Moral realism as normally defined simply means “moral statements have truth values” and does not imply universal compellingness.
What does it mean for a statement to be true but not universally compelling?
If it isn’t universally compelling for all agents to believe “gravity causes things to fall,” then what do we mean when we say the sentence is true?
This is incorrect, in my experience. Although “moral realism” is a notoriously slippery phrase and gets used in many subtly different ways, I think most philosophers engaged in the moral realism vs. anti-realism debate aren’t merely debating whether moral statements have truth values. The position you’re describing is usually labeled “moral cognitivism”.
Anyway, I suspect you mis-spoke here, and intended to say that moral realists claim that (certain) moral statements are true, rather than just that they have truth values (“false” is a truth value, after all). But I don’t think that modification captures the tenor of the debate either. Moral realists are usually defending a whole suite of theses—not just that some moral statements are true, but that they are true objectively and that certain sorts of agents are under some sort of obligation to adhere to them.
I think you guys should taboo “moral realism”. I understand that it’s important to get the terminology right, but IMO debates about nothing but terminology have little value.
Err, right, yes, that’s what I meant. Error theorists do of course also claim that moral statements have truth values.
True enough, though I guess I’d prefer to talk about a single well-specified claim rather than a “usually” cluster in philosopher-space.
So, a philosopher who says:
is not a moral realist? Because that philosopher does not seem to be a subjectivist, an error theorist, or a non-cognitivist.
If that philosopher believes that statements like “murder is wrong” are true, then they are indeed a realist. Did I say something that looked like I would disagree?
You guys are talking past each other, because you mean something different by ‘compelling’. I think Tim means that X is compelling to all human beings if any human being will accept X under ideal epistemic circumstances. You seem to take ‘X is universally compelling’ to mean that all human beings already do accept X, or would on a first hearing.
Would you agree that all human beings would accept all true statements under ideal epistemic circumstances (i.e. having heard all the arguments, seen all the evidence, in the best state of mind)?
I guess I must clarify. When I say ‘compelling’ here I am really talking mainly about motivational compellingness. Saying “if you drink-drive, you could kill someone!” to a human is generally, motivationally compelling as an argument for not drink-driving: because humans don’t like killing people, a human will decide not to drink-drive (one in a rational state of mind, anyway).
This is distinct from accepting statements as true or false! Any rational agent, give or take a few, will presumably believe you about the causal relationship between drink-driving and manslaughter once presented with sufficient evidence. But it is a tiny subset of these who will change their decisions on this basis. A mind that doesn’t care whether it kills people will see this information as an irrelevant curiosity.
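To make that distinction concrete, here is a toy sketch (the agents, numbers, and utility values are all invented for illustration): two agents hold exactly the same belief about the risk, but only the one whose utility function penalises killing changes its decision.

```python
# Toy sketch: identical beliefs, different values, different decisions.
# All names and numbers are illustrative, not a model of any real agent.

def decide(p_kill_if_drink_drive, utility):
    """Pick the action with the higher expected utility."""
    eu_drink_drive = (p_kill_if_drink_drive * utility["killing_someone"]
                      + utility["convenience_of_driving"])
    eu_abstain = 0.0
    return "drink-drive" if eu_drink_drive > eu_abstain else "abstain"

shared_belief = 0.01  # both agents accept the same evidence about the risk

cares_about_killing = {"killing_someone": -1000.0, "convenience_of_driving": 1.0}
does_not_care = {"killing_someone": 0.0, "convenience_of_driving": 1.0}

print(decide(shared_belief, cares_about_killing))  # abstain
print(decide(shared_belief, does_not_care))        # drink-drive
```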
Having looked over that sequence, I haven’t found any proof that moral realism (on either definition) or moral relativism is false. Could you point me more specifically to what you have in mind (or just put the argument in your own words, if you have the time)?
No Universally Compelling Arguments is the argument against universal compellingness, as the name suggests.
Inseparably Right; or Joy in the Merely Good gives part of the argument that humans should be able to agree on ethical values. Another substantial part is in Moral Error and Moral Disagreement.
Thanks!
Edit: (Sigh), I appreciate the link, but I can’t make heads or tails of ‘No Universally Compelling Arguments’. I speak from ignorance as to the meaning of the article, but I can’t seem to identify the premises of the argument.
The central point is a bit buried.
So, there’s some sort of assumption as to what minds are:
and an assumption that a suitably diverse set of minds can be described in less than a trillion bits. Presumably the reason for that upper bound is that there are a few Fermi estimates putting the information content of a human brain in the neighborhood of one trillion bits.
Of course, if you restrict the set of minds to those with special properties (e.g., human minds), then you might find universally compelling arguments on that basis:
From which we get Coherent Extrapolated Volition and friends.
This doesn’t seem true to me, at least not as a general rule. For example, take every terrestrial DNA sequence describable in a trillion bits or less: it is not the case that every generalization of the form ‘s:X(s)’ has two to the trillionth chances to be false (e.g. ‘has more than one base pair’, ‘involves hydrogen’, etc.). Given that this doesn’t hold true of many other things, is this supposed to be a special fact about minds? Even then, it would seem odd to say that while all generalizations of the form m:X(m) have two to the trillionth chances to be false, nevertheless the generalization ‘for all minds, a generalization of the form m:X(m) has two to the trillionth chances to be false’ (which does itself seem to be of the form m:X(m)) is somehow more likely.
Also, doesn’t this inference imply that ‘being convinced by an argument’ is a bit that can flip on or off independently of any others? Eliezer doesn’t think that’s true, and I can’t imagine why he would think his (hypothetical) interlocutor would accept it.
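To make the independence worry concrete, here is a toy sketch (the numbers are invented, and far smaller than a trillion bits): the “so many chances to be false” reasoning treats each member of the class as an independent chance of failure, which is exactly what the DNA examples deny.

```python
# Toy illustration of the independence worry (all numbers made up).
import math

N = 10**6    # stand-in for the enormous number of members of the class
p = 1e-3     # per-member chance of violating X, *if* members are independent

# Reading A: each member is an independent chance for "forall m: X(m)" to fail,
# so the probability that the generalization holds is vanishingly small.
log10_p_all_independent = N * math.log10(1 - p)
print(log10_p_all_independent)   # about -434.5, i.e. P ~ 10**-434

# Reading B: X follows from the structure that defines the class
# (e.g. "every DNA sequence has base pairs"), so no member can violate it.
p_all_structural = 1.0
print(p_all_structural)
```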
It’s not a proof, no, but it seems plausible.
I mean to say, I think the argument is something of a paradox:
The claim the argument purports to defeat is something like this: for all minds, A is convincing. Let’s call this m:A(m).
The argument goes like this: for all minds (at or under a trillion bits etc.), a generalization of the form m:X(m) has a one in two to the trillionth chance of being true for each mind. Call this m:U(m), if you grant me that this claim has the form m:X(m).
If we infer from m:U(m) that any claim of the form m:X(m) is unlikely to be true, then to whatever extent I am persuaded that m:A(m) is unlikely to be true, to that extent I ought to be persuaded that m:U(m) is unlikely to be true. You cannot accept the argument, because accepting it as decisive entails accepting decisive reasons for rejecting it.
The argument seems to be fixable at this stage, since there’s a lot of room to generate significant distinctions between m:A(m) and m:U(m). If you were pressed to defend it (presuming you still wish to be generous with your time) how would you fix this? Or am I getting something very wrong?
That’s not what it says; compare the emphasis in both quotes.
Sorry, I may have misunderstood and presumed that ‘two to the trillionth chances to be false’ meant ‘one in two to the trillionth chances to be true’. That may be wrong, but it doesn’t affect my argument at all: EY’s argument for the implausibility of m:A(m) is that claims of the form m:X(m) are all implausible. His argument to the effect that all claims of the form m:X(m) are implausible is itself a claim of the form m:X(m).
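Schematically, the structure being described might be put like this (this is only one reading of the argument, not a quotation of the original post):

```latex
% Rough schematisation (one reading only, not a quotation of the original post);
% T is the size bound on minds, in bits.
\begin{align*}
  &\text{Target claim:}       && \forall m\, A(m) \\
  &\text{Counterargument } U: && \text{for any predicate } X,\ \Pr\big[\forall m\, X(m)\big] \le 2^{-T} \\
  &\text{The worry:}          && U \text{ itself has the form } \forall m\, X(m),
     \text{ so doubt cast on } \forall m\, A(m) \text{ is doubt cast on } U
\end{align*}
```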
“Rational” is broader than “human” and narrower than “physically possible”.
Do you really mean to say that there are physically possible minds that are not rational? In virtue of what are they ‘minds’ then?
Yes. There are irrational people, and they still have minds.
Ah, I think I just misunderstood which sense of ‘rational’ you intended.
Haven’t you met another human?
Sorry, I was speaking ambiguously. I meant ‘rational’ not in the normative sense that distinguishes good agents from bad ones, but ‘rational’ in the broader, descriptive sense that distinguishes anything capable of responding to reasons (even terrible or false ones) from something that isn’t. I assumed that was the sense of ‘rational’ Prawn was using, but that may have been wrong.
Irrelevant. I am talking about rational minds, he is talking about physically possible ones.
As noted at the time
UFAI sounds like a counterexample, but I’m not interested in arguing with you about it. I only responded because someone asked for a shortcut in the metaethics sequence.
I have essentially been arguing against a strong likelihood of UFAI, so that would be more like gainsaying.
Congratulations on being able to discern an overall message to EY’s metaethical disquisitions. I never could.
Can you explain what you could see which would suggest to you a greater level of understanding than is prevalent among moral philosophers?
Also, moral philosophers mostly regard the question as open in the sense that some of them think that it’s clearly resolved in favor of non-realism, and some philosophers are just not getting it, or that it’s clearly resolved in favor of realism, and some philosophers are just not getting it. Most philosophers are not of the opinion that it could turn out either way and we just don’t know yet.
What I am seeing is:
* much-repeated confusions—the Standard Muddle
* appeals to LW doctrines which aren’t well-founded or well-respected outside LW.
If I knew exactly what the superior insight into the problem was, I would write it up and become famous. Insight doesn’t work like that; you don’t know it in advance, you get an “Aha” when you see it.
If people can’t agree on how a question is closed, it’s open.
Can you explain what these confusions are, and why they’re confused?
In my time studying philosophy, I observed a lot of confusions which are largely dispensed with on Less Wrong. Luke wrote a series of posts on this. This is one of the primary reasons I bothered sticking around in the community.
A question can still be “open” in that sense when all the information necessary for a rational person to make a definite judgment is available.
E.g.:
* “You are trying to impose your morality.”
* “I can think of one model of moral realism, and it doesn’t work, so I will ditch the whole thing.”
LW doesn’t even claim to have more than about two “dissolutions”. There are probably hundreds of outstanding philosophical problems. Whence the “largely”?
Which were shot down by philosophers.
Then it can only be open in the opinions of the irrational. So basically you are saying the experts are incompetent.
In what respect?
This certainly doesn’t describe my reasoning on the matter, and I doubt it describes many others’ here either.
The way I consider the issue: if I try to work out how the universe works from the ground up, I cannot see any way that moral realism would enter into it, whereas I can easily see how value systems would. So I regard assigning non-negligible probability to moral realism as privileging the hypothesis until I find some compelling evidence to support it, and having spent a substantial amount of time studying moral philosophy, I have not yet found any.
I gave up my study of philosophy because I found such confusions so pervasive. Many “outstanding” philosophical problems can be discarded because they rest on other philosophical problems which can themselves be discarded.
Can you give any examples of such, where you think that the philosophers in question addressed legitimate errors?
Yes. I am willing to assert that while there are some competent philosophers, many philosophical disagreements exist only because of incompetent “experts” perpetuating them. This is the conclusion that my experience with the field has wrought.
I mentioned them because they both came up recently.
I have no idea what you mean by that. I don’t think value systems don’t come into it, I just think they are not isolated from rationality. And I am sceptical that you could predict any higher-level phenomenon from “the ground up”, whether it’s morality or mortgages.
Where is it proven they can be discarded?
All of them.
Are you aware that that is basically what every crank says about some other field?
Presumably, if I’m to treat as meaningful evidence about Desrtopa’s crankiness the fact that cranks make statements similar to Desrtopa’s, I should first confirm that non-cranks don’t make similar statements.
It seems likely to me that for every person P, there exists some field F such that P believes many aspects of F exist only because of incompetent “experts” perpetuating them. (Consider cases like F=astrology, F=phrenology, F=supply-side economics, F= feminism, etc.) And that this is true whether P is a crank or a non-crank.
So it seems this line of reasoning depends on some set F2 of fields such that P believes this of F in F2 only if P is a crank.
I understand that you’re asserting implicitly that moral philosophy is a field in F2, but this seems to be precisely what Desrtopa is disputing.
Could we reasonably say that an F is in F2 if most of the institutional participants in that F are intelligent, well-educated people? This leaves room for cranks who are right to object to F, of course.
So, just to pick an example, IIRC Dan Dennett believes the philosophical study of consciousness (qualia, etc.) is fundamentally confused in more or less the same way Desrtopa claims the philosophical study of ethics is.
So under this formulation, if most of the institutional participants in the philosophical study of consciousness are intelligent, well-educated people, Dan Dennett is a crank?
No, I don’t think we can reasonably say that. Dan Dennett might be a crank, but it takes more than that argument to demonstrate the fact.
Good point. So how about this: someone is a crank if they object to F, where F is in F2 (by my above standard), and the reasons they have for objecting to F are not recognized as sound by a proportionate number of intelligent and well educated people.
(shrug) I suppose that works well enough, for some values of “proportionate.”
Mostly I consider this a special case of the basic “who do I trust?” social problem, applied to academic disciplines, and I don’t have any real problem saying about an academic discipline “this discipline is fundamentally confused, and the odds of work in it contributing anything valuable to the world are slim.”
Of course, as Prawn has pointed out a few times, there’s also the question of where we draw the lines around a discipline, but I mostly consider that an orthogonal question to how we evaluate the discipline.
I think this question is moot in the case of philosophy in general then; I think any philosopher worth their shirt should tell you that trust is a wholly inappropriate attitude toward philosophers, philosophical institutions and philosophical traditions.
Not in the sense I meant it.
If a philosopher makes a claim that seems on the surface to be false or incoherent, I have to decide whether to devote the additional effort to evaluating it to confirm or deny that initial judgment. One of the factors that will feed into that decision will be my estimate of the prior probability that they are saying something false or incoherent.
If I should refer to that using a word other than “trust”, that’s fine, tell me what word will refer to that to you and I’ll try to use it instead.
No, that describes what I’m talking about, so long as by trust you mean ‘a reason to hear out an argument that makes reference to the credibility of a field or its professionals’, rather than just ‘a reason to hear out an argument’. If the former, then I do think this is an inappropriate attitude toward philosophy. One reason for this is that such trust seems to depend on having a good standard for the success of a field independently of hearing out an argument. I can trust physicists because they make such good predictions, and because their work leads to such powerful technological advances. I don’t need to be a physicist to observe that. I don’t think philosophy has anything like that to speak for it. The only standards of success are the arguments themselves, and you can only evaluate them by just going ahead and doing some philosophy.
You can find trust in an institution independently of such standards by watching to see whether people you think are otherwise credible take it seriously. That will of course work with philosophy too, but if you trust Tom to be able to judge whether or not a philosophical claim is worth pursuing (and if I’m right about the above), then Tom can only be trustworthy in this regard because he has been doing philosophy (i.e. engaging with the argument). This could get you through the door on some particular philosophical claim, but not into philosophy generally.
I mean neither, I mean ‘a reason to devote time and resources to evaluating the evidence for and against a position.’ As you say, I can only evaluate a philosophical argument by ‘going ahead and doing some philosophy,’ (for a sufficiently broad understanding of ‘philosophy’), but my willingness to do, say, 20 hours of philosophy in order to evaluate Philosopher Sam’s position is going to depend on, among other things, my estimate of the prior probability that Sam is saying something false or incoherent. The likelier I think that is, the less willing I am to spend those 20 hours.
That’s fine, that’s not different from ‘hearing out an argument’ in any way important to my point (unless I’m missing something).
EDIT: Sorry, if you don’t want to include ‘that makes some reference to the credibility...etc.’ (or something like that) in what you mean by ‘trust’ then you should use a different term. Curiosity, or money, or romantic interest would all be reasons to devote time...etc. and clearly none of those are rightly called ‘trust’.
What do you have in mind as the basis for such a prior? Can you give me an example?
Point taken about other reasons to devote resources other than trust. I think we’re good here.
Re: example… I don’t mean anything deeply clever. E.g., if the last ten superficially-implausible ideas Sam espoused were false or incoherent, my priors for it will be higher than if the last ten such ideas were counterintuitive and brilliant.
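One toy way to cash that out (a Beta-Bernoulli sketch with made-up numbers; nothing in the thread commits anyone to this particular model):

```python
# Toy sketch: a track record shifts the prior that Sam's next
# superficially-implausible idea is a dud.
# Beta(1, 1) prior over "the next such idea is false or incoherent";
# all numbers are illustrative.

def p_next_is_dud(n_duds, n_brilliant, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta-Bernoulli model."""
    return (alpha + n_duds) / (alpha + beta + n_duds + n_brilliant)

print(p_next_is_dud(10, 0))  # ~0.92 after ten duds
print(p_next_is_dud(0, 10))  # ~0.08 after ten brilliant ideas
```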
Hm. I can’t argue with that, and I suppose it’s trivial to extend that to ‘if the last ten superficially-implausible ideas philosophy professors/books/etc. espoused were false or incoherent...’. So, okay, trust is an appropriate (because necessary) attitude toward philosophers and philosophical institutions. I think it’s right to say that philosophy doesn’t have external indicators in the way physics or medicine does, but the importance of that point seems diminished.
Dennett only thinks the idea of qualia is confused. He has no problem with his own books on consciousness.
No. He isn’t dismissing a whole academic subject, or a sub-field. Just one idea.
What is Dennett’s account of why philosophers of consciousness other than himself continue to think that a dismissible idea like qualia is worth discussing, even though he considers it closed?
Desrtopa doesn’t think moral philosophy is uniformly nonsense, since Desrtopa thinks one of its well known claims, moral relativism, is true.
While going on tangents is a common and expected occurrence, each such tangent has a chance of steering/commandeering the original conversation. LW has a tendency to go meta too much, when actual object-level discourse would have a higher content value.
While you were practically invited to indulge in the death-by-meta with the hook of “Are you aware that that is basically what every crank says about some other field?”, we should be aware of when we are leaving the object-level debate, and of the consequences thereof. Especially since the lure can be strong:
When sufficiently meta, object-level disagreements may fizzle into cosmic/abstract insignificance, allowing for a peaceful pseudo-resolution, which ultimately just protects that which should be destroyed by the truth from being destroyed.
Such lures may be interpreted similarly to ad hominems: The latter try to drown out object-level disagreements by flinging shit until everyone’s dirty, the former zoom out until everyone’s dizzy floating in space, with vertigo. Same result to the actual debate. It’s an effective device, and one usually embraced by someone who feels like object-level arguments no longer serve his/her goals.
Ironically, this very comment goes meta lamenting going meta.
I mean that value systems are a function of physically existing things, the way a 747 is a function of physically existing things, but we have no evidence suggesting that objective morality is an existing thing. We have standards by which we judge beauty, and we project those values onto the world, but the standards are in us, not outside of us. We can see, in reductionist terms, how the existence of ethical systems within beings, which would feel from the inside like the existence of an objective morality, would come about.
Create a reasoning engine that doesn’t have those ethical systems built into it, and it would have no reason to care about them.
You can’t build a tower on empty air. If a debate has been going on for hundreds of years, stretching back to an argument which rests on “this defies our moral intuitions, therefore it’s wrong,” and that was never addressed with “moral intuitions don’t work that way,” then the debate has failed to progress in a meaningful direction, much as a debate over whether a tree falling in an empty forest makes a sound has if nobody bothers to dissolve the question.
That’s not an example. Please provide an actual one.
Sure, but it’s also what philosophers say about each other, all the time. Wittgenstein condemned practically all his predecessors and peers as incompetent, and declared that he had solved nearly the entirety of philosophy. Philosophy as a field is full of people banging their heads on a wall at all those other idiots who just don’t get it. “Most philosophers are incompetent, except for the ones who’re sensible enough to see things my way,” is a perfectly ordinary perspective among philosophers.
But I wasn’t saying that. I am arguing that moral claims have truth values that aren’t indexed to individuals or societies. That epistemic claim can be justified by appeal to an ontology including Moral Objects, but that is not how I am justifying it: my argument is based on rationality, as I have said many times.
We have standards by which we judge the truth values of mathematical claims, and they are inside us too, and that doesn’t stop mathematics being objective. Relativism requires that truth values are indexed to us, that there is one truth for me and another for thee. Being located in us, or being operated by us, are not sufficient criteria for being indexed to us.
We can see, in reductionistic terms, how the entities could converge on a uniform set of truth values. There is nothing non-reductionist about anything I have said. Reductionism does not force one answer to metaethics.
Provide evidence that ethics is a whole separate module, and not part of general reasoning ability.
Please explain why moral intuitions don’t work that way.
Please provide some foundations for something that aren’t unjustified by anything more foundational.
You can select one at random, obviously.
No, philosophers don’t regularly accuse each other of being incompetent… just of being wrong. There’s a difference.
You are inferring a lot from one example.
Nope.
I don’t understand, can you rephrase this?
The standards by which we judge the truth of mathematical claims are not just inside us. One object plus another object will continue to equal two objects whether or not there are any living beings to make that judgment. Math is not something we’ve created within ourselves, but something we’ve discovered and observed.
If our mathematical models ever stop being able to predict in advance the behavior of the universe, then we will have rather more reason to suspect that the math inside us is different from the math outside of us.
What evidence do we have that this is the case for morality?
My assertion is that, if we judge ethics as a rational system, innate values are among the axioms that the system is predicated on. You cannot prove the axioms of a system within that system, and an ethical system predicated on premises like “happiness is good” will not itself be able to prove the goodness of happiness.
While we could suppose that the axioms which our ethical systems are predicated on are objectively true, we have considerable reason to believe that we would have developed these axioms for adaptive reasons, even if there were no sense in which objective moral axioms exist, and we do not have evidence which suggests that objective, independently existing true moral axioms do exist.
People can be induced to strongly support opposing responses to the same moral dilemma, just by rephrasing it differently to trigger different heuristics. Our moral intuitions are incoherent.
I don’t think I understand this, can you rephrase it?
I do not recall any creditable attempts, which places me in a disadvantaged position with respect to locating them. You’re the one claiming that they’re there at all, that’s why I’m asking you to do it.
Philosophers don’t usually accuse each other of being incompetent in their publications, because it’s not conducive to getting other philosophers to regard their arguments dispassionately, and that sort of open accusation is generally frowned upon in academic circles whether one believes it or not. They do regularly accuse each other of being comprehensively wrong for their entire careers. In my personal conversations with philosophers (and I never considered myself to have really taken a class, or attended a lecture by a visitor, if I didn’t speak with the person teaching it on a personal basis to probe their thoughts beyond the curriculum) I observed a whole lot of frustration with philosophers who they think just don’t get their arguments. It’s unsurprising that people would tend to become so frustrated participating in a field that basically amounts to long-running arguments extended over decades or centuries. Imagine the conversation we’re having now going on for eighty years, and neither of us has changed our minds. If you didn’t find my arguments convincing, and I hadn’t budged in all that time, don’t you think you’d start to suspect that I was particularly thick?
I’m using an example illustrative of my experience.
Sounds to me like PrawnOfFate is saying that any sufficiently rational cognitive system will converge on a certain set of ethical goals as a consequence of its structure, i.e. that (human-style) ethics is a property that reliably emerges in anything capable of reason.
I’d say the existence of sociopathy among humans provides a pretty good counterargument to this (sociopaths can be pretty good at accomplishing their goals, so the pathology doesn’t seem to be indicative of a flawed rationality), but at least the argument doesn’t rely on counting fundamental particles of morality or something.
I would say so also, but PrawnOfFate has already argued that sociopaths are subject to additional egocentric bias relative to normal people and thereby less rational. It seems to me that he’s implicitly judging rationality by how well it leads to a particular body of ethics he already accepts, rather than how well it optimizes for potentially arbitrary values.
Well, I’m not a psychologist, but if someone asked me to name a pathology marked by unusual egocentric bias I’d point to NPD, not sociopathy.
That brings up some interesting questions concerning how we define rationality, though. Pathologies in psychology are defined in terms of interference with daily life, and the personality disorder spectrum in particular usually implies problems interacting with people or societies. That could imply either irreconcilable values or specific flaws in reasoning, but only the latter is irrational in the sense we usually use around here. Unfortunately, people are cognitively messy enough that the two are pretty hard to distinguish, particularly since so many human goals involve interaction with other people.
In any case, this might be a good time to taboo “rational”.
Since no claim has a probability of 1.0, I only need to argue that a clear majority of rational minds converge.
How do we judge claims about transfinite numbers?
Mathematics isn’t physics. Mathematicians prove theorems from axioms, not from experiments.
Not necessarily. Eg, for utilitarians, values are just facts that are plugged into the metaethics to get concrete actions.
Metaethical systems usually have axioms like “Maximising utility is good”.
I am not sure what you mean by “exist” here. Claims are objectively true if most rational minds converge on them. That doesn’t require Objective Truth to float about in space here.
Does that mean we can’t use moral intuitions at all, or that they must be used with caution?
Philosophers talk about intuitions, because that is the term for something foundational that seems true, but can’t be justified by anything more foundational. LessWrongians don’t like intuitions, but don’t seem to be able to explain how to manage without them.
Did you post any comments explaining to the professional philosophers where they had gone wrong?
I don’t see the problem. Philosophical competence is largely about understanding the problem.
Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we’ve discovered rather than produced. We judge claims about unobserved mathematical constructs like transfinites according to those systems.
But utility is a function of values. A paperclipper will produce utility according to different values than a human.
Why would most rational minds converge on values? Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.
It means we should be aware of what our intuitions are and what they’ve developed to be good for. Intuitions are evolved heuristics, not a priori truth generators.
It seems like you’re equating intuitions with axioms here. We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.
If I did, I don’t remember them. I may have, I may have felt someone else adequately addressed them, I may not have felt it was worth the bother.
It seems to me that you’re trying to foist onto me the effort of locating something which you were the one to testify was there in the first place.
And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to failure to understand the problems they’re dealing with.
In any case, I reject the notion that dismissing large contingents of philosophers as lacking in competence is a valuable piece of evidence with respect to crankishness, and if you want to convince me that I am taking a crankish attitude, you’ll need to offer some other evidence.
But claims about transfinites don’t correspond directly to any object. Maths is “spun off” from other facts, on your view. So, by analogy, moral realism could be “spun off” without needing any Form of the Good to correspond to goodness.
You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn’t care what values are, it just sums or averages them.
Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some value in common, because they are all rational.
a) they don’t have to converge on preferences, since things like utilitarianism are preference-neutral.
b) they already have to some extent, because they are rational.
I was talking about rational minds converging on the moral claims, not on values. Rational minds can converge on “maximise group utility” whilst what counts as utility varies considerably.
Axioms are formal statements; intuitions are gut feelings that are often used to justify axioms.
There is another sense of “intuition”, where someone feels that it’s going to rain tomorrow or something. Those are not the foundational kind.
So do they call for them to be fired?
Spun off from what, and how?
Speaking as a utilitarian, yes, utilitarianism does care about what values are. If I value paperclips, I assign utility to paperclips, if I don’t, I don’t.
Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?
So what if a paperclipper arrives at “maximize group utility,” and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn’t demand any overlap of end-goal with other utility maximizers.
But, as I’ve pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.
If our axioms are grounded in our intuitions, then entities which don’t share our intuitions will not share our axioms.
No, but neither do I, so I don’t see why that’s relevant.
Designating PrawnOfFate a probable troll or sockpuppet. Suggest terminating discussion.
Request accepted. I’m not sure if he’s being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it’s unlikely to be a productive use of my time.
What is your basis for the designation? I am not arguing with your suggestion (I was leaning in the same direction myself), I’m just genuinely curious. In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?
Combined behavior in other threads. Check the profile.
“Troll” is a somewhat fuzzy label. Sometimes when I am wanting to be precise or polite and avoid any hint of Fundamental Attribution Error I will replace it with the rather clumsy or verbose “person who is exhibiting a pattern of behaviour which should not be fed”. The difference between “Person who gets satisfaction from causing disruption” and “Person who is genuinely confused and is displaying an obnoxiously disruptive social attitude” is largely irrelevant (particularly when one has their Hansonian hat on).
If there was a word in popular use that meant “person likely to be disruptive and who should not be fed” that didn’t make any assumptions or implications of the intent of the accused then that word would be preferable.
I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.
Rationality is not an automatic process; it is a skill that has to be learnt and consciously applied. Individuals will only be rational if their values prompt them to be. And rationality itself implies valuing certain things (lack of bias, non-arbitrariness).
Utilitarians want to maximise the utility of their groups, not their own utility. They don’t have to believe the utility of others counts as utility for them; they just need to feed facts about group utility into an aggregation function. And, using the same facts and same function, different utilitarians will converge. That’s kind of the point.
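As a toy sketch of that convergence claim (the agents, the numbers, and the choice of summation as the aggregation function are all invented for illustration):

```python
# Toy sketch: different evaluators, same facts, same aggregation -> same verdict.
# Agents' utilities per outcome are made-up numbers.

def group_utility(outcome, agents):
    """One possible aggregation function: simple summation."""
    return sum(agent[outcome] for agent in agents)

agents = [
    {"make_paperclips": 10.0, "build_hospital": 0.0},  # a paperclipper
    {"make_paperclips": 0.0,  "build_hospital": 5.0},  # human A
    {"make_paperclips": 0.0,  "build_hospital": 7.0},  # human B
]

outcomes = ["make_paperclips", "build_hospital"]

# Whoever runs this calculation -- paperclipper or human -- gets the same ranking.
ranking = sorted(outcomes, key=lambda o: group_utility(o, agents), reverse=True)
print(ranking)  # ['build_hospital', 'make_paperclips']  (12.0 vs 10.0)
```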
Compared to what? Remember, I am talking about foundational intuitions, the kind at the bottom of the stack. The empirical method of locating the truth rests on the intuition that the senses reveal a real external world. Which I share. But what proves it? That’s the foundational issue.
The question of moral realism is AFAICT orthogonal to the Orthogonality Thesis.
A lot of people here would seem to disagree, since I keep hearing the objection that ethics is all about values, and values are nothing to do with rationality.
Could you make the connection to what I said more explicit please? Thanks!
“Values are nothing to do with rationality” = the Orthogonality Thesis, so it’s a step in the argument.
It feels to me like the Orthogonality Thesis is a fairly precise statement, and moral anti-realism is a harder to make precise but at least well understood statement, and “values are nothing to do with rationality” is something rather vague that could mean either of those things or something else.