Thanks for taking the time to conduct and then analyze this survey!
What surprised me:
Average IQ seemed insane to me. Thanks for dealing extensively with that objection.
Time online per week seems plausible from personal experience, but I didn’t expect the average to be so high.
The overconfidence data hurts, but as someone pointed out in the comments, it’s hard to ask a question which isn’t misunderstood.
What disappointed me:
Even I was disappointed by the correlations between P(significant man-made global warming) vs. e.g. taxation/feminism/etc. Most other correlations were between values, but this one was between one’s values and an empirical question. Truly Blue/Green. On the topic of politics in general, see below.
People, use spaced repetition! It’s been studied academically and shown to work brilliantly, and it’s really easy to incorporate into your daily life compared to most other LW techniques. So yes, I’m comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities.
And a comment at the end:
“We are doing terribly at avoiding Blue/Green politics, people.”
Given that LW explicitly tries to exclude politics from discussion (and for reasons I find compelling), what makes you expect differently?
Incorporating LW debiasing techniques into daily life will necessarily be significantly harder than just reading the Sequences, and even those have only been read by a relatively small proportion of posters...
To me it has always sounded right. I’m MENSA-level (at least according to the test the local MENSA association gave me) and LessWrong is the first forum I ever encountered where I’ve considered myself below-average—where I’ve found not just one or two but several people who can think faster and deeper than me.
Below average or simply not exceptional? I’m certainly not exceptional here but I don’t think I’m particularly below average. I suppose it depends on how you weight the average.
Average IQ seemed insane to me. Thanks for dealing extensively with that objection.
With only 500 people responding to the IQ question, it is entirely possible that this is simply a selection effect: only people with high IQs test themselves or report their scores, while lower-IQ people keep quiet.
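To get a feel for how large such a selection effect can be, here is a minimal sketch. The population parameters and the reporting rule are illustrative assumptions on my part, not survey data:

```python
import math
import random

random.seed(0)

# Hypothetical population: IQ ~ Normal(100, 15).
population = [random.gauss(100, 15) for _ in range(100_000)]

def reports(iq):
    """Toy assumption: higher-IQ people are more likely to have taken
    a test and to report the score (logistic curve centred at IQ 120)."""
    return random.random() < 1 / (1 + math.exp(-(iq - 120) / 10))

reported = [iq for iq in population if reports(iq)]

true_mean = sum(population) / len(population)
reported_mean = sum(reported) / len(reported)
print(f"true mean: {true_mean:.1f}  reported mean: {reported_mean:.1f}")
```

Under these toy assumptions the mean of the reported scores comes out well above the true population mean, even though nobody misreports their score.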
Even I was disappointed by the correlations between P(significant man-made global warming) vs. e.g. taxation/feminism/etc. Most other correlations were between values, but this one was between one’s values and an empirical question. Truly Blue/Green.
There’s nothing necessarily wrong with this. You are assuming that feminism is purely a matter of personal preference, incorrectly I feel. If you reduce feminism to simply asking “should women have the right to vote” then you should in fact find a correlation between that and “is there such a thing as global warming”, because the correct answer in each case is yes.
Not saying I am necessarily in favour of modern day feminism, but it does bother me that people simply assume that social issues are independent of fact. This sounds like “everyone is entitled to their opinion” nonsense to me.
What I find more surprising is that there is no correlation between IQ and political beliefs whatsoever. I suspect that this is simply because the significance level is too strict to find anything.
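To illustrate why a strict significance level can hide real but modest correlations, here is a rough sketch of the minimum detectable correlation, before and after a Bonferroni correction. The sample size and the number of tests are assumptions for illustration, not figures from the survey:

```python
from math import sqrt
from statistics import NormalDist

n = 1000      # assumed number of respondents
tests = 100   # assumed number of correlations examined

def critical_r(alpha, n):
    """Smallest |r| detectable at two-sided level alpha for sample size n,
    using the large-sample approximation z = r * sqrt(n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return z / sqrt(n)

plain = critical_r(0.05, n)
bonferroni = critical_r(0.05 / tests, n)
print(f"r needed at alpha=0.05:    {plain:.3f}")
print(f"r needed after Bonferroni: {bonferroni:.3f}")
```

So under these assumptions the corrected threshold nearly doubles the correlation needed to register as significant, which is one way a weak IQ/politics relationship could fail to show up.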
Given that LW explicitly tries to exclude politics from discussion (and for reasons I find compelling), what makes you expect differently?
I’ve heard GMOs described as the left’s equivalent of global warming—maybe there should be a question about GMOs on the next survey.
While we’re at it, there could also be questions about animal testing, alternative medicine, gun control, euthanasia, and marijuana legalization. (I’m not saying that the left is wrong about all of these.)
I object to GMOs, but not because of fears that they may be unnoticed health hazards. Rather, it’s because they are often used to apply DRM and patents to food, and applying DRM and patents to food has the same disadvantages as applying DRM and patents to computer software. Except it’s much worse, since 1) you can do without World of Warcraft, but you can’t do without food, and 2) traditional methods of producing food involve copying and organisms used for food normally copy themselves.
2) traditional methods of producing food involve copying and organisms used for food normally copy themselves
ISTR reading that farmers have preferred to buy seeds from specialized companies, rather than planting their own from the previous harvest, since decades before the first commercial GMO was introduced.
I object to GMOs, but not because of fears that they may be unnoticed health hazards. Rather, it’s because they are often used to apply DRM and patents to food
It seems that should make you object to certain aspects of the Western legal system.
Given your reasoning, I don’t understand why you object to GMOs but don’t object on the same grounds to, say, music and videos, which gave us the DMCA, etc.
I object to DRM and patents on entertainment as well. (You can’t actually patent music and videos, but software is subject to software patents and I do object to those.)
If you’re asking why I don’t object to entertainment as a class, it’s because of practical considerations—there is quite a bit of entertainment without DRM, small scale infringers are much harder to catch for entertainment, much entertainment is not patented, and while entertainment is copyrighted, it does not normally copy itself and copying is not a routine part of how one uses it in the same way that producing and saving seeds is of using seeds. Furthermore, pretty much all GMO organisms are produced by large companies who encourage DRM and patents. There are plenty of producers of entertainment who have no interest in such things, even if they do end up using DVDs with CSS.
Is it, though? I did a quick fact check on this, and found this article which seems to say it is more split down the middle (for as much as US politicians are representative, anyway). It also highlights political divides for other topics.
It’s a pity that some people here are so anti-politics (not entirely unjustified, but still). I think polling people here on issues which are traditionally right- or left-wing but which have clear-cut correct answers would make for quite a nice test of rationality.
Am I sure that some political questions have clear cut answers? Well, yes… of course. Just because someone points at a factual question and says “that’s political!” doesn’t magically cause that question to fall into a special subcategory of questions that can never be answered. That just seems really obvious to me.
It’s much harder to give examples that everyone here will agree on, of course, and which won’t cause another of those stupid block-downvoting sprees, but I can give it a try:
- My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what’s between their legs. I have heard similar claims from gender studies classes. That counts as obviously false, surely?
- A guy in college tried to convince me that literally any child could be raised to be Mozart. More generally, the whole “blank slate” notion where people claim that genes don’t matter at all. Can we all agree that this is false? Regardless of whether you see yourself as left or right or up or down?
- Women should be allowed to apply for the same jobs as men. Surely even people who think that women are less intelligent than men on average should agree with this? Even though in the past it was a hot-button issue?
- People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone. Is this contentious? It shouldn’t be.
Do you agree that the above list gives some examples of political questions that every rational person should nonetheless agree with?
Do you agree that the above list gives some examples of political questions that every rational person should nonetheless agree with?
No, I don’t. To explain why, let me point out that your list of four questions neatly divides into two halves.
Your first two questions are empirically testable questions about what reality is. As such they are answerable by the usual science-y means, and a rational person will have to accept the answers.
Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.
The question “should people be allowed to do in their bedroom whatever they want as long as it doesn’t harm [directly] anyone [else]?” (extra words added to address Vaniver’s point) can be split into two: “which states of the world would allowing people to do in their bedroom etc. result in?”, and “which states of the world are good?”
Now, it’s been claimed that most disagreements about policies are about the former, and that all neurologically healthy people would agree about the latter if they thought about it clearly enough.
First, I don’t think this claim is true. Second, I’m not sure what “neurologically healthy” means. I know a lot of people I would call NOT neurotypical. And, of course, labeling people mentally sick for disagreeing with the society’s prevailing mores was not rare in history.
all neurologically healthy people would agree about the latter if they thought about it clearly enough
This is what you are missing. The simple fact that someone disagrees does not mean they are mentally sick or have fundamentally different value systems. It could equally well mean that either they or the “prevailing social mores” are simply mistaken. People have been known to claim that 51 is a prime number, and not because they actually disagree about what makes a number prime, but just because they were confused at the time.
It’s not reasonable to take people’s claims that “by ‘should’ I mean that X maximises utility for everyone” or “by ‘should’ I mean that I want X” at face value, because people neither have access to nor actually use logical definitions of the everyday words they use; they “know it when they see it” instead.
No, I don’t think I’m missing this piece. The claim is very general: ALL “neurologically healthy people”.
People can certainly be mistaken about matters of fact. So what?
It’s not reasonable to take people’s claims that “by ‘should’ I mean that X maximises utility for everyone”
Of course not, the great majority of people are not utilitarians and have no interest in maximizing utility for everyone. In normal speech “should” doesn’t mean anything like that.
Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.
If “should” has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of “should” employed by Sophronius in the text. It would be more accurate to say that you can be very rational and still disapprove of homosexuality (as disapproval is an attitude, as opposed to a propositional statement).
If “should” has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of “should” employed by Sophronius
Maybe. But that’s a personal “should”, specific to a particular individual and not binding on anyone else.
Sophronius asserts that values (and so “should”s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.
But the original context was “we should”. Sophronius obviously intended the sentence to refer to everyone. I don’t see anything relative about his use of words.
Sophronius obviously intended the sentence to refer to everyone.
Correct, and that’s why I said
Sophronius asserts that values (and so “should”s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.
I’m struggling to figure out how to communicate the issue here.
If you agree that what Sophronius intended to say was “everyone should” why would you describe it as a personal “should”? (And what does “binding on someone” even mean, anyway?)
Well, to me it’s obvious that “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone.” was a logical proposition, either true or false. And whether it’s true or false has nothing to do with whether anyone else has the same terminal values as Sophronius. But you seem to disagree?
Well, to me it’s obvious that “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone.” was a logical proposition, either true or false.
Do you mean it would be true or false for everyone? At all times? In all cultures and situations? In the same way “Sky is blue” is true?
Yes. Logical propositions are factually either true or false. It doesn’t matter who is asking. In exactly the same way that “everyone p-should put pebbles into prime heaps” doesn’t care who’s asking, or indeed how “the sky is blue” doesn’t care who’s asking.
Well then, I disagree. Since I just did a whole circle of the mulberry bush with Sophronius I’m not inclined to do another round. Instead I’ll just state my position.
I think that statements which do not describe reality but instead speak of preferences, values, and “should”s are NOT “factually either true or false”. They cannot be unconditionally true or false at all. Instead, they can be true or false conditional on the specified value system, and if you specify a different value system, the true/false value may change. To rephrase it in a slightly different manner, value statements can be consistent or inconsistent with some value system, and they can also be instrumentally rational or not in pursuit of some goals (and whether they are rational or not is conditional on the particular goals).
To get specific, “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone” is true within some value system and false within some other value systems. Both kinds of value systems exist. I see no basis for declaring one kind of value systems “factually right” and another kind “factually wrong”.
As an example, consider the statement “The sum of a triangle’s inner angles is 180 degrees”. Is this true? In some geometries, yes; in others, no. This statement is not true unconditionally: to figure out whether it’s true in some specific case you have to specify a particular geometry. And in some real-life geometries it is true and in other real-life geometries it is false.
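A concrete version of the geometry example, for what it’s worth: Girard’s theorem says that on a sphere of radius \(R\), a triangle of area \(A\) has angle sum

\[
\alpha + \beta + \gamma = \pi + \frac{A}{R^2},
\]

so the sum exceeds 180 degrees by an amount proportional to the triangle’s area, while in hyperbolic geometry the sum falls short of 180 degrees by an analogous angular defect. Whether the sentence is true really does depend entirely on which geometry is specified.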
Well, I’m not trying to say that some values are factual and others are imaginary. But when someone makes a “should” statement (makes a moral assertion), “should” refers to a particular predicate determined by their actual value system, as your value system determines your language. Thus when people talk of “you should do X” they aren’t speaking of preferences or values, rather they are speaking of whatever it is their value system actually unfolds into.
(The fact that we all use the same word, “should” to describe what could be many different concepts is, I think, justified by the notion that we mostly share the same values, so we are in fact talking about the same thing, but that’s an empirical issue.)
As an example, consider the statement “The sum of a triangle’s inner angles is 180 degrees”. Is this true?
Hopefully this will help demonstrate my position. I would say that, when being fully rigorous, it is a type error to ask whether a sentence is true. Logical propositions have a truth value, but sentences are just strings of symbols. To turn “The sum of the triangle’s inner angles is 180 degrees” into a logical proposition you need to know what is meant by “sum”, “triangle”, “inner angles”, “180”, “degrees” and indeed “is”.
As an example, if the sentence was uttered by Bob, and what he meant by “triangle” was a triangle in euclidean space, and by “is” he meant “is always” (universally quantified), then what he said is factually (unconditionally) true. But if he uttered the same sentence, in a language where “triangle” means a triangle in a hyperbolic space, or in a general space, then what he said would be unconditionally false. There’s no contradiction here because in each case he said a different thing.
They can. But when a person utters a sentence, they generally intend to state the derelativized proposition indicated by the sentence in their language. When I say “P”, I don’t mean “‘P’ is a true sentence in all languages at all places”, I mean P(current context).
Which is why it’s useless to say “I have a different definition of ‘should’”, because the original speaker wasn’t talking about definitions, they were talking about whatever it is “should” actually refers to in their actual language.
(I actually thought of mentioning that the sky isn’t always blue in all situations, but decided not to.)
Well, if you should drink more because you’re dehydrated, then you’re right to say that not everyone is bound by that, but people in similar circumstances are (i.e. dehydrated, with no other reason not to drink). Or are you saying that there are ultimately personal shoulds?
‘Of course’ nothing, I find that answer totally shocking. Can you think of an example? Or can you explain how such shoulds are supposed to work?
So far as I understand it, for every ‘should’ there is some list of reasons why. If two people have the same lists of reasons, then whatever binds one binds them both. So there’s nothing personal about shoulds, except insofar as we rarely have all the same reasons to do or not do something.
Sure. Let’s say there is a particular physical place (say, a specific big boulder on the shore of a lake) where I, for some reason, feel unusually calm, serene, and happy. It probably triggers some childhood memories and associations. I like this place. I should spend more time there.
If two people have the same lists of reasons, then whatever binds one binds them both.
No two people are the same. Besides, the importance different people attach to the same reasons varies greatly.
And, of course, to bind another with your “should” requires you to know this other very very well. To the degree I would argue is unattainable.
I like this place. I should spend more time there.
So say this place also makes me feel calm, serene, and happy. It also triggers in me some childhood memories and associations. I like the place. I also have (like you) no reasons not to go there. Let’s say (however unlikely it might be) we have all the same reasons, and we weigh these reasons exactly the same. Nevertheless, it’s not the case that I should spend more time there. Have I just told you a coherent story?
And, of course, to bind another with your “should” requires you to know this other very very well. To the degree I would argue is unattainable.
So let’s say you’re very thirsty. Around you, there’s plenty of perfectly potable water. And let’s say I know you’re not trying to be thirsty for some reason, but that you’ve just come back from a run. I think I’m in a position to say that you should drink the water. I don’t need to know you very well to be sure of that. What am I getting wrong here?
That’s a rather crucial part. I am asserting not only that no two people will have the same reasons and weight them exactly the same, but also that you can’t tell whether a person other than you has the same reasons and weights them exactly the same.
You’re basically saying “let’s make an exact copy of you—would your personal “shoulds” apply to that exact copy?”
The answer is yes, but an exact copy of me does not exist and that’s why my personal shoulds don’t apply to other people.
I think I’m in a position to say that you should drink the water.
You can say, of course. But when I answer “no, I don’t think so”, is your “should” stronger than my “no”?
Ahh, okay, it looks like we are just misunderstanding one another. I originally asked you whether there are ultimately personal shoulds, and by this I meant that shoulds that are binding on me but not you for no reason other than you and I are numerically different people.
But it seems to me your answer to this is in fact ‘no’, there are no such ultimately personal shoulds. All shoulds bind everyone subject to the reasons backing them up, it’s just that those reasons rarely (if ever) coincide.
You can say, of course. But when I answer “no, I don’t think so”, is your “should” stronger than my “no”?
Yes. You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
whether there are ultimately personal shoulds, and by this I meant that shoulds that are binding on me but not you for no reason other than you and I are numerically different people.
What’s “numerically different”?
And what did you mean by “ultimately”, then? In reality all people are sufficiently different that my personal shoulds apply only to me and not necessarily to anyone else. The set of other-than-me people to which my personal should must apply is empty. Is that not “ultimate” enough?
Yes. You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
I beg to disagree. Given that you have no idea about reasons that I might have for not drinking, I don’t see why your “should” is correct. Speaking of which, how do you define “correct” in this situation, anyway? What makes you think that the end goals you imagine are actually the end goals that I am pursuing?
I just mean something like ‘there are two of them, rather than one’. So they can have all the same (non-relational) properties, but not be the same thing because there are two of them.
The set of other-than-me people to which my personal should must apply is empty.
Well, that’s an empirical claim, for which we’d need some empirical evidence. It’s certainly possible that my personal ‘should’ could bind you too, since it’s possible (however unlikely) that we could be subject to exactly the same reasons in exactly the same way.
This is an important point, because it means that shoulds bind all and every person subject to the reasons that back them up. It may be true that people are subject to very different sets of reasons, such that in effect ‘shoulds’ only generally apply to one person. I think this empirical claim is false, but that’s a bit beside the point.
Given that you have no idea about reasons that I might have for not drinking
It’s part of the hypothetical that I do know the relevant reasons and your aims: you’re thirsty, there’s plenty of water, and you’re not trying to stay thirsty. Those are all the reasons (maybe the reality is never this simple, though I think it often is...again, that’s an empirical question). Knowing those, my ‘you should drink’ is absolutely binding on you.
I don’t need to define ‘correct’. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink. That’s all I mean by correct: that it’s true to say ‘if X, Y, Z, then you should drink’.
Well, that’s an empirical claim, for which we’d need some empirical evidence.
You really want evidence that there are no exact copies of me walking around..?
It’s certainly possible that my personal ‘should’ could bind you too
No, I don’t think it is possible. At this point it is fairly clear that we are not exact copies of each other :-D
it means that shoulds bind all and every person subject to the reasons that back them up
Nope, I don’t think so. You keep on asserting, basically, that if you find a good set of reasons why I should do X and I cannot refute these reasons, I must do X. That is not true. I can easily tell you to go jump into the lake and not do X.
It’s part of the hypothetical that I do know the relevant reasons and your aims
And another crucial part—no, you can not know all of my relevant reasons and my aims. We are different people and you don’t have magical access to the machinations of my mind.
I don’t need to define ‘correct’. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink.
Yes, you do need to define “correct”. The reasons may or may not be sufficient—you don’t know.
It does seem we have several very basic disagreements.
You really want evidence that there are no exact copies of me walking around..?
I deny the premise on which this is necessary: I think most people share the reasons for most of what they do most of the time. For example, when my friend and I come in from a run, we share reasons for drinking water. The ‘should’ that binds me, binds him equally. I think this is by far the most common state of affairs, the great complexity and variety of human psychology notwithstanding. The empirical question is whether our reasons for acting are in general very complicated or not.
It’s certainly possible that my personal ‘should’ could bind you too
No, I don’t think it is possible.
I think you do, since I’m sure you think it’s possible that we are (in the relevant ways) identical. Improbable, to be sure. But possible.
I think I would describe it as you, being in similar situations, each formulate a personal “should” that happens to be pretty similar. But it’s his own “should” which binds him, not yours.
But I don’t suppose you would say this about answering a mathematical problem. If I conclude that six times three is eighteen, and you conclude similarly, isn’t it the case that we’ve done ‘the same problem’ and come to ‘the same answer’? Aren’t we each subject to the same reasons, in trying to solve the problem?
Or did each of us solve a personal math problem, and come to a personal answer that happens to be the same number?
Aren’t we each subject to the same reasons, in trying to solve the problem?
In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.
Same thing for testable statements about physical reality—disagreements (between rational people) can be solved by the usual scientific methods.
But preferences and values exist only inside minds and I’m asserting that each mind is unique. My preferences and values can be the same as yours but they don’t have to be. In contrast, the physical reality is the same for everyone.
Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/
In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.
I don’t see how that’s any different from most value judgements. All human beings have a basically common set of values, owing to our neurological and biological similarities. Granted, you probably can’t advise me on whether or not to go to grad school, or run for office, but you can advise me to wear my seat belt or drink water after a run. That doesn’t seem so different from math: math is also in our heads, it’s also a space of widespread agreement and some limited disagreement in the hard cases.
It may look like the Israelis and the Palestinians just don’t see eye to eye on practical matters, but remember how big the practical reasoning space is. Them truly not seeing eye to eye would be like the Palestinians demanding the end of settlements, and the Israelis demanding that Venus be bluer.
Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/
I don’t see why. There’s no reason to infer from the fact that a ‘should’ binds someone that you can force them to obey it.
Now, as to why it’s a problem if your reasons for acting aren’t sufficient to determine a ‘should’. Suppose you hold that A, and that if A then B. You conclude from this that B. I also hold that A, and that if A then B. But I don’t conclude that B. I say “Your conclusion doesn’t bind me.” B, I say, is ‘true for you’, but not ‘true for me’. I explain that reasoning is personal, and that just because you draw a conclusion doesn’t mean anyone else has to.
If I’m right, however, it doesn’t look like ‘A, if A then B’ is sufficient to conclude B for either of us, since B doesn’t necessarily follow from these two premises. Some further thing is needed. What could this be? It can’t be another premise (like, ‘If you believe that A and that if A then B, conclude that B’) because that just reproduces the problem. I’m not sure what you’d like to suggest here, but I worry that so long as, in general, reasons aren’t sufficient to determine practical conclusions (our ‘shoulds’) then nothing could be. Acting would be basically irrational, in that you could never have a sufficient reason for what you do.
All human beings have a basically common set of values
Nope. There is a common core and there is a lot of various non-core stuff. The non-core values can be wildly different.
but you can advise me to wear my seat belt or drink water after a run
We’re back to the same point: you can advise me, but if I say “no”, is your advice stronger than my “no”? You think it is, I think not.
I worry that so long as, in general, reasons aren’t sufficient to determine practical conclusions (our ‘shoulds’) then nothing could be.
The distinction between yourself and others is relevant here. You can easily determine whether a particular set of reasons is sufficient for you to act. However you can only guess whether the same set of reasons is sufficient for another to act. That’s why self-shoulds work perfectly fine, but other-shoulds have only a probability of working. Sometimes this probability is low, sometimes it’s high, but there’s no guarantee.
We’re back to the same point: you can advise me, but if I say “no”, is your advice stronger than my “no”? You think it is, I think not.
What do you mean by ‘stronger’? I think we all have free will: it’s impossible, metaphysically, for me to force you to do anything. You always have a choice. But that doesn’t mean I can’t point out your obligations or advantage with more persuasive or rational force than you can deny them. It may be that you’re so complicated an agent that I couldn’t get a grip on what reasons are relevant to you (again, empirical question), but if, hypothetically speaking, I do have as good a grip on your reasons as you do, and if it follows from the reasons to which you are subject that you should do X, and you think you should do ~X, then I’m right and you’re wrong and you should do X.
But I cannot, morally speaking, coerce or threaten you into doing X. I cannot, metaphysically speaking, force you to do X. If that is what you mean by ‘stronger’, then we agree.
My point is, you seem to be picking out a quantitative point: the degree of complexity is so great, that we cannot be subject to a common ‘should’. Maybe! But the evidence seems to me not to support that quantitative claim.
But aside from the quantitative claim, there’s a different, orthogonal, qualitative claim: if we are subject to the same reasons, we are subject to the same ‘should’. Setting aside the question of how complex our values and preferences are, do you agree with this claim? Remember, you might want to deny the antecedent of this conditional, but that doesn’t entail that the conditional is false.
In the same sense we talked about it in the {grand}parent post. You said:
You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
...to continue
the degree of complexity is so great that we cannot be subject to a common ‘should’.
We may. But there is no guarantee that we would.
if we are subject to the same reasons, we are subject to the same ‘should’. Setting aside the question of how complex our values and preferences are, do you agree with this claim?
We have to be careful here. I understand “reasons” as, more or less, networks of causes and consequences. “Reasons” tell you what you should do to achieve something. But they don’t tell you what to achieve—that’s the job of values and preferences—nor how to weigh the different sides when they conflict.
Given this, no: the same reasons don’t give rise to the same “should”s, because you need the same values and preferences as well.
So we have to figure out what a reason is. I took ‘reasons’ to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative. So, the reasoning behind an action might look something like this:
1) I want an apple.
2) The store sells apples, for a price I’m willing to pay.
3) It’s not too much trouble to get there.
4) I have no other reason not to go get some apples.
C) I should get some apples from the store.
My claim is just that (C) follows and is true of everyone for whom (1)-(4) are true. If (1)-(4) are true of you, but you reject (C), then you’re wrong to do so. Just as anyone would be wrong to accept ‘If P then Q’ and ‘P’ but reject the conclusion ‘Q’.
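The impersonal structure of the argument can be made concrete with a small sketch (the predicates and the agents are hypothetical illustrations, not anything from the thread): the conclusion is a pure function of the premises, so it holds for any agent of whom all four premises are true.

```python
def should_get_apples(agent):
    """Illustration: (C) follows for ANY agent for whom premises
    (1)-(4) all hold -- the 'should' binds impersonally."""
    premises = [
        agent["wants_apple"],               # (1) I want an apple.
        agent["store_sells_apples"],        # (2) The store sells them, affordably.
        agent["trip_is_easy"],              # (3) It's not too much trouble.
        agent["no_countervailing_reason"],  # (4) No other reason not to go.
    ]
    return all(premises)  # (C) holds iff every premise holds

alice = dict(wants_apple=True, store_sells_apples=True,
             trip_is_easy=True, no_countervailing_reason=True)
bob = dict(alice, no_countervailing_reason=False)  # Bob has a reason not to go

print(should_get_apples(alice))  # True  -> Alice should get apples
print(should_get_apples(bob))    # False -> premise (4) fails for Bob
```

Note that rejecting (C) while granting (1)-(4) would be like disputing the output of `all()` on four `True`s; the only room for disagreement is over whether the premises actually hold.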
I took ‘reasons’ to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative.
That’s circular reasoning: if you define reasons as “everything necessary and sufficient”, well, of course, if they don’t conclude in an imperative they are not sufficient and so are not proper reasons :-/
In your example, (4) is the weak spot. You’re making a remarkably wide and strong claim—one common in logical exercises but impossible to make in reality. There are always reasons pro and con, and it all depends on how you weigh them.
Consider any objection to your conclusion (C) (e.g. “Eh, I feel lazy now”)—any objection falls under (4), and so you can say that it doesn’t apply. And we’re back to the circle...
Not if I have independent reason to think that ‘everything necessary and sufficient to conclude an imperative’ is a reason, which I think I do.
In your example, (4) is the weak spot. You’re making a remarkably wide and strong claim—one common in logical exercises but impossible to make in reality.
To be absolutely clear: the above is an empirical claim. Something for which we need evidence on the table. I’m indifferent to this claim, and it has no bearing on my point.
My point is just this conditional: IF (1)-(4) are true of any individual, that individual cannot rationally reject (C).
You might object to the antecedent (on the grounds that (4) is not a claim we can make in practice), but that’s different from objecting to the conditional. If you don’t object to the conditional, then I don’t think we have any disagreement, except the empirical one. And on that score, I find your view very implausible, and neither of us is prepared to argue about it. So we can drop the empirical point.
That fails to include weighing those reasons against other considerations. If you’re thirsty, there’s plenty of water, and you’re not trying to stay thirsty, you “should drink water” only if the other considerations don’t mean that drinking water is a bad idea despite the fact that it would quench your thirst. And in order to know that someone’s other considerations don’t outweigh the benefit of drinking water, you need to know so much about the other person that that situation is pretty much never going to happen with any nontrivial “should”.
That fails to include weighing those reasons against other considerations.
By hypothesis, there are no other significant considerations. I think most of the time, people’s rational considerations are about as simple as my hypothetical makes them out to be. Lumifer thinks they’re generally much more complicated. That’s an empirical debate that we probably can’t settle.
But there’s also the question of whether or not ‘shoulds’ can be ultimately personal. Suppose two lotteries. The first is won when your name is drawn out of a hat. Only one name is drawn, and so there’s only one possible winner. That’s a ‘personal’ lottery. Now take an impersonal lottery, where you win if your chosen 20 digit number matches the one drawn by the lottery moderators. Supposing you win, it’s just because your number matched theirs. Anyone whose number matched theirs would win, but it’s very unlikely that there will be more than one winner (or even one).
I’m saying that, leaving the empirical question aside, ‘shoulds’ bind us in the manner of an impersonal lottery. If we have a certain set of reasons, then they bind us, and they equally bind everyone who has that set of reasons (or something equivalent).
Lumifer is saying (I think) that ‘shoulds’ bind us in the manner of the personal lottery. They apply to each of us personally, though it’s possible that by coincidence two different shoulds have the same content and so it might look like one should binds two people.
A consequence of Lumifer’s view, it seems to me, is that a given set of reasons (where reasons are things that can apply equally to many individuals) is never sufficient to determine how we should act. This seems to me to be a very serious problem for the view.
We seem to disagree on a fundamental level. I reject your notion of a strict fact-value distinction: I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity. Rationality indeed does not determine values, in the same way that rationality does not determine cheese, but questions about morality and cheese should both be answered in a rational and factual manner all the same.
If someone tells me that they grew up in a culture where they were taught that eating cheese is a sin, then I’m sorry to be so blunt about it (ok, not really) but their culture is stupid and wrong.
I strongly reject your notion of a strict fact-value distinction. I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity.
Interesting. That’s a rather basic and low-level disagreement.
So, let’s take a look at Alice and Bob. Alice says “I like the color green! We should paint all the buildings in town green!”. Bob says “I like the color blue! We should paint all the buildings in town blue!”. Are these statements meaningless? Or are they reducible to factual matters?
By the way, your position was quite popular historically. The Roman Catholic church was (and still is) a big proponent.
I cannot speak for Sophronius of course, but here is one possible answer. It may be that morality is “objective” in the sense that Eliezer tried to defend in the metaethics sequence. Roughly, when someone says X is good they mean that X is part of a loosely defined set of things that make humans flourish, and by virtue of the psychological unity of mankind we can be reasonably confident that this is a more-or-less well-defined set and that if humans were perfectly informed and rational they would end up agreeing about which things are in it, as the CEV proposal assumes.
Then we can confidently say that both Alice and Bob in your example are objectively mistaken (it is completely implausible that CEV is achieved by painting all buildings the color that Alice or Bob happens to like subjectively the most, as opposed to leaving the decision to the free market, or perhaps careful science-based urban planning done by a FAI). We can also confidently say that some real-world expressions of values (e.g. “Heretics should be burned at the stake”, which was popular a few hundred years ago) are false. Others are more debatable. In particular, the last two examples in Sophronius’ list are cases where I am reasonably confident that his answers are the correct ones, but not as close to 100%-epsilon probability as I am on the examples I gave above.
Roughly, when someone says X is good they mean that X is part of a loosely defined set of things that make humans flourish, and by virtue of the psychological unity of mankind we can be reasonably confident that this is a more-or-less well-defined set and that if humans were perfectly informed and rational they would end up agreeing about which things are in it
Well, I can’t speak for other people but when I say “X is good” I mean nothing of that sort. I am pretty sure the majority of people on this planet don’t think of “good” this way either.
Then we can confidently say
Nope, you can say. If your “we” includes me then no, “we” can’t say that.
By “Then we can confidently say” I just meant “Assuming we accept the above analysis of morality, then we can confidently say…”. I am not sure I accept it myself; I proposed it as a way one could believe that normative questions have objective answers without straying as far from the general LW worldview as being a Roman Catholic.
By the way, the metaethical analysis I outlined does not require that people think consciously of something like CEV whenever they use the word “good”. It is a proposed explication in the Carnapian sense of the folk concept of “good” in the same way that, say, VNM utility theory is an explication of “rational”.
So, let’s take a look at Alice and Bob. Alice says “I like the color green! We should paint all the buildings in town green!”. Bob says “I like the color blue! We should paint all the buildings in town blue!”. Are these statements meaningless? Or are they reducible to factual matters?
These statements are not meaningless. They are reducible to factual matters. “I like the colour blue” is a factual statement about Bob’s preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Bob’s brain). Presumably Bob is correct in his assertion, but if I know Bob well enough I might point out that he absolutely detests everything that is the colour blue even though he honestly believes he likes the colour blue. The statement would be false in that case.
Furthermore, the statement “We should paint all the buildings in town blue!” follows logically from his previous statement about his preferences regarding blueness. Certainly, the more people are found to prefer blueness over greenness, the more evidence this provides in favour of the claim “We should paint all the buildings in town blue!” which is itself reducible to “A large number of people including myself prefer for the buildings in this town to be blue, and I therefore favour painting them in this colour!”
Contrast the above with the statement “I like blue, therefore we should all have cheese”, which is also a should claim but which can be rejected as illogical. This should make it clear that should statements are not all equally valid, and that they are subject to logical rigour just like any other claim.
“I like the colour blue” is a factual statement about Bob’s preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Bob’s brain).
Let’s introduce Charlie.
“I think women should be barefoot and pregnant” is a factual statement about Charlie’s preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Charlie’s brain).
Furthermore, the statement “We should paint all the buildings in town blue!” follows logically from his previous statement about his preferences regarding blueness.
Furthermore, the statement “We should make sure women remain barefoot and pregnant” follows logically from Charlie’s previous statement about his preferences regarding women.
I would expect you to say that Charlie is factually wrong. In which way is he factually wrong and Bob isn’t?
Certainly, the more people are found to prefer blueness over greenness, the more evidence this provides in favour of the claim “We should paint all the buildings in town blue!”
The statement “We should paint all the buildings in town blue!” is not a claim in need of evidence. It is a command, an expression of what Bob thinks should happen. It has nothing to do with how many people think the same.
Assuming “should” is meant in a moral sense, we can say that “We should paint all the buildings in town blue!” is in fact a claim in need of evidence. Specifically, it says (to 2 decimal places) that we would all be better off / happier / flourish more if the buildings are painted blue. This is certainly true if it turns out the majority of the town really likes blue, so that they would be happier, but it does not entirely follow from Bob’s claim that he likes blue—if the rest of the town really hated blue, then it would be reasonable to say that their discomfort outweighed his happiness. In this case he would be factually incorrect to say “We should paint all the buildings in town blue!”.
In contrast, you can treat “We should make sure women remain barefoot and pregnant” as a claim in need of evidence, and in this case we can establish it as false. Most obviously because the proposed situation would not be very good for women, and we shouldn’t do something that harms half the human race unnecessarily.
Not at all and I don’t see why would you assume a specific morality.
Bob says “We should paint all the buildings in town blue!” to mean that it would make him happier and he doesn’t care at all about what other people around think about the idea.
Bob is not a utilitarian :-)
you can treat “We should make sure women remain barefoot and pregnant” as a claim in need of evidence
Exactly the same thing—Charlie is not a utilitarian either. He thinks he will be better off in the world where women are barefoot and pregnant.
But he says “We should” not “I want” because there is the implication that I should also paint the buildings blue. But if the only reason I should do so is because he wants me to, it raises the question of why I should do what he wants. And if he answers “You should do what I want because it’s what I want”, it’s a tautology.
Putin has a way of adding his wants to my wants, through fear, bribes, or other incentives. But then the direct cause of my actions would be the fear/bribe/etc, not the simple fact that he wants it.
Presumably, Bob doesn’t have a way of making me care about what he wants (beyond the extent to which I care about what a generic stranger wants). If he were to pay me, that would be different, but he can’t make me care simply because that’s his preference. When he says “We should paint the buildings blue” he’s saying “I want the buildings painted blue” and “You want the buildings painted blue”, but if I don’t want the buildings painted blue, he’s wrong.
Presumably, Bob doesn’t have a way of making me care about what he wants
Why not? Many of the interactions in a human society are precisely ways of making others care about what someone wants.
In any case, the original issue was whether Bob’s preference for blue could be described as “correct” or “wrong”. How exactly Bob manages to get what he wants is neither here nor there.
he’s saying … “You want the buildings painted blue”
The original statement was “I like the color blue! We should paint all the buildings in town blue!” His preference for blue can be neither right nor wrong, but the second sentence is something that can be “correct” or “wrong”.
I wonder if that someone will make the logical step to insisting that moral egoists should be reeducated to make them change to a “valid” moral position :-/
In contrast, you can treat “We should make sure women remain barefoot and pregnant” as a claim in need of evidence, and in this case we can establish it as false. Most obviously because the proposed situation would not be very good for women
That’s just looking at one of the direct consequences, accepting for the sake of argument that most women would prefer not to be “barefoot and pregnant”. The problem is that, for these kinds of major social changes, the direct effects tend to be dominated by indirect effects and your argument makes no attempt to analyze the indirect effects.
Technically you are correct, so you can read my above argument as figuratively “accurate to one decimal place”. The important thing is that there’s nothing mysterious going on here in a linguistic or metaethical sense.
I partly agree, but a tradition that developed under certain conditions isn’t necessarily optimal under different conditions (e.g. much better technology and medicine, less need for manual labour, fewer stupid people (at least for now), etc.).
Otherwise, we’d be even better off just executing our evolved adaptations, which had even more time to develop.
accepting for the sake of argument that most women would prefer not to be “barefoot and pregnant”.
Depends on the context :-D In China a few centuries ago a woman quite reasonably might prefer to be barefoot (as opposed to have her feet tightly bound to disfigure them) and pregnant (as opposed to barren which made her socially worthless).
“I think women should be barefoot and pregnant” is a factual statement about Charlie’s preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Charlie’s brain).
Furthermore, the statement “We should make sure women remain barefoot and pregnant” follows logically from Charlie’s previous statement about his preferences regarding women.
I would expect you to say that Charlie is factually wrong. In which way is he factually wrong and Bob isn’t?
Charlie is, presumably, factually correct in that he thinks that he holds that view. However, while preferences regarding colour are well established, I am sceptical regarding the claim that this is an actual terminal preference that Charlie holds. It is possible that he finds pregnant barefoot women attractive, in which case his statement gives valid information regarding his preferences which might be taken into account by others: in this case it is meaningful. Alternatively, if he were raised to think that this is a belief one ought to hold, then the statement is merely signalling politics and is therefore of an entirely different nature.
“I like blue and want the town to be painted blue” gives factual info regarding the universe. “Women ought to be pregnant because my church says so!” does not have the primary goal of providing info, it has the goal of pushing politics.
Imagine a person holding a gun to your head and saying “You should give me your money”. Regardless of his use of the word “should”, he is making an implicit logical argument:
1) Giving me your money reduces your chances of getting shot by me
2) You presumably do not want to get shot
3) Therefore, you should give me your money
If you respond to the man by saying that morality is relative, you are rather missing the point.
The statement “We should paint all the buildings in town blue!” is not a claim in need of evidence. It is a command, an expression of what Bob thinks should happen. It has nothing to do with how many people think the same.
I think you are missing the subtle hidden meanings of everyday discourse. Imagine Bob saying that the town should be painted blue. Then, someone else comes with arguments for why the town should not be painted Blue. Bob eventually agrees. “You are right”, he says, “that was a dumb suggestion”. The fact that exchanges like this happen all the time shows that Bob’s statement is not just a meaningless expression, but rather a proposal relying on implicit arguments and claims. Specifically, it relies on enough people in the village sharing his preference for blue houses that the notion will be taken seriously. If Bob did not think this to be the case, he probably would not have said what he did.
I am sceptical regarding the claim that [Charlie’s preference re: gender roles] is an actual terminal preference that Charlie holds. It is possible that he finds pregnant barefoot women attractive [...] Alternatively, if he were raised to think that this is a belief one ought to hold then the statement is merely signalling politics and is therefore of an entirely different nature.
Okay, yeah, so belief in belief is a thing. We can profess opinions that we’ve been taught are virtuous to hold without deeply integrating them into our worldview; and that’s probably increasingly common these days as traditional belief systems clank their way into some sort of partial conformity with mainstream secular ethics. But at the same time, we should not automatically assume that anyone professing traditional values—or for that matter unusual nontraditional ones—is doing so out of self-interest or a failure to integrate their ethics.
Setting aside the issues with “terminal value” in a human context, it may well be that post-Enlightenment secular ethics are closer in some absolute sense to a human optimal, and that a single optimal exists. I’m even willing to say that there’s evidence for that in the form of changing rates of violent crime, etc., although I’m sure the reactionaries in the audience will be quick to remind me of the technological and demographic factors with their fingers on the scale. But I don’t think we can claim to have strong evidence for this, in view of the variety of ethical systems that have come before us and the generally poor empirical grounding of ethical philosophy.
Until we do have that sort of evidence, I view the normative component of our ethics as fallible, and certainly not a good litmus test for general rationality.
Okay, yeah, so belief in belief is a thing. We can profess opinions that we’ve been taught are virtuous to hold without deeply integrating them into our worldview; and that’s probably increasingly common these days as traditional belief systems clank their way into some sort of partial conformity with mainstream secular ethics. But at the same time, we should not automatically assume that anyone professing traditional values—or for that matter unusual nontraditional ones—is doing so out of self-interest or a failure to integrate their ethics.
On the contrary, I think it’s quite reasonable to assume that somebody who bases their morality on religious background has not integrated these preferences and is simply confused. My objection here is mainly in case somebody brings up a more extreme example. In these ethical debates, somebody always (me this time, I guess) brings up the example of Islamic sub-groups who throw acid in the faces of their daughters. Somebody always ends up claiming that “well that’s their culture, you know, you can’t criticize that. Who are you to say that they are wrong to do so?”. In that case, my reply would be that those people do not actually have a preference for disfigured daughters, they merely hold the belief that this is right as a result of their religion. This can be seen from the fact that the only people who do this hold more or less the same set of religious beliefs. And given that the only ones who hold that ‘preference’ do so as a result of a belief which is factually false, I think it’s again reasonable to say: No, I do not respect their beliefs and their culture is wrong and stupid.
Setting aside the issues with “terminal value” in a human context, it may well be that post-Enlightenment secular ethics are closer in some absolute sense to a human optimal, and that a single optimal exists.
The point is not so much whether there is one optimum, but rather that some cultures are better than others and that progress is in fact possible. If you agree with that, we have already closed most of the inferential distance between us. :)
Even if people don’t have fully integrated beliefs in destructive policies, their beliefs can be integrated enough to lead to destructive behavior.
The Muslims who throw acid in their daughters’ faces may not have an absolute preference for disfigured daughters, but they may prefer disfigured daughters over being attacked by their neighbors for permitting their daughters more freedom than is locally acceptable—or prefer to not be attacked by the imagined opinions (of other Muslims and/or of Allah) which they’re carrying in their minds.
Also, even though it may not be a terminal value, I’d say there are plenty of people who take pleasure in hurting people, and more who take pleasure in seeing other people hurt.
Somebody always ends up claiming that “well that’s their culture, you know, you can’t criticize that. Who are you to say that they are wrong to do so?” [...] The point is not so much whether there is one optimum, but rather that some cultures are better than others and that progress is in fact possible.
There’s some subtlety here. I believe that ethical propositions are ultimately reducible to physical facts (involving idealized preference satisfaction, although I don’t think it’d be productive to dive into the metaethical rabbit hole here), and that cultures’ moral systems can in principle be evaluated in those terms. So no, culture isn’t a get-out-of-jail-free card. But that works both ways, and I think it’s very likely that many of the products of modern secular ethics are as firmly tied to the culture they come from as would be, say, an injunction to stone people who wear robes woven from two fibers. We don’t magically divorce ourselves from cultural influence when we stop paying attention to the alleged pronouncements of the big beardy dude in the sky. For these reasons I try to be cautious about—though I wouldn’t go so far as to say “skeptical of”—claims of ethical progress in any particular domain.
The other fork of this is stability of preference across individuals. I know I’ve been beating this drum pretty hard, but preference is complicated; among other things, preferences are nodes in a deeply nested system that includes a number of cultural feedback loops. We don’t have any general way of looking at a preference and saying whether or not it’s “true”. We do have some good heuristics—if a particular preference appears only in adherents of a certain religion, and their justification for it is “the Triple Goddess revealed it to us”, it’s probably fairly shallow—but they’re nowhere near good enough to evaluate every ethical proposition, especially if it’s close to something generally thought of as a cultural universal.
Islamic sub-groups who throw acid in the faces of their daughters [...] the only people who do this hold more or less the same set of religious beliefs.
The Wikipedia page on acid throwing describes it as endemic to a number of African and Central and South Asian countries, along with a few outside those regions, with religious cultures ranging from Islam through Hinduism and Buddhism. You may be referring to some subset of acid attacks (the word “daughter” doesn’t appear in the article), but if there is one, I can’t see it from here.
Fair enough. I largely agree with your analysis: I agree that preferences are complicated, and I would even go as far as to say that they change a little every time we think about them. That does make things tricky for those who want to build a utopia for all mankind! However, in everyday life I think objections on such an abstract level aren’t so important. The important thing is that we can agree on the object level, e.g. sex is not actually sinful, regardless of how many people believe it is. Saying that sex is sinful is perhaps not factually wrong, but rather it betrays a kind of fundamental confusion regarding the way reality works that puts it in the ‘not even wrong’ category. The fact that it’s so hard for people to be logical about their moral beliefs is actually precisely why I think it’s a good litmus test of rationality/clear thinking: if it were easy to get it right, it wouldn’t be much of a test.
The Wikipedia page on acid throwing describes it as endemic to a number of African and Central and South Asian countries, along with a few outside those regions, with religious cultures ranging from Islam through Hinduism and Buddhism.
Looking at that page I am still getting the impression that it’s primarily Islamic cultures that do this, but I’ll agree that calling it exclusively Islamic was wrong. Thanks for the correction :)
I am sceptical regarding the claim that this is an actual terminal preference that Charlie holds
Given that you know absolutely nothing about Charlie, a player in a hypothetical scenario, I find your scepticism entirely unwarranted. Fighting the hypothetical won’t get you very far.
So, is Charlie factually wrong? On the basis of what would you determine that Charlie’s belief is wrong and Bob’s isn’t?
Imagine a person holding a gun to your head and saying “You should give me your money”. … If you respond to the man by saying that morality is relative, you are rather missing the point.
Why would I respond like that? What does the claim that morality is relative have to do with threats of bodily harm?
I think you are missing the subtle hidden meanings of everyday discourse.
In this context I don’t care about the subtle hidden meanings. People who believe they know the Truth and have access to the Sole Factually Correct Set of Values tend to just kill others who disagree. Or at the very least marginalize them and make them third-class citizens. All in the name of the Glorious Future, of course.
Well, given that Charlie indeed genuinely holds that preference, then no he is not wrong to hold that preference. I don’t even know what it would mean for a preference to be wrong. Rather, his preferences might conflict with preferences of others, who might object to this state of reality by calling it “wrong”, which seems like the mind-projection fallacy to me. There is nothing mysterious about this.
Similarly, the person in the original example of mine is not wrong to think men kissing each other is icky, but he IS wrong to conclude that there is therefore some universal moral rule that men kissing each other is bad. Again, just because rationality does not determine preferences, does not mean that logic and reason do not apply to morality!
In this context I don’t care about the subtle hidden meanings. People who believe they know the Truth and have access to the Sole Factually Correct Set of Values tend to just kill others who disagree. Or at the very least marginalize them and make them third-class citizens. All in the name of the Glorious Future, of course.
I believe you have pegged me quite wrongly, sir! I only care about truth, not Truth. And yes, I do have access to some truths, as of course do you. Saying that logic and reason apply to morality and that therefore all moral claims are not equally valid (they can be factually wrong or entirely nonsensical) is quite a far cry from ushering in the Third Reich. The article on Less Wrong regarding the proper use of doubt seems pertinent here.
Well, given that Charlie indeed genuinely holds that preference, then no he is not wrong to hold that preference.
I am confused. Did I misunderstand you or did you change your mind?
Earlier you said that “should” kind of questions have single correct answers (which means that other answers are wrong). A “preference” is more or less the same thing as a “value” in this context, and you staked out a strong position:
I reject your notion of a strict fact-value distinction: I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity. … but questions about morality … should … be answered in a rational and factual manner all the same.
Since statements of facts can be correct or wrong and you said there is no “fact-value distinction”, then values (and preferences) can be correct or wrong as well. However in the parent post you say
I don’t even know what it would mean for a preference to be wrong.
If you have a coherent position in all this, I don’t see it.
I think you misunderstood me. Of course I don’t mean that the terms “facts” and “values” represent the same thing. Saying that a preference itself is wrong is nonsense in the same way that claiming that a piece of cheese is wrong is nonsensical. It’s a category error. When I say I reject a strict fact-value dichotomy I mean that I reject the notion that statements regarding values should somehow be treated differently from statements regarding facts, in the same way that I reject the notion of faith inhabiting a separate magisterium from science (i.e. special pleading). So my position is that when someone makes a moral claim such as “don’t murder”, they had better be able to reduce that to factual statements about reality or else they are talking nonsense.
For example, “sex is sinful!” usually reduces to “I think my god doesn’t like sex”, which is nonsense because there is no such thing. On the other hand, if someone says “Stealing is bad!”, that can be reduced to the claim that allowing theft is harmful to society (in a number of observable ways), which I would agree with. As such I am perfectly comfortable labelling some moral claims as valid and some as nonsense.
Saying that a preference itself is wrong is nonsense in the same way that claiming that a piece of cheese is wrong is nonsensical. It’s a category error.
is compatible with this sentence
I reject the notion that statements regarding values should somehow be treated differently from statements regarding facts
I am distinguishing between X and statements regarding X. The statement “Cheese is wrong” is nonsensical. The statement “it’s nonsensical to say cheese is wrong” is not nonsensical. Values and facts are not the same, but statements regarding values and facts should be treated the same way.
Similarly: Faith and Science are not the same thing. Nonetheless, I reject the notion that claims based on faith should be treated any differently from scientific claims.
Similarly: Faith and Science are not the same thing. Nonetheless, I reject the notion that claims based on faith should be treated any differently from scientific claims.
Do you also reject the notion that claims about mathematics and science should be treated differently?
In the general sense that all claims must abide by the usual requirements of validity and soundness of logic, sure.
In fact, you might say that mathematics is really just a very pure form of logic, while science deals with more murky, more complicated matters. But the essential principle is the same: You better make sure that the output follows logically from the input, or else you’re not doing it right.
My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what’s between their legs.
I think it’s more likely he was misusing the word “literally”/wearing belief as attire (in technical terms, bullshitting) than that he actually believed it. After all, I guess he could tell boys and girls apart without looking between their legs, couldn’t he?
People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone. Is this contentious? It shouldn’t be.
But you can always find harm if you allow for feelings of disgust, or take into account competition in sexual markets (i.e. if having sex with X is a substitute for having sex with Y then Y might be harmed if someone is allowed to have sex with X.)
Ok, that’s a fair enough point. Sure, feelings do matter. However, I generally distinguish between genuine terminal preferences and mere surface emotions. The reason for this is that often it is easier/better to change your feelings than for other people to change their behaviour. For example, if I strongly dislike the name James Miller, you probably won’t change your name to take my feelings into account.
(At the risk of saying something political: This is the same reason I don’t like political correctness very much. I feel that it allows people to frame political discourse purely by being offended.)
-People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone. Is this contentious? It shouldn’t be.
The standard reply to this is that many people hurt themselves by their choices, and that justifies intervention. (Even if we hastily add an “else” after “anyone,” note that hurting yourself hurts anyone who cares about you, and thus the set of acts which harm no one is potentially empty.)
-My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what’s between their legs. I have heard similar claims from gender studies classes. That counts as obviously false, surely?
It’s wrong on a biological level. From my physiology lecture: women blink twice as much as men. They have less water in their bodies.
-People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone. Is this contentious? It shouldn’t be.
So you are claiming either: “Children are not people” or “Pedophilia should be legal”. I don’t think either of those claims has societal approval, let alone is a clear-cut issue.
But even if you switch the statement to the standard “Consenting adults should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone”, the phrases “consenting” (can someone with >1.0 per mille blood alcohol consent?) and “harm” (emotional harm exists, and not getting tested for STDs before having unprotected sex has the potential to harm) are open to debate.
-A guy in college tried to convince me that literally any child could be raised to be Mozart. More generally, the whole “blank slate” notion where people claim that genes don’t matter at all.
The maximal effect of a strong cognitive intervention might very well bring the average person to Mozart levels. We know relatively little about strong interventions to improve human mental performance.
But genes do matter.
-Women should be allowed to apply for the same jobs as men. Surely even people who think that women are less intelligent than men on average should agree with this?
It depends on the roles. If a movie producer casts actors for a specific role, gender usually matters a great deal.
A bit more controversial but I think there are cases where it’s useful for men to come together in an environment where they don’t have to signal stuff to females.
So you are claiming either: “Children are not people” or “Pedophilia should be legal”. I don’t think either of those claims has societal approval, let alone is a clear-cut issue.
I’d expect them to assert that paedophilia does harm. That’s the obvious resolution.
I’d expect them to assert that paedophilia does harm. That’s the obvious resolution.
Courts are not supposed to investigate whether the child is emotionally harmed by the experience, but whether he or she is under a certain age threshold.
You could certainly imagine a legal system where psychologists are always asked whether a given child is harmed by having sex, instead of a legal system that makes the decision through an age criterion.
I think a more reasonable argument for the age boundary isn’t that every child gets harmed but that most get harmed and that having a law that forbids that behavior is preventing a lot of children from getting harmed.
I don’t think you are a bad person for arguing that we should have a system that focuses on the amount of harm done instead of on an arbitrary age boundary, but that’s not the system we have, nor the one backed by societal consensus.
We also don’t put anybody in prison for having sex with a 19-year-old, breaking her heart, and watching as she commits suicide. We would judge a case like that as a tragedy, but we wouldn’t legally charge the person responsible with anything.
The concept of consent is pretty important for our present system. Even in cases where no harm is done we take a breach of consent seriously.
Actually I’m under the impression that the ‘standard’ resolution is not about the “harm” part but about the “want” part.
I think your impression is mistaken.
it’s assumed that people below a certain age can’t want sex, to the point that said age is called the age of consent
Nope. It is assumed that people below a certain age cannot give informed consent. In other words, they are assumed to be not capable of good decisions and to be not responsible for the consequences. What they want is irrelevant. If you’re below the appropriate age of consent, you cannot sign a valid contract, for example.
Below the age of consent you basically lack the legal capacity to agree to something.
So you are claiming either: “Children are not people” or “Pedophilia should be legal”. I don’t think either of those claims has societal approval, let alone is a clear-cut issue.
Well, I suppose Sophronius could argue that pedophilia should be legal, after all many things (especially related to sex) that were once socially unacceptable are now considered normal.
I suppose Sophronius could argue that pedophilia should be legal
Even if he thinks that it should be legal, it’s no position where it’s likely that everyone will agree. Sophronius wanted to find examples where everyone can agree.
Really? Given his history, I think it’s pretty clear that he’s not the kind of person who’s out to argue that legalizing pedophilia is a clear-cut issue.
He also said something about wanting to avoid the kind of controversy that causes downvoting.
In all of these cases, the people breaking with the conclusion you presumably believe to be obvious often do so because they believe the existing research to be hopelessly corrupt. This is of course a rather extraordinary statement, and I’m pretty sure they’d be wrong about it (that is, as sure as I can be with a casual knowledge of each field and a decent grasp of statistics), but bad science isn’t exactly unheard of. Given the right set of priors, I can see a rational person holding each of these opinions at least for a time.
In the latter two, they might additionally have different standards for “should” than you’re used to.
I’m not sure what you are trying to convince me of here. That people who disagree have reasons for disagreeing? Well of course they do, it’s not like they disagree out of spite. The fact that they are right in their minds does not mean that they are in fact right.
And yes, they might have a different definition for should. Doesn’t matter. If you talk to someone who believes that men kissing each other is “just plain wrong”, you’ll inevitably find that they are confused, illogical and inconsistent about their beliefs and are irrational in general. Do you think that just because a statement involves the word “should”, you can’t say that they are wrong?
The question I was trying to answer wasn’t whether they were right, it was whether a rational actor could hold those opinions. That has a lot less to do with factual accuracy and a lot more to do with internal consistency.
As to the correctness of normative claims—well, that’s a fairly subtle question. Deontological claims are often entangled with factual ones (e.g. the existence-of-God thing), so that’s at least one point of grounding, but even from a consequential perspective you need an optimization objective. Rational actors may disagree on exactly what that objective is, and reasonable-sounding objectives often lead to seriously counterintuitive prescriptions in some cases.
The question I was trying to answer wasn’t whether they were right, it was whether a rational actor could hold those opinions. That has a lot less to do with factual accuracy and a lot more to do with internal consistency.
Oh, right, I see what you mean. Sure, people can disagree with each other without either being irrational: All that takes is for them to have different information. For example, one can rationally believe the earth is flat, depending on which time and place one grew up in.
That does not change the fact that these questions have a correct answer though, and it should be pretty clear which the correct answers are in the above examples, even though you can never be 100% certain of course. The point remains that just because a question is political does not mean that all answers are equally valid. False equivalence and all that.
There is a question about it. It’s the existential threat that’s most feared among LessWrongers. Bioengineered pandemics are a threat due to gene-manipulated organisms.
If that’s not what you want to know, how would you word your question?
I took “bioengineered” to imply ‘deliberately’ and “pandemic” to imply ‘contagious’, and in any event fear of > 90% of humans dying by 2100 is far from the only possible reason to oppose GMOs.
any event fear of > 90% of humans dying by 2100 is far from the only possible reason to oppose GMOs.
I didn’t advocate that it’s the only reason. That’s why I asked for a more precise question.
I took “bioengineered” to imply ‘deliberately’ and “pandemic” to imply ‘contagious’,
If the tools you need to genetically manipulate organisms are widely available, it’s much easier to deliberately produce a pandemic.
It’s possible to make bacteria immune to antibiotics just by exposing them to antibiotics, without manipulating their genes directly. On the other hand, I think people fear bioengineered pandemics because they expect stronger capabilities for manipulating organisms in the future.
“Time online per week seems plausible from personal experience, but I didn’t expect the average to be so high.”
I personally spend an average of 50 hours a week online.
That’s because, by profession, I am a web-developer.
The percentage of LessWrong members in IT is clearly higher than that of the average population.
I postulate that the higher number of other IT geeks (who, like me, are also likely spending high numbers of hours online per week) is pushing up the average to a level that seems, to you, to be surprisingly high.
“The overconfidence data hurts, but as someone pointed out in the comments, it’s hard to ask a question which isn’t misunderstood.”
I interpreted this poor level of calibration more to the fact that it’s easier to read about what you should be doing than to actually go and practice the skill and get better at it.
People, use spaced repetition! It’s been studied academically and been shown to work brilliantly; it’s really easy to incorporate in your daily life in comparison to most other LW material etc… Well, I’m comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities
I’m one of the people who have never used spaced repetition, though I’ve heard of it. I don’t doubt it works, but what do you actually need to remember nowadays? I’d probably use it if I was learning a new language (which I don’t really plan to do anytime soon)… What other skills work nicely with spaced repetition?
I just don’t feel the need to remember things when I have google / wikipedia on my phone.
Isn’t there anything you already know but wouldn’t like to forget? SRS is for preserving the memories you already have, not necessarily for learning new stuff. There are probably a lot of things that wouldn’t even cross your mind to google if they were erased by time. Googling could also waste more time than storing the memory if you have to do it often enough (roughly 5 minutes over your lifetime per fact).
What other skills work nicely with spaced repetition?
In my experience anything you can write into brief flashcards. Some simple facts can work as handles for broader concepts once you’ve learned them. You could even record triggers for episodic memories that are important to you.
Isn’t there anything you already know but wouldn’t like to forget?
Yeah, that’s pretty much the problem. Not really. I.e. there is stuff I know that would be inconvenient to forget, because I use this knowledge every day. But since I already use it every day, SR seems unnecessary.
Things I don’t use every day are not essential—the cost of looking them up is minuscule since it happens rarely.
I suppose a plausible use case would be birth dates of family members, if I didn’t have google calendar to remind me when needed.
Edit: another use case that comes to mind would be names. I’m pretty bad with names (though I’ve recently begun to suspect that probably I’m as bad with remembering names as anyone else, I just fail to pay attention when people introduce themselves). But asking to take someone’s picture ‘so that I can put it on a flashcard’ seems awkward. Facebook to the rescue, I guess?
(though I don’t really meet that many people, so again—possibly not worth the effort in maintaining such a system)
I don’t know what you work on, but many fields include bodies of loosely connected facts that you could in principle look up, but which you’d be much more efficient if you just memorized. In programming this might mean functions in a particular library that you’re working with (the C++ STL, for example). In chemistry, it might be organic reactions. The signs of medical conditions might be another example, or identities related to a particular branch of mathematics.
SRS would be well suited to maintaining any of these bodies of knowledge.
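For anyone curious what spaced repetition amounts to under the hood, here is a minimal sketch of an SM-2-style interval scheduler, the family of algorithms behind tools like Anki and SuperMemo. The function name and the simplifications (collapsing the first reviews, a fixed ease floor) are mine, not any tool’s actual implementation:

```python
def next_interval(interval_days, ease, quality):
    """One review step of a simplified SM-2-style scheduler.

    quality: self-graded recall from 0 (forgot) to 5 (perfect).
    Returns the new (interval_days, ease) pair.
    """
    if quality < 3:
        # Lapse: show the card again tomorrow and penalize the ease factor.
        return 1, max(1.3, ease - 0.2)
    # Successful recall: nudge the ease factor up or down based on quality.
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        return 6, new_ease  # second successful review jumps to roughly a week
    return round(interval_days * new_ease), new_ease
```

Answering well makes the review gaps grow multiplicatively (1 day, 6 days, then a few weeks, and so on), which is why maintaining even a large deck costs only minutes a day.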
In programming this might mean functions in a particular library that you’re working with (the C++ STL, for example)
Right. I guess I somewhat do ‘spaced repetition’ here, just by the fact that every time I interact with a particular library I’m reminded of its function. But that is incidental—I don’t really care about remembering libraries that I don’t use, and those that I use regularly I don’t need SR to maintain.
I suppose medical conditions looks more plausible as a use case—you really need to remember a large set of facts, any of which is actually used very rarely. But that still doesn’t seem useful to me personally—I can think of no dataset that’d be worth the effort.
I guess I should just assume I’m an outlier there, and simply keep SR in mind in case I ever find myself needing it.
I’ve used SRS to learn programming theory that I otherwise had trouble keeping straight in my head. I’ve made cards for design patterns, levels of database normalization, fiddly elements of C++ referencing syntax, etc.
They’re mostly copy-and-pasted descriptions from wikipedia, tweaked with added info from Design Patterns. I’m not sure they’d be very useful to other people. I used them to help prepare for an interview, so when I was doing my cards I’d describe them out loud, then check the description, then pop open the book to clarify anything I wasn’t sure on.
edit: And I’d do the reverse, naming the pattern based on the description.
Thanks for taking the time to conduct and then analyze this survey!
What surprised me:
Average IQ seemed insane to me. Thanks for dealing extensively with that objection.
Time online per week seems plausible from personal experience, but I didn’t expect the average to be so high.
The overconfidence data hurts, but as someone pointed out in the comments, it’s hard to ask a question which isn’t misunderstood.
What disappointed me:
Even I was disappointed by the correlations between P(significant man-made global warming) vs. e.g. taxation/feminism/etc. Most other correlations were between values, but this one was between one’s values and an empirical question. Truly Blue/Green. On the topic of politics in general, see below.
People, use spaced repetition! It’s been studied academically and been shown to work brilliantly; it’s really easy to incorporate in your daily life in comparison to most other LW material etc… Well, I’m comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities.
And a comment at the end:
Given that LW explicitly tries to exclude politics from discussion (and for reasons I find compelling), what makes you expect differently?
Incorporating LW debiasing techniques into daily life will necessarily be significantly harder than just reading the Sequences, and even those have only been read by a relatively small proportion of posters...
To me it has always sounded right. I’m MENSA-level (at least according to the test the local MENSA association gave me) and LessWrong is the first forum I ever encountered where I’ve considered myself below-average—where I’ve found not just one or two but several people who can think faster and deeper than me.
Same for me.
Below average or simply not exceptional? I’m certainly not exceptional here but I don’t think I’m particularly below average. I suppose it depends on how you weight the average.
With only 500 people responding to the IQ question, it is entirely possible that this is simply a selection effect. I.e. only people with high IQ test themselves or report their score while lower IQ people keep quiet.
There’s nothing necessarily wrong with this. You are assuming that feminism is purely a matter of personal preference, incorrectly I feel. If you reduce feminism to simply asking “should women have the right to vote” then you should in fact find a correlation between that and “is there such a thing as global warming”, because the correct answer in each case is yes.
Not saying I am necessarily in favour of modern day feminism, but it does bother me that people simply assume that social issues are independent of fact. This sounds like “everyone is entitled to their opinion” nonsense to me.
What I find more surprising is that there is no correlation between IQ and political beliefs whatsoever. I suspect that this is simply because the significance level is too strict to find anything.
With this, on the other hand, I agree completely.
I’ve heard GMOs described as the left equivalent for global warming—maybe there should be a question about GMOs on next survey.
While we’re here, there may be questions about animal testing, alternative medicine, gun control, euthanasia, and marijuana legalization. (I’m not saying that the left is wrong about all of these.)
I object to GMOs, but I object to GMOs not because of fears that they may be unnoticed health hazards, but rather because they are often used to apply DRM and patents to food, and applying DRM and patents to food has the disadvantages of applying DRM and patents to computer software. Except it’s much worse since 1) you can do without World of Warcraft, but you can’t do without food, and 2) traditional methods of producing food involve copying and organisms used for food normally copy themselves.
ISTR reading that farmers have preferred to buy seeds from specialized companies, rather than planting their own from the previous harvest, since decades before the first commercial GMO was introduced.
Yes, but they wouldn’t be sued out of existence if they did keep their own.
Good point.
It seems that should make you object to certain aspects of the Western legal system.
Given your reasoning I don’t understand why you object to GMOs but don’t object on the same grounds to, say, music and videos which gave us DMCA, etc.
I object to DRM and patents on entertainment as well. (You can’t actually patent music and videos, but software is subject to software patents and I do object to those.)
If you’re asking why I don’t object to entertainment as a class, it’s because of practical considerations—there is quite a bit of entertainment without DRM, small scale infringers are much harder to catch for entertainment, much entertainment is not patented, and while entertainment is copyrighted, it does not normally copy itself and copying is not a routine part of how one uses it in the same way that producing and saving seeds is of using seeds. Furthermore, pretty much all GMO organisms are produced by large companies who encourage DRM and patents. There are plenty of producers of entertainment who have no interest in such things, even if they do end up using DVDs with CSS.
What do you think of golden rice?
I don’t object to it except insofar as it’s used as a loss leader for companies’ other GMO products which are subject to DRM and patents.
Is it, though? I did a quick fact check on this, and found this article which seems to say it is more split down the middle (for as much as US politicians are representative, anyway). It also highlights political divides for other topics.
It’s a pity that some people here are so anti-politics (not entirely unjustified, but still). I think polling people here on issues which are traditionally right or left wing but which have clear-cut correct answers to them would make for quite a nice test of rationality.
Are you quite sure about that? Any examples outside of young earth / creationists?
Am I sure that some political questions have clear cut answers? Well, yes… of course. Just because someone points at a factual question and says “that’s political!” doesn’t magically cause that question to fall into a special subcategory of questions that can never be answered. That just seems really obvious to me.
It’s much harder to give examples that everyone here will agree on of course, and which won’t cause another of those stupid block-downvoting sprees, but I can give it a try:
-My school gym teacher once tried to tell me that there is literally no difference between boys and girls except for what’s between their legs. I have heard similar claims from gender studies classes. That counts as obviously false, surely?
-A guy in college tried to convince me that literally any child could be raised to be Mozart. More generally, the whole “blank slate” notion where people claim that genes don’t matter at all. Can we all agree that this is false? Regardless of whether you see yourself as left or right or up or down?
-Women should be allowed to apply for the same jobs as men. Surely even people who think that women are less intelligent than men on average should agree with this? Even though in the past it was a hot-button issue?
-People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone. Is this contentious? It shouldn’t be.
Do you agree that the above list gives some examples of political questions that every rational person should nonetheless agree with?
No, I don’t. To explain why, let me point out that your list of four questions neatly divides into two halves.
Your first two questions are empirically testable questions about what reality is. As such they are answerable by the usual science-y means, and a rational person will have to accept the answers.
Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.
Rationality does not determine values.
The question “should people be allowed to do in their bedroom whatever they want as long as it doesn’t harm [directly] anyone [else]?” (extra words added to address Vaniver’s point) can be split into two: “which states of the world would allowing people to do in their bedroom etc. result in?”, and “which states of the world are good?”
Now, it’s been claimed that most disagreements about policies are about the former and all neurologically healthy people would agree about the latter if they thought about it clearly enough—which would make Sophronius’s claim below kind-of sort-of correct—but I’m no longer sure of that.
First, I don’t think this claim is true. Second, I’m not sure what “neurologically healthy” means. I know a lot of people I would call NOT neurotypical. And, of course, labeling people mentally sick for disagreeing with the society’s prevailing mores was not rare in history.
This is what you are missing. The simple fact that someone disagrees does not mean they are mentally sick or have fundamentally different value systems. It could equally well mean that either they or the “prevailing social mores” are simply mistaken. People have been known to claim that 51 is a prime number, and not because they actually disagree about what makes a number prime, but just because they were confused at the time.
It’s not reasonable to take people’s claims that “by ‘should’ I mean that X maximises utility for everyone” or “by ‘should’ I mean that I want X” at face value, because people don’t have access to or actually use logical definitions of the everyday words they use, they “know it when they see it” instead.
No, I don’t think I’m missing this piece. The claim is very general: ALL “neurologically healthy people”.
People can certainly be mistaken about matters of fact. So what?
Of course not, the great majority of people are not utilitarians and have no interest in maximizing utility for everyone. In normal speech “should” doesn’t mean anything like that.
If “should” has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of “should” employed by Sophronius in the text. It would be more accurate to say that you can be very rational and still disapprove of homosexuality (as disapproval is an attitude, as opposed to a propositional statement).
Maybe. But that’s a personal “should”, specific to a particular individual and not binding on anyone else.
Sophronius asserts that values (and so “should”s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.
What does this mean, “not binding”? What is a personal “should”? Is that the same as a personal “blue”?
A personal “should” is “I should”—as opposed to “everyone should”. If I think I should, say, drink more, that “should” is not binding on anyone else.
But the original context was “we should”. Sophronius obviously intended the sentence to refer to everyone. I don’t see anything relative about his use of words.
Correct, and that’s why I said
I’m struggling to figure out how to communicate the issue here.
If you agree that what Sophronius intended to say was “everyone should” why would you describe it as a personal “should”? (And what does “binding on someone” even mean, anyway?)
Well, perhaps you should just express your point, provided you have one? Going in circles around the word “should” doesn’t seem terribly useful.
Well, to me it’s obvious that “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone.” was a logical proposition, either true or false. And whether it’s true or false has nothing to do with whether anyone else has the same terminal values as Sophronius. But you seem to disagree?
Do you mean it would be true or false for everyone? At all times? In all cultures and situations? In the same way “Sky is blue” is true?
But the sky isn’t blue for everyone at all times in all situations!
Yes. Logical propositions are factually either true or false. It doesn’t matter who is asking. In exactly the same way that “everyone p-should put pebbles into prime heaps” doesn’t care who’s asking, or indeed how “the sky is blue” doesn’t care who’s asking.
Well then, I disagree. Since I just did a whole circle of the mulberry bush with Sophronius I’m not inclined to do another round. Instead I’ll just state my position.
I think that statements which do not describe reality but instead speak of preferences, values, and “should”s are NOT “factually either true or false”. They cannot be unconditionally true or false at all. Instead, they can be true or false conditional on the specified value system, and if you specify a different value system, the true/false value may change. To rephrase it in a slightly different manner, value statements can be consistent or inconsistent with some value system, and they can also be instrumentally rational or not in pursuit of some goals (and whether they are rational or not is conditional on the particular goals).
To get specific, “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone” is true within some value system and false within some other value systems. Both kinds of value systems exist. I see no basis for declaring one kind of value systems “factually right” and another kind “factually wrong”.
As an example, consider the statement “The sum of a triangle’s inner angles is 180 degrees”. Is this true? In some geometries, yes, in others, no. This statement is not true unconditionally; to figure out whether it’s true in some specific case you have to specify a particular geometry. And in some real-life geometries it is true while in other real-life geometries it is false.
Well, I’m not trying to say that some values are factual and others are imaginary. But when someone makes a “should” statement (makes a moral assertion), “should” refers to a particular predicate determined by their actual value system, as your value system determines your language. Thus when people talk of “you should do X” they aren’t speaking of preferences or values, rather they are speaking of whatever it is their value system actually unfolds into.
(The fact that we all use the same word, “should”, to describe what could be many different concepts is, I think, justified by the notion that we mostly share the same values, so we are in fact talking about the same thing, but that’s an empirical issue.)
Hopefully this will help demonstrate my position. I would say that, when being fully rigorous, it is a type error to ask whether a sentence is true. Logical propositions have a truth value, but sentences are just strings of symbols. To turn “The sum of the triangle’s inner angles is 180 degrees” into a logical proposition you need to know what is meant by “sum”, “triangle”, “inner angles”, “180”, “degrees” and indeed “is”.
As an example, if the sentence was uttered by Bob, and what he meant by “triangle” was a triangle in Euclidean space, and by “is” he meant “is always” (universally quantified), then what he said is factually (unconditionally) true. But if he uttered the same sentence in a language where “triangle” means a triangle in a hyperbolic space, or in a general space, then what he said would be unconditionally false. There’s no contradiction here because in each case he said a different thing.
Value systems are themselves part of reality, as people already have values.
In this context I define reality as existing outside of people’s minds. What exists solely within minds is not real.
Yes they are, but the same sentence can state different logical propositions depending on where, when and by whom it is uttered.
They can. But when a person utters a sentence, they generally intend to state the derelativized proposition indicated by the sentence in their language. When I say “P”, I don’t mean “‘P’ is a true sentence in all languages at all places”, I mean P(current context).
Which is why it’s useless to say “I have a different definition of ‘should’”, because the original speaker wasn’t talking about definitions, they were talking about whatever it is “should” actually refers to in their actual language.
(I actually thought of mentioning that the sky isn’t always blue in all situations, but decided not to.)
Well, if you should drink more because you’re dehydrated, then you’re right to say that not everyone is bound by that, but people in similar circumstances are (i.e. dehydrated, with no other reason not to drink). Or are you saying that there are ultimately personal shoulds?
Yes, of course there are.
‘Of course’ nothing, I find that answer totally shocking. Can you think of an example? Or can you explain how such shoulds are supposed to work?
So far as I understand it, for every ‘should’ there is some list of reasons why. If two people have the same lists of reasons, then whatever binds one binds them both. So there’s nothing personal about shoulds, except insofar as we rarely have all the same reasons to do or not do something.
Doesn’t take much to shock you :-)
Sure. Let’s say there is a particular physical place (say, a specific big boulder on the shore of a lake) where I, for some reason, feel unusually calm, serene, and happy. It probably triggers some childhood memories and associations. I like this place. I should spend more time there.
No two people are the same. Besides, the importance different people attach to the same reasons varies greatly.
And, of course, to bind another with your “should” requires you to know this other very very well. To the degree I would argue is unattainable.
So say this place also makes me feel calm, serene, and happy. It also triggers in me some childhood memories and associations. I like the place. I also have (like you) no reasons not to go there. Let’s say (however unlikely it might be) we have all the same reasons, and we weigh these reasons exactly the same. Nevertheless, it’s not the case that I should spend more time there. Have I just told you a coherent story?
So let’s say you’re very thirsty. Around you, there’s plenty of perfectly potable water. And let’s say I know you’re not trying to be thirsty for some reason, but that you’ve just come back from a run. I think I’m in a position to say that you should drink the water. I don’t need to know you very well to be sure of that. What am I getting wrong here?
That’s a rather crucial part. I am asserting not only that two people will not have the same reasons and weigh them exactly the same, but also that you can’t tell whether a person other than you has the same reasons and weighs them exactly the same.
You’re basically saying “let’s make an exact copy of you—would your personal “shoulds” apply to that exact copy?”
The answer is yes, but an exact copy of me does not exist and that’s why my personal shoulds don’t apply to other people.
You can say, of course. But when I answer “no, I don’t think so”, is your “should” stronger than my “no”?
Ahh, okay, it looks like we are just misunderstanding one another. I originally asked you whether there are ultimately personal shoulds, and by this I meant that shoulds that are binding on me but not you for no reason other than you and I are numerically different people.
But it seems to me your answer to this is in fact ‘no’, there are no such ultimately personal shoulds. All shoulds bind everyone subject to the reasons backing them up; it’s just that those reasons rarely (if ever) coincide.
Yes. You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
What’s “numerically different”?
And what did you mean by “ultimately”, then? In reality all people are sufficiently different for my personal shoulds to apply only to me and not necessarily to anyone else. The set of other-than-me people to which my personal shoulds must apply is empty. Is that insufficiently “ultimate”?
I beg to disagree. Given that you have no idea about reasons that I might have for not drinking, I don’t see why your “should” is correct. Speaking of which, how do you define “correct” in this situation, anyway? What makes you think that the end goals you imagine are actually the end goals that I am pursuing?
I just mean something like ‘there are two of them, rather than one’. So they can have all the same (non-relational) properties, but not be the same thing because there are two of them.
Well, that’s an empirical claim, for which we’d need some empirical evidence. It’s certainly possible that my personal ‘should’ could bind you too, since it’s possible (however unlikely) that we could be subject to exactly the same reasons in exactly the same way.
This is an important point, because it means that shoulds bind all and every person subject to the reasons that back them up. It may be true that people are subject to very different sets of reasons, such that in effect ‘shoulds’ only generally apply to one person. I think this empirical claim is false, but that’s a bit beside the point.
It’s part of the hypothetical that I do know the relevant reasons and your aims: you’re thirsty, there’s plenty of water, and you’re not trying to stay thirsty. Those are all the reasons (maybe the reality is never this simple, though I think it often is...again, that’s an empirical question). Knowing those, my ‘you should drink’ is absolutely binding on you.
I don’t need to define ‘correct’. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink. That’s all I mean by correct: that it’s true to say ‘if X, Y, Z, then you should drink’.
You really want evidence that there are no exact copies of me walking around..?
No, I don’t think it is possible. At this point it is fairly clear that we are not exact copies of each other :-D
Nope, I don’t think so. You keep on asserting, basically, that if you find a good set of reasons why I should do X and I cannot refute these reasons, I must do X. That is not true. I can easily tell you to go jump into the lake and not do X.
And another crucial part—no, you can not know all of my relevant reasons and my aims. We are different people and you don’t have magical access to the machinations of my mind.
Yes, you do need to define “correct”. The reasons may or may not be sufficient—you don’t know.
It does seem we have several very basic disagreements.
I deny the premise on which this is necessary: I think most people share the reasons for most of what they do most of the time. For example, when my friend and I come in from a run, we share reasons for drinking water. The ‘should’ that binds me, binds him equally. I think this is by far the most common state of affairs, the great complexity and variety of human psychology notwithstanding. The empirical question is whether our reasons for acting are in general very complicated or not.
I think you do, since I’m sure you think it’s possible that we are (in the relevant ways) identical. Improbable, to be sure. But possible.
I think I would describe it as you, being in similar situations, each formulate a personal “should” that happens to be pretty similar. But it’s his own “should” which binds him, not yours.
But I don’t suppose you would say this about answering a mathematical problem. If I conclude that six times three is eighteen, and you conclude similarly, isn’t it the case that we’ve done ‘the same problem’ and come to ‘the same answer’? Aren’t we each subject to the same reasons, in trying to solve the problem?
Or did each of us solve a personal math problem, and come to a personal answer that happens to be the same number?
In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.
Same thing for testable statements about physical reality—disagreements (between rational people) can be solved by the usual scientific methods.
But preferences and values exist only inside minds and I’m asserting that each mind is unique. My preferences and values can be the same as yours but they don’t have to be. In contrast, the physical reality is the same for everyone.
Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/
I don’t see how that’s any different from most value judgements. All human beings have a basically common set of values, owing to our neurological and biological similarities. Granted, you probably can’t advise me on whether or not to go to grad school, or run for office, but you can advise me to wear my seat belt or drink water after a run. That doesn’t seem so different from math: math is also in our heads, it’s also a space of widespread agreement and some limited disagreement in the hard cases.
It may look like the Israelis and the Palestinians just don’t see eye to eye on practical matters, but remember how big the practical reasoning space is. Them truly not seeing eye to eye would be like the Palestinians demanding the end of settlements, and the Israelis demanding that Venus be bluer.
I don’t see why. There’s no reason to infer from the fact that a ‘should’ binds someone that you can force them to obey it.
Now, as to why it’s a problem if your reasons for acting aren’t sufficient to determine a ‘should’. Suppose you hold that A, and that if A then B. You conclude from this that B. I also hold that A, and that if A then B. But I don’t conclude that B. I say “Your conclusion doesn’t bind me.” B, I say, is ‘true for you’, but not ‘true for me’. I explain that reasoning is personal, and that just because you draw a conclusion doesn’t mean anyone else has to.
If I’m right, however, it doesn’t look like ‘A, if A then B’ is sufficient to conclude B for either of us, since B doesn’t necessarily follow from these two premises. Some further thing is needed. What could this be? It can’t be another premise (like, ‘If you believe that A and that if A then B, conclude that B’) because that just reproduces the problem. I’m not sure what you’d like to suggest here, but I worry that so long as, in general, reasons aren’t sufficient to determine practical conclusions (our ‘shoulds’) then nothing could be. Acting would be basically irrational, in that you could never have a sufficient reason for what you do.
Nope. There is a common core and there is a lot of various non-core stuff. The non-core values can be wildly different.
We’re back to the same point: you can advise me, but if I say “no”, is your advice stronger than my “no”? You think it is, I think not.
The distinction between yourself and others is relevant here. You can easily determine whether a particular set of reasons is sufficient for you to act. However you can only guess whether the same set of reasons is sufficient for another to act. That’s why self-shoulds work perfectly fine, but other-shoulds have only a probability of working. Sometimes this probability is low, sometimes it’s high, but there’s no guarantee.
What do you mean by ‘stronger’? I think we all have free will: it’s impossible, metaphysically, for me to force you to do anything. You always have a choice. But that doesn’t mean I can’t point out your obligations or advantage with more persuasive or rational force than you can deny them. It may be that you’re so complicated an agent that I couldn’t get a grip on what reasons are relevant to you (again, empirical question), but if, hypothetically speaking, I do have as good a grip on your reasons as you do, and if it follows from the reasons to which you are subject that you should do X, and you think you should do ~X, then I’m right and you’re wrong and you should do X.
But I cannot, morally speaking, coerce or threaten you into doing X. I cannot, metaphysically speaking, force you to do X. If that is what you mean by ‘stronger’, then we agree.
My point is, you seem to be picking out a quantitative point: the degree of complexity is so great, that we cannot be subject to a common ‘should’. Maybe! But the evidence seems to me not to support that quantitative claim.
But aside from the quantitative claim, there’s a different, orthogonal, qualitative claim: if we are subject to the same reasons, we are subject to the same ‘should’. Setting aside the question of how complex our values and preferences are, do you agree with this claim? Remember, you might want to deny the antecedent of this conditional, but that doesn’t entail that the conditional is false.
In the same sense we talked about it in the {grand}parent post. You said:
...to continue
We may. But there is no guarantee that we would.
We have to be careful here. I understand “reasons” as, more or less, networks of causes and consequences. “Reasons” tell you what you should do to achieve something. But they don’t tell you what to achieve—that’s the job of values and preferences—and how to weight the different sides in a conflicting situation.
Given this, no, same reasons don’t give rise to the same “should”s because you need the same values and preferences as well.
So we have to figure out what a reason is. I took ‘reasons’ to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative. So, the reasoning behind an action might look something like this:
1) I want an apple.
2) The store sells apples, for a price I’m willing to pay.
3) It’s not too much trouble to get there.
4) I have no other reason not to go get some apples.
C) I should get some apples from the store.
My claim is just that (C) follows and is true of everyone for whom (1)-(4) is true. If (1)-(4) is true of you, but you reject (C), then you’re wrong to do so. Just as anyone would be wrong to accept ‘If P then Q’ and ‘P’ but reject the conclusion ‘Q’.
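The inference pattern being invoked in that last sentence is just modus ponens, which holds regardless of who carries out the reasoning; as a minimal sketch in Lean notation:

```lean
-- Modus ponens: given P, and P → Q, the conclusion Q follows.
-- Nothing in the derivation depends on who is doing the reasoning;
-- any agent accepting both premises is bound to the conclusion.
example (P Q : Prop) (hP : P) (hPQ : P → Q) : Q := hPQ hP
```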
That’s circular reasoning: if you define reasons as “everything necessary and sufficient”, well, of course, if they don’t conclude in an imperative they are not sufficient and so are not proper reasons :-/
In your example (4) is the weak spot. You’re making a remarkably wide and strong claim—one common in logical exercises but impossible to make in reality. There are always reasons pro and con, and it all depends on how you weigh them.
Consider any objection to your conclusion (C) (e.g. “Eh, I feel lazy now”)—any objection falls under (4) and so you can say that it doesn’t apply. And we’re back to the circle...
Not if I have independent reason to think that ‘everything necessary and sufficient to conclude an imperative’ is a reason, which I think I do.
To be absolutely clear: the above is an empirical claim. Something for which we need evidence on the table. I’m indifferent to this claim, and it has no bearing on my point.
My point is just this conditional: IF (1)-(4) are true of any individual, that individual cannot rationally reject (C).
You might object to the antecedent (on the grounds that (4) is not a claim we can make in practice), but that’s different from objecting to the conditional. If you don’t object to the conditional, then I don’t think we have any disagreement, except the empirical one. And on that score, I find your view very implausible, and neither of us is prepared to argue about it. So we can drop the empirical point.
That fails to include weighing of that against other considerations. If you’re thirsty, there’s plenty of water, and you’re not trying to stay thirsty, you “should drink water” only if the other considerations don’t mean that drinking water is a bad idea despite the fact that it would quench your thirst. And in order to know that someone’s other considerations don’t outweigh the benefit of drinking water, you need to know so much about the other person that that situation is pretty much never going to happen with any nontrivial “should”.
By hypothesis, there are no other significant considerations. I think most of the time, people’s rational considerations are about as simple as my hypothetical makes them out to be. Lumifer thinks they’re generally much more complicated. That’s an empirical debate that we probably can’t settle.
But there’s also the question of whether or not ‘shoulds’ can be ultimately personal. Suppose two lotteries. The first is won when your name is drawn out of a hat. Only one name is drawn, and so there’s only one possible winner. That’s a ‘personal’ lottery. Now take an impersonal lottery, where you win if your chosen 20 digit number matches the one drawn by the lottery moderators. Supposing you win, it’s just because your number matched theirs. Anyone whose number matched theirs would win, but it’s very unlikely that there will be more than one winner (or even one).
I’m saying that, leaving the empirical question aside, ‘shoulds’ bind us in the manner of an impersonal lottery. If we have a certain set of reasons, then they bind us, and they equally bind everyone who has that set of reasons (or something equivalent).
Lumifer is saying (I think) that ‘shoulds’ bind us in the manner of the personal lottery. They apply to each of us personally, though it’s possible that by coincidence two different shoulds have the same content and so it might look like one should binds two people.
A consequence of Lumifer’s view, it seems to me, is that a given set of reasons (where reasons are things that can apply equally to many individuals) is never sufficient to determine how we should act. This seems to me to be a very serious problem for the view.
Correct, I would agree to that.
Why so?
We seem to disagree on a fundamental level. I reject your notion of a strict fact-value distinction: I posit to you that all statements are either reducible to factual matters or else they are meaningless as a matter of logical necessity. Rationality indeed does not determine values, in the same way that rationality does not determine cheese, but questions about morality and cheese should both be answered in a rational and factual manner all the same.
If someone tells me that they grew up in a culture where they were taught that eating cheese is a sin, then I’m sorry to be so blunt about it (ok, not really) but their culture is stupid and wrong.
Interesting. That’s a rather basic and low-level disagreement.
So, let’s take a look at Alice and Bob. Alice says “I like the color green! We should paint all the buildings in town green!”. Bob says “I like the color blue! We should paint all the buildings in town blue!”. Are these statements meaningless? Or are they reducible to factual matters?
By the way, your position was quite popular historically. The Roman Catholic church was (and still is) a big proponent.
I cannot speak for Sophronius of course, but here is one possible answer. It may be that morality is “objective” in the sense that Eliezer tried to defend in the metaethics sequence. Roughly, when someone says X is good they mean that X is part of a loosely defined set of things that make humans flourish, and by virtue of the psychological unity of mankind we can be reasonably confident that this is a more-or-less well-defined set and that if humans were perfectly informed and rational they would end up agreeing about which things are in it, as the CEV proposal assumes.
Then we can confidently say that both Alice and Bob in your example are objectively mistaken (it is completely implausible that CEV is achieved by painting all buildings the color that Alice or Bob happens to like subjectively the most, as opposed to leaving the decision to the free market, or perhaps careful science-based urban planning done by a FAI). We can also confidently say that some real-world expressions of values (e.g. “Heretics should be burned at the stake”, which was popular a few hundred years ago) are false. Others are more debatable. In particular, the last two examples in Sophronius’ list are cases where I am reasonably confident that his answers are the correct ones, but not as close to 100%-epsilon probability as I am on the examples I gave above.
Well, I can’t speak for other people but when I say “X is good” I mean nothing of that sort. I am pretty sure the majority of people on this planet don’t think of “good” this way either.
Nope, you can say. If your “we” includes me then no, “we” can’t say that.
By “Then we can confidently say” I just meant “Assuming we accept the above analysis of morality, then we can confidently say…”. I am not sure I accept it myself; I proposed it as a way one could believe that normative questions have objective answers without straying as far form the general LW worldview as being a Roman Catholic.
By the way, the metaethical analysis I outlined does not require that people think consciously of something like CEV whenever they use the word “good”. It is a proposed explication in the Carnapian sense of the folk concept of “good” in the same way that, say, VNM utility theory is an explication of “rational”.
These statements are not meaningless. They are reducible to factual matters. “I like the colour blue” is a factual statement about Bob’s preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Bob’s brain). Presumably Bob is correct in his assertion, but if I know Bob well enough I might point out that he absolutely detests everything that is the colour blue even though he honestly believes he likes the colour blue. The statement would be false in that case.
Furthermore, the statement “We should paint all the buildings in town blue!” follows logically from his previous statement about his preferences regarding blueness. Certainly, the more people are found to prefer blueness over greenness, the more evidence this provides in favour of the claim “We should paint all the buildings in town blue!” which is itself reducible to “A large number of people including myself prefer for the buildings in this town to be blue, and I therefore favour painting them in this colour!”
Contrast the above with the statement “I like blue, therefore we should all have cheese”, which is also a should claim but which can be rejected as illogical. This should make it clear that should statements are not all equally valid, and that they are subject to logical rigour just like any other claim.
Let’s introduce Charlie.
“I think women should be barefoot and pregnant” is a factual statement about Charlie’s preferences which are themselves reducible to the physical locations of atoms in the universe (specifically Charlie’s brain).
Furthermore, the statement “We should make sure women remain barefoot and pregnant” follows logically from Charlie’s previous statement about his preferences regarding women.
I would expect you to say that Charlie is factually wrong. In which way is he factually wrong and Bob isn’t?
The statement “We should paint all the buildings in town blue!” is not a claim in need of evidence. It is a command, an expression of what Bob thinks should happen. It has nothing to do with how many people think the same.
Assuming “should” is meant in a moral sense, we can say that “We should paint all the buildings in town blue!” is in fact a claim in need of evidence. Specifically, it says (to 2 decimal places) that we would all be better off / happier / flourish more if the buildings are painted blue. This is certainly true if it turns out the majority of the town really likes blue, so that they would be happier, but it does not entirely follow from Bob’s claim that he likes blue—if the rest of the town really hated blue, then it would be reasonable to say that their discomfort outweighed his happiness. In this case he would be factually incorrect to say “We should paint all the buildings in town blue!”.
In contrast, you can treat “We should make sure women remain barefoot and pregnant” as a claim in need of evidence, and in this case we can establish it as false. Most obviously because the proposed situation would not be very good for women, and we shouldn’t do something that harms half the human race unnecessarily.
Not at all, and I don’t see why you would assume a specific morality.
Bob says “We should paint all the buildings in town blue!” to mean that it would make him happier and he doesn’t care at all about what other people around think about the idea.
Bob is not a utilitarian :-)
Exactly the same thing—Charlie is not a utilitarian either. He thinks he will be better off in the world where women are barefoot and pregnant.
But he says “We should” not “I want” because there is the implication that I should also paint the buildings blue. But if the only reason I should do so is because he wants me to, it raises the question of why I should do what he wants. And if he answers “You should do what I want because it’s what I want”, it’s a tautology.
Imagine Vladimir Putin visiting a Russian village and declaring “We should paint all the buildings blue!”
Suddenly “You should do what I want because it’s what I want” is not a tautology any more but an excellent reason to get out your paint brush :-/
Putin has a way of adding his wants to my wants, through fear, bribes, or other incentives. But then the direct cause of my actions would be the fear/bribe/etc, not the simple fact that he wants it.
And what difference does that make?
Presumably, Bob doesn’t have a way of making me care about what he wants (beyond the extent to which I care about what a generic stranger wants). If he were to pay me, that would be different, but he can’t make me care simply because that’s his preference. When he says “We should paint the buildings blue” he’s saying “I want the buildings painted blue” and “You want the buildings painted blue”, but if I don’t want the buildings painted blue, he’s wrong.
Why not? Many interactions in human society are precisely ways of making others care about what someone wants.
In any case, the original issue was whether Bob’s preference for blue could be described as “correct” or “wrong”. How exactly Bob manages to get what he wants is neither here nor there.
No, he is not saying that.
The original statement was “I like the color blue! We should paint all the buildings in town blue!” His preference for blue can neither be right nor wrong, but the second sentence is something that can be “correct” or “wrong”.
Without specifying a particular value system, no, it can not.
Full circle back to the original.
There already is an existing value system—what Bob and I already value.
I think we’re pretty close to someone declaring that egoism isn’t a valid moral position, again.
I wonder if that someone will make the logical step to insisting that moral egoists should be reeducated to make them change to a “valid” moral position :-/
That’s just looking at one of the direct consequences, accepting for the sake of argument that most women would prefer not to be “barefoot and pregnant”. The problem is that, for these kinds of major social changes, the direct effects tend to be dominated by indirect effects and your argument makes no attempt to analyze the indirect effects.
Technically you are correct, so you can read my above argument as figuratively “accurate to one decimal place”. The important thing is that there’s nothing mysterious going on here in a linguistic or metaethical sense.
But in a practical sense these things can’t be computed from first principles, so it is necessary to rely on tradition at least to some extent.
I partly agree, but a tradition that developed under certain conditions isn’t necessarily optimal under different conditions (e.g. much better technology and medicine, less need for manual labour, fewer stupid people (at least for now), etc.).
Otherwise, we’d be even better off just executing our evolved adaptations, which had even more time to develop.
Revealed preferences of women buying shoes and contraception?
Depends on the context :-D In China a few centuries ago a woman quite reasonably might prefer to be barefoot (as opposed to having her feet tightly bound to disfigure them) and pregnant (as opposed to being barren, which made her socially worthless).
Charlie is, presumably, factually correct in thinking that he holds that view. However, while preferences regarding colour are well established, I am sceptical regarding the claim that this is an actual terminal preference that Charlie holds. It is possible that he finds pregnant barefoot women attractive, in which case his statement gives valid information regarding his preferences which might be taken into account by others: in this case it is meaningful. Alternatively, if he were raised to think that this is a belief one ought to hold, then the statement is merely signalling politics and is therefore of an entirely different nature.
“I like blue and want the town to be painted blue” gives factual info regarding the universe. “Women ought to be pregnant because my church says so!” does not have the primary goal of providing info, it has the goal of pushing politics.
Imagine a person holding a gun to your head and saying “You should give me your money”. Regardless of his use of the word “should”, he is making an implicit logical argument:
1) Giving me your money reduces your chances of getting shot by me
2) You presumably do not want to get shot
3) Therefore, you should give me your money
If you respond to the man by saying that morality is relative, you are rather missing the point.
I think you are missing the subtle hidden meanings of everyday discourse. Imagine Bob saying that the town should be painted blue. Then someone else comes along with arguments for why the town should not be painted blue. Bob eventually agrees. “You are right”, he says, “that was a dumb suggestion”. The fact that exchanges like this happen all the time shows that Bob’s statement is not just a meaningless expression, but rather a proposal relying on implicit arguments and claims. Specifically, it relies on enough people in the village sharing his preference for blue houses that the notion will be taken seriously. If Bob did not think this to be the case, he probably would not have said what he did.
Okay, yeah, so belief in belief is a thing. We can profess opinions that we’ve been taught are virtuous to hold without deeply integrating them into our worldview; and that’s probably increasingly common these days as traditional belief systems clank their way into some sort of partial conformity with mainstream secular ethics. But at the same time, we should not automatically assume that anyone professing traditional values—or for that matter unusual nontraditional ones—is doing so out of self-interest or a failure to integrate their ethics.
Setting aside the issues with “terminal value” in a human context, it may well be that post-Enlightenment secular ethics are closer in some absolute sense to a human optimal, and that a single optimal exists. I’m even willing to say that there’s evidence for that in the form of changing rates of violent crime, etc., although I’m sure the reactionaries in the audience will be quick to remind me of the technological and demographic factors with their fingers on the scale. But I don’t think we can claim to have strong evidence for this, in view of the variety of ethical systems that have come before us and the generally poor empirical grounding of ethical philosophy.
Until we do have that sort of evidence, I view the normative component of our ethics as fallible, and certainly not a good litmus test for general rationality.
On the contrary, I think it’s quite reasonable to assume that somebody who bases their morality on religious background has not integrated these preferences and is simply confused. My objection here is mainly in case somebody brings up a more extreme example. In these ethical debates, somebody always (me this time, I guess) brings up the example of Islamic sub-groups who throw acid in the faces of their daughters. Somebody always ends up claiming that “well that’s their culture, you know, you can’t criticize that. Who are you to say that they are wrong to do so?”. In that case, my reply would be that those people do not actually have a preference for disfigured daughters, they merely hold the belief that this is right as a result of their religion. This can be seen from the fact that the only people who do this hold more or less the same set of religious beliefs. And given that the only ones who hold that ‘preference’ do so as a result of a belief which is factually false, I think it’s again reasonable to say: No, I do not respect their beliefs and their culture is wrong and stupid.
The point is not so much whether there is one optimum, but rather that some cultures are better than others and that progress is in fact possible. If you agree with that, we have already closed most of the inferential distance between us. :)
Even if people don’t have fully integrated beliefs in destructive policies, their beliefs can be integrated enough to lead to destructive behavior.
The Muslims who throw acid in their daughters’ faces may not have an absolute preference for disfigured daughters, but they may prefer disfigured daughters over being attacked by their neighbors for permitting their daughters more freedom than is locally acceptable—or prefer to not be attacked by the imagined opinions (of other Muslims and/or of Allah) which they’re carrying in their minds.
Also, even though it may not be a terminal value, I’d say there are plenty of people who take pleasure in hurting people, and more who take pleasure in seeing other people hurt.
Agreed on each count.
There’s some subtlety here. I believe that ethical propositions are ultimately reducible to physical facts (involving idealized preference satisfaction, although I don’t think it’d be productive to dive into the metaethical rabbit hole here), and that cultures’ moral systems can in principle be evaluated in those terms. So no, culture isn’t a get-out-of-jail-free card. But that works both ways, and I think it’s very likely that many of the products of modern secular ethics are as firmly tied to the culture they come from as would be, say, an injunction to stone people who wear robes woven from two fibers. We don’t magically divorce ourselves from cultural influence when we stop paying attention to the alleged pronouncements of the big beardy dude in the sky. For these reasons I try to be cautious about—though I wouldn’t go so far as to say “skeptical of”—claims of ethical progress in any particular domain.
The other fork of this is stability of preference across individuals. I know I’ve been beating this drum pretty hard, but preference is complicated; among other things, preferences are nodes in a deeply nested system that includes a number of cultural feedback loops. We don’t have any general way of looking at a preference and saying whether or not it’s “true”. We do have some good heuristics—if a particular preference appears only in adherents of a certain religion, and their justification for it is “the Triple Goddess revealed it to us”, it’s probably fairly shallow—but they’re nowhere near good enough to evaluate every ethical proposition, especially if it’s close to something generally thought of as a cultural universal.
The Wikipedia page on acid throwing describes it as endemic to a number of African and Central and South Asian countries, along with a few outside those regions, with religious cultures ranging from Islam through Hinduism and Buddhism. You may be referring to some subset of acid attacks (the word “daughter” doesn’t appear in the article), but if there is one, I can’t see it from here.
Fair enough. I largely agree with your analysis: I agree that preferences are complicated, and I would even go as far as to say that they change a little every time we think about them. That does make things tricky for those who want to build a utopia for all mankind! However, in everyday life I think objections on such an abstract level aren’t so important. The important thing is that we can agree on the object level, e.g. sex is not actually sinful, regardless of how many people believe it is. Saying that sex is sinful is perhaps not factually wrong, but rather it betrays a kind of fundamental confusion regarding the way reality works that puts it in the ‘not even wrong’ category. The fact that it’s so hard for people to be logical about their moral beliefs is actually precisely why I think it’s a good litmus test of rationality/clear thinking: if it were easy to get it right, it wouldn’t be much of a test.
Looking at that page I am still getting the impression that it’s primarily Islamic cultures that do this, but I’ll agree that calling it exclusively Islamic was wrong. Thanks for the correction :)
Given that you know absolutely nothing about Charlie, a player in a hypothetical scenario, I find your scepticism entirely unwarranted. Fighting the hypothetical won’t get you very far.
So, is Charlie factually wrong? On the basis of what would you determine that Charlie’s belief is wrong and Bob’s isn’t?
Why would I respond like that? What does the claim that morality is relative have to do with threats of bodily harm?
In this context I don’t care about the subtle hidden meanings. People who believe they know the Truth and have access to the Sole Factually Correct Set of Values tend to just kill others who disagree. Or at the very least marginalize them and make them third-class citizens. All in the name of the Glorious Future, of course.
Well, given that Charlie indeed genuinely holds that preference, then no he is not wrong to hold that preference. I don’t even know what it would mean for a preference to be wrong. Rather, his preferences might conflict with preferences of others, who might object to this state of reality by calling it “wrong”, which seems like the mind-projection fallacy to me. There is nothing mysterious about this.
Similarly, the person in the original example of mine is not wrong to think men kissing each other is icky, but he IS wrong to conclude that there is therefore some universal moral rule that men kissing each other is bad. Again, just because rationality does not determine preferences, does not mean that logic and reason do not apply to morality!
I believe you have pegged me quite wrongly, sir! I only care about truth, not Truth. And yes, I do have access to some truths, as of course do you. Saying that logic and reason apply to morality and that therefore all moral claims are not equally valid (they can be factually wrong or entirely nonsensical) is quite a far cry from heralding in the Third Reich. The article on Less Wrong regarding the proper use of doubt seems pertinent here.
I am confused. Did I misunderstand you or did you change your mind?
Earlier you said that “should” kind of questions have single correct answers (which means that other answers are wrong). A “preference” is more or less the same thing as a “value” in this context, and you staked out a strong position:
Since statements of facts can be correct or wrong and you said there is no “fact-value distinction”, then values (and preferences) can be correct or wrong as well. However in the parent post you say
If you have a coherent position in all this, I don’t see it.
I think you misunderstood me. Of course I don’t mean that the terms “facts” and “values” represent the same thing. Saying that a preference itself is wrong is nonsense in the same way that claiming that a piece of cheese is wrong is nonsensical. It’s a category error. When I say I reject a strict fact-value dichotomy, I mean that I reject the notion that statements regarding values should somehow be treated differently from statements regarding facts, in the same way that I reject the notion of faith inhabiting a separate magisterium from science (i.e. special pleading). So my position is that when someone makes a moral claim such as “don’t murder”, they had better be able to reduce that to factual statements about reality or else they are talking nonsense.
For example, “sex is sinful!” usually reduces to “I think my god doesn’t like sex”, which is nonsense because there is no such thing. On the other hand, if someone says “Stealing is bad!”, that can be reduced to the claim that allowing theft is harmful to society (in a number of observable ways), which I would agree with. As such I am perfectly comfortable labelling some moral claims as valid and some as nonsense.
I don’t see how this sentence
is compatible with this sentence
I am distinguishing between X and statements regarding X. The statement “Cheese is wrong” is nonsensical. The statement “it’s nonsensical to say cheese is wrong” is not nonsensical. Values and facts are not the same, but statements regarding values and facts should be treated the same way.
Similarly: Faith and Science are not the same thing. Nonetheless, I reject the notion that claims based on faith should be treated any differently from scientific claims.
Do you also reject the notion that claims about mathematics and science should be treated differently?
In the general sense that all claims must abide by the usual requirements of validity and soundness of logic, sure.
In fact, you might say that mathematics is really just a very pure form of logic, while science deals with more murky, more complicated matters. But the essential principle is the same: You better make sure that the output follows logically from the input, or else you’re not doing it right.
My point is that what constitutes “validity” and “soundness of logic” differs between the two domains.
I think it’s more likely he was misusing the word “literally”/wearing belief as attire (in technical terms, bullshitting) than that he actually believed that. After all, I assume he could tell boys and girls apart without looking between their legs, couldn’t he?
But you can always find harm if you allow for feelings of disgust, or take into account competition in sexual markets (i.e. if having sex with X is a substitute for having sex with Y then Y might be harmed if someone is allowed to have sex with X.)
Ok, that’s a fair enough point. Sure, feelings do matter. However, I generally distinguish between genuine terminal preferences and mere surface emotions. The reason for this is that often it is easier/better to change your feelings than for other people to change their behaviour. For example, if I strongly dislike the name James Miller, you probably won’t change your name to take my feelings into account.
(At the risk of saying something political: This is the same reason I don’t like political correctness very much. I feel that it allows people to frame political discourse purely by being offended.)
The standard reply to this is that many people hurt themselves by their choices, and that justifies intervention. (Even if we hastily add an “else” after “anyone,” note that hurting yourself hurts anyone who cares about you, and thus the set of acts which harm no one is potentially empty.)
It’s wrong on a biological level. From my physiology lecture: women blink twice as much as men. They have less water in their bodies.
So you are claiming either “children are not people” or “pedophilia should be legal”. I don’t think either of those claims has societal approval, let alone is a clear-cut issue.
But even if you switch the statement to the standard “Consenting adults should be allowed to do in their bedroom whatever they want, as long as it doesn’t harm anyone”, the phrases “consenting” (can someone with more than 1.0 per mille blood alcohol consent?) and “harm” (emotional harm exists, and having unprotected sex without getting tested for STDs has the potential to harm) are open to debate.
The maximal effect of a strong cognitive intervention might very well bring the average person to Mozart levels. We know relatively little about strong interventions to improve human mental performance.
But genes do matter.
It depends on the role. If a movie producer is casting actors for a specific part, gender usually matters a great deal.
A bit more controversial, but I think there are cases where it’s useful for men to come together in an environment where they don’t have to signal things to women.
I’d expect them to assert that paedophilia does harm. That’s the obvious resolution.
Courts are not supposed to investigate whether the child is emotionally harmed by the experience, but whether he or she is under a certain age threshold. You could certainly imagine a legal system in which psychologists are always asked whether a given child was harmed by having sex, instead of a legal system that makes the decision through an age criterion.
I think a more reasonable argument for the age boundary isn’t that every child gets harmed, but that most get harmed, and that having a law forbidding that behavior prevents a lot of children from getting harmed.
I don’t think you are a bad person for arguing that we should have a system that focuses on the amount of harm done instead of on an arbitrary age boundary, but that’s not the system we have, nor the one backed by societal consensus.
We also don’t put anybody in prison for having sex with a 19-year-old, breaking her heart, and watching as she commits suicide. We would judge a case like that as a tragedy, but we wouldn’t legally charge the person responsible with anything.
The concept of consent is pretty important for our present system. Even in cases where no harm is done we take a breach of consent seriously.
Actually I’m under the impression that the ‘standard’ resolution is not about the “harm” part but about the “want” part: it’s assumed that people below a certain age can’t want sex, to the point that said age is called the age of consent, and sex with people younger than that is called by a term which suggests it’s considered a subset of sex with people who don’t want it.
(I’m neither endorsing nor mocking this, just describing it.)
I think your impression is mistaken.
Nope. It is assumed that people below a certain age cannot give informed consent. In other words, they are assumed to be not capable of good decisions and to be not responsible for the consequences. What they want is irrelevant. If you’re below the appropriate age of consent, you cannot sign a valid contract, for example.
Below the age of consent you basically lack the legal capacity to agree to something.
I assumed “want” to mean ‘consent’ in that sentence.
That’s not what these words mean, not even close.
Well, I suppose Sophronius could argue that pedophilia should be legal; after all, many things (especially related to sex) that were once socially unacceptable are now considered normal.
Even if he thinks that it should be legal, it’s not a position on which everyone is likely to agree. Sophronius wanted to find examples on which everyone can agree.
No, he was listing political, i.e., controversial, questions with clear cut answers. I don’t know what Sophronius considers clear cut.
Really? Given his history, I think the answer is pretty clear: he’s not the kind of person who’s out to argue that legalizing pedophilia is a clear-cut issue.
He also said something about wanting to avoid the kind of controversy that causes downvoting.
In all of these cases, the people breaking with the conclusion you presumably believe to be obvious often do so because they believe the existing research to be hopelessly corrupt. This is of course a rather extraordinary statement, and I’m pretty sure they’d be wrong about it (that is, as sure as I can be with a casual knowledge of each field and a decent grasp of statistics), but bad science isn’t exactly unheard of. Given the right set of priors, I can see a rational person holding each of these opinions at least for a time.
In the latter two, they might additionally have different standards for “should” than you’re used to.
I’m not sure what you are trying to convince me of here. That people who disagree have reasons for disagreeing? Well of course they do, it’s not like they disagree out of spite. The fact that they are right in their minds does not mean that they are in fact right.
And yes, they might have a different definition for should. Doesn’t matter. If you talk to someone who believes that men kissing each other is “just plain wrong”, you’ll inevitably find that they are confused, illogical and inconsistent about their beliefs and are irrational in general. Do you think that just because a statement involves the word “should”, you can’t say that they are wrong?
The question I was trying to answer wasn’t whether they were right, it was whether a rational actor could hold those opinions. That has a lot less to do with factual accuracy and a lot more to do with internal consistency.
As to the correctness of normative claims—well, that’s a fairly subtle question. Deontological claims are often entangled with factual ones (e.g. the existence-of-God thing), so that’s at least one point of grounding, but even from a consequential perspective you need an optimization objective. Rational actors may disagree on exactly what that objective is, and reasonable-sounding objectives often lead to seriously counterintuitive prescriptions in some cases.
Oh, right, I see what you mean. Sure, people can disagree with each other without either being irrational: All that takes is for them to have different information. For example, one can rationally believe the earth is flat, depending on which time and place one grew up in.
That does not change the fact that these questions have a correct answer though, and it should be pretty clear which the correct answers are in the above examples, even though you can never be 100% certain of course. The point remains that just because a question is political does not mean that all answers are equally valid. False equivalence and all that.
Including as basso singers? ;-)
(As you worded your sentence, I would agree with it, but I would also add “But employers should be allowed to not hire them.”)
I would have gone for “slavery is bad”
There is a question about it. It’s the existential threat most feared among LessWrongers. Bioengineered pandemics are a threat arising from genetically modified organisms.
If that’s not what you want to know, how would you word your question?
I took “bioengineered” to imply ‘deliberately’ and “pandemic” to imply ‘contagious’, and in any event fear of > 90% of humans dying by 2100 is far from the only possible reason to oppose GMOs.
I didn’t claim that it’s the only reason. That’s why I asked for a more precise question.
If the tools you need to genetically manipulate organisms are widely available, it’s much easier to deliberately produce a pandemic.
It’s possible to make bacteria immune to antibiotics just by exposing them to antibiotics, without manipulating their genes directly. On the other hand, I think people fear bioengineered pandemics because they expect much stronger capabilities for manipulating organisms in the future.
My issue with GMOs is basically the same one Taleb describes in this quote.
“Time online per week seems plausible from personal experience, but I didn’t expect the average to be so high.”
I personally spend an average of 50 hours a week online.
That’s because, by profession, I am a web developer.
The percentage of LessWrong members in IT is clearly higher than that of the average population.
I postulate that the higher number of other IT geeks (who, like me, are also likely spending high numbers of hours online per week) is pushing up the average to a level that seems, to you, to be surprisingly high.
“The overconfidence data hurts, but as someone pointed out in the comments, it’s hard to ask a question which isn’t misunderstood.”
I interpreted this poor level of calibration more to the fact that it’s easier to read about what you should be doing than to actually go and practice the skill and get better at it.
I’m one of the people who have never used spaced repetition, though I’ve heard of it. I don’t doubt it works, but what do you actually need to remember nowadays? I’d probably use it if I was learning a new language (which I don’t really plan to do anytime soon)… What other skills work nicely with spaced repetition?
I just don’t feel the need to remember things when I have google / wikipedia on my phone.
Isn’t there anything you already know but wouldn’t like to forget? SRS is for preserving your precious stored memories, not necessarily for learning new stuff. There are probably a lot of things that wouldn’t even cross your mind to google once they’ve been erased by time. Googling can also waste time compared to storing memories if you have to do it often enough (roughly 5 minutes over your lifetime per fact).
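The “roughly 5 minutes per fact” figure can be sanity-checked with a back-of-envelope calculation: if each review takes a few seconds and the interval between reviews grows exponentially after each success, the total review time per card over decades stays in the low single-digit minutes. A rough sketch, where the doubling factor, seconds per review, and 20-year horizon are all illustrative assumptions rather than anything from the survey:

```python
# Back-of-envelope: total lifetime review time for one flashcard,
# assuming intervals that roughly double after each successful review.
# All numbers here are illustrative assumptions.

def lifetime_review_seconds(first_interval_days=1.0,
                            growth=2.0,
                            seconds_per_review=8.0,
                            horizon_days=365 * 20):
    """Sum review time for one card over the given horizon."""
    total_seconds = 0.0
    day = first_interval_days
    interval = first_interval_days
    while day <= horizon_days:
        total_seconds += seconds_per_review
        interval *= growth      # exponential backoff between reviews
        day += interval
    return total_seconds

minutes = lifetime_review_seconds() / 60
print(f"~{minutes:.1f} minutes of review per fact over 20 years")
```

On these assumptions the per-card cost comes out around a couple of minutes, the same order of magnitude as the five-minute figure quoted above, which is what makes memorizing cheap relative to repeated lookups.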
In my experience anything you can write into brief flashcards. Some simple facts can work as handles for broader concepts once you’ve learned them. You could even record triggers for episodic memories that are important to you.
Yeah, that’s pretty much the problem. Not really. I.e. there is stuff I know that it would be inconvenient to forget, because I use this knowledge every day. But since I already use it every day, SR seems unnecessary.
Things I don’t use every day are not essential—the cost of looking them up is minuscule since it happens rarely.
I suppose a plausible use case would be birth dates of family members, if I didn’t have google calendar to remind me when needed.
Edit: another use case that comes to mind would be names. I’m pretty bad with names (though I’ve recently begun to suspect that I’m probably as bad at remembering names as anyone else; I just fail to pay attention when people introduce themselves). But asking to take someone’s picture ‘so that I can put it on a flashcard’ seems awkward. Facebook to the rescue, I guess?
(though I don’t really meet that many people, so again—possibly not worth the effort in maintaining such a system)
I don’t know what you work on, but many fields include bodies of loosely connected facts that you could in principle look up, but which you’d handle much more efficiently if you just memorized them. In programming this might mean the functions in a particular library that you’re working with (the C++ STL, for example). In chemistry, it might be organic reactions. The signs of medical conditions might be another example, or identities in a particular branch of mathematics.
SRS would be well suited to maintaining any of these bodies of knowledge.
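For readers who have never used an SRS, the mechanics are simple: schedulers in the SM-2 family (which Anki’s scheduler descends from) lengthen a card’s review interval after each successful recall and reset it on failure, so well-known cards appear rarely and shaky ones often. A simplified sketch of such a scheduler, not Anki’s or SuperMemo’s exact algorithm:

```python
# Simplified spaced-repetition scheduler, loosely modeled on SM-2.
# Grades: 0-5 self-rating of recall quality; >= 3 counts as success.
# Illustrative sketch only, not any real SRS's exact code.

from dataclasses import dataclass

@dataclass
class Card:
    ease: float = 2.5      # multiplier applied to the interval
    interval: int = 0      # days until the next review
    repetitions: int = 0   # consecutive successful reviews

def review(card: Card, grade: int) -> Card:
    """Update a card's schedule after a review graded 0-5."""
    if grade >= 3:  # successful recall: lengthen the interval
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        card.repetitions += 1
    else:           # failed recall: start the card over
        card.repetitions = 0
        card.interval = 1
    # Nudge the ease factor based on how easy the recall felt,
    # never letting it drop below 1.3 (the SM-2 floor).
    card.ease = max(1.3, card.ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return card

card = Card()
for grade in (5, 5, 4):
    review(card, grade)
print(card.interval)  # prints 16
```

After three successful reviews the card is already two weeks out, which is why a mature deck costs only a few minutes a day even with thousands of cards.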
I’m a software dev.
Right. I guess I somewhat do ‘spaced repetition’ here, just by the fact that every time I interact with a particular library I’m reminded of its function. But that is incidental—I don’t really care about remembering libraries that I don’t use, and those that I use regularly I don’t need SR to maintain.
I suppose medical conditions looks more plausible as a use case—you really need to remember a large set of facts, any of which is actually used very rarely. But that still doesn’t seem useful to me personally—I can think of no dataset that’d be worth the effort.
I guess I should just assume I’m an outlier there, and simply keep SR in mind in case I ever find myself needing it.
I’ve used SRS to learn programming theory that I otherwise had trouble keeping straight in my head. I’ve made cards for design patterns, levels of database normalization, fiddly elements of C++ referencing syntax, etc.
Do you have your design pattern cards formatted in a way that are likely to be useful for other people?
They’re mostly copy-and-pasted descriptions from wikipedia, tweaked with added info from Design Patterns. I’m not sure they’d be very useful to other people. I used them to help prepare for an interview, so when I was doing my cards I’d describe them out loud, then check the description, then pop open the book to clarify anything I wasn’t sure on.
edit: And I’d do the reverse, naming the pattern based on the description.