Your last two questions are value-based questions about what should be. They are not answerable by science and the answers are culturally determined. It is perfectly possible to be very rational and at the same time believe that, say, homosexuality is a great evil.
If “should” has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of “should” employed by Sophronius in the text. It would be more accurate to say that you can be very rational and still disapprove of homosexuality (as disapproval is an attitude, as opposed to a propositional statement).
If “should” has a meaning, then those two questions can be correctly and incorrectly answered with respect to the particular sense of “should” employed by Sophronius
Maybe. But that’s a personal “should”, specific to a particular individual and not binding on anyone else.
Sophronius asserts that values (and so “should”s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.
What does this mean, “not binding”? What is a personal “should”? Is that the same as a personal “blue”?
A personal “should” is “I should”—as opposed to “everyone should”. If I think I should, say, drink more, that “should” is not binding on anyone else.
But the original context was “we should”. Sophronius obviously intended the sentence to refer to everyone. I don’t see anything relative about his use of words.
Sophronius obviously intended the sentence to refer to everyone.
Correct, and that’s why I said
Sophronius asserts that values (and so “should”s) can be right or wrong without specifying a referent, just unconditionally right or wrong the way physics laws work.
I’m struggling to figure out how to communicate the issue here.
If you agree that what Sophronius intended to say was “everyone should”, why would you describe it as a personal “should”? (And what does “binding on someone” even mean, anyway?)
Well, perhaps you should just express your point, provided you have one? Going in circles around the word “should” doesn’t seem terribly useful.
Well, to me it’s obvious that “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone.” was a logical proposition, either true or false. And whether it’s true or false has nothing to do with whether anyone else has the same terminal values as Sophronius. But you seem to disagree?
Well, to me it’s obvious that “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone.” was a logical proposition, either true or false.
Do you mean it would be true or false for everyone? At all times? In all cultures and situations? In the same way “The sky is blue” is true?
But the sky isn’t blue for everyone at all times in all situations!
Yes. Logical propositions are factually either true or false. It doesn’t matter who is asking. In exactly the same way that “everyone p-should put pebbles into prime heaps” doesn’t care who’s asking, or indeed how “the sky is blue” doesn’t care who’s asking.
Well then, I disagree. Since I just did a whole circle of the mulberry bush with Sophronius I’m not inclined to do another round. Instead I’ll just state my position.
I think that statements which do not describe reality but instead speak of preferences, values, and “should”s are NOT “factually either true or false”. They cannot be unconditionally true or false at all. Instead, they can be true or false conditional on the specified value system, and if you specify a different value system, the true/false value may change. To rephrase it in a slightly different manner, value statements can be consistent or inconsistent with some value system, and they also can be instrumentally rational or not in pursuit of some goals (and whether they are rational or not is conditional on the particular goals).
To get specific, “People should be allowed to do in their bedroom whatever they want as long as it doesn’t harm anyone” is true within some value systems and false within others. Both kinds of value systems exist. I see no basis for declaring one kind of value system “factually right” and the other kind “factually wrong”.
As an example, consider the statement “The sum of the triangle’s inner angles is 180 degrees”. Is this true? In some geometries, yes, in others, no. This statement is not true unconditionally; to figure out whether it’s true in some specific case you have to specify a particular geometry. And in some real-life geometries it is true and in other real-life geometries it is false.
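(For concreteness, the standard angle-sum results for a geodesic triangle with angles α, β, γ and area A in three constant-curvature geometries:)

```latex
% Angle sum of a geodesic triangle with angles alpha, beta, gamma and area A
\begin{align*}
\text{Euclidean plane:} \quad & \alpha + \beta + \gamma = \pi \\
\text{Sphere of radius } R\text{:} \quad & \alpha + \beta + \gamma = \pi + \frac{A}{R^{2}} \\
\text{Hyperbolic plane of curvature } -1\text{:} \quad & \alpha + \beta + \gamma = \pi - A
\end{align*}
```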
Well, I’m not trying to say that some values are factual and others are imaginary. But when someone makes a “should” statement (makes a moral assertion), “should” refers to a particular predicate determined by their actual value system, as your value system determines your language. Thus when people talk of “you should do X” they aren’t speaking of preferences or values, rather they are speaking of whatever it is their value system actually unfolds into.
(The fact that we all use the same word, “should”, to describe what could be many different concepts is, I think, justified by the notion that we mostly share the same values, so we are in fact talking about the same thing, but that’s an empirical issue.)
As an example, consider the statement “The sum of the triangle’s inner angles is 180 degrees”. Is this true?
Hopefully this will help demonstrate my position. I would say that, when being fully rigorous, it is a type error to ask whether a sentence is true. Logical propositions have a truth value, but sentences are just strings of symbols. To turn “The sum of the triangle’s inner angles is 180 degrees” into a logical proposition you need to know what is meant by “sum”, “triangle”, “inner angles”, “180”, “degrees” and indeed “is”.
As an example, if the sentence was uttered by Bob, and what he meant by “triangle” was a triangle in Euclidean space, and by “is” he meant “is always” (universally quantified), then what he said is factually (unconditionally) true. But if he uttered the same sentence in a language where “triangle” means a triangle in a hyperbolic space, or in a general space, then what he said would be unconditionally false. There’s no contradiction here because in each case he said a different thing.
Value systems are themselves part of reality, as people already have values.
In this context I define reality as existing outside of people’s minds. What exists solely within minds is not real.
Yes they are, but the same sentence can state different logical propositions depending on where, when and by whom it is uttered.
They can. But when a person utters a sentence, they generally intend to state the derelativized proposition indicated by the sentence in their language. When I say “P”, I don’t mean “‘P’ is a true sentence in all languages at all places”, I mean P(current context).
Which is why it’s useless to say “I have a different definition of ‘should’”, because the original speaker wasn’t talking about definitions, they were talking about whatever it is “should” actually refers to in their actual language.
(I actually thought of mentioning that the sky isn’t always blue in all situations, but decided not to.)
Well, if you should drink more because you’re dehydrated, then you’re right to say that not everyone is bound by that, but people in similar circumstances are (i.e. dehydrated, with no other reason not to drink). Or are you saying that there are ultimately personal shoulds?
Yes, of course there are.
‘Of course’ nothing, I find that answer totally shocking. Can you think of an example? Or can you explain how such shoulds are supposed to work?
So far as I understand it, for every ‘should’ there is some list of reasons why. If two people have the same lists of reasons, then whatever binds one binds them both. So there’s nothing personal about shoulds, except insofar as we rarely have all the same reasons to do or not do something.
Doesn’t take much to shock you :-)
Sure. Let’s say there is a particular physical place (say, a specific big boulder on the shore of a lake) where I, for some reason, feel unusually calm, serene, and happy. It probably triggers some childhood memories and associations. I like this place. I should spend more time there.
If two people have the same lists of reasons, then whatever binds one binds them both.
No two people are the same. Besides, the importance different people attach to the same reasons varies greatly.
And, of course, to bind another with your “should” requires you to know this other person very, very well. To a degree I would argue is unattainable.
I like this place. I should spend more time there.
So say this place also makes me feel calm, serene, and happy. It also triggers in me some childhood memories and associations. I like the place. I also have (like you) no reasons not to go there. Let’s say (however unlikely it might be) we have all the same reasons, and we weigh these reasons exactly the same. Nevertheless, it’s not the case that I should spend more time there. Have I just told you a coherent story?
And, of course, to bind another with your “should” requires you to know this other person very, very well. To a degree I would argue is unattainable.
So let’s say you’re very thirsty. Around you, there’s plenty of perfectly potable water. And let’s say I know you’re not trying to be thirsty for some reason, but that you’ve just come back from a run. I think I’m in a position to say that you should drink the water. I don’t need to know you very well to be sure of that. What am I getting wrong here?
That’s a rather crucial part. I am asserting not only that two people will not have the same reasons and weigh them exactly the same, but also that you can’t tell whether a person other than you has the same reasons and weighs them exactly the same.
You’re basically saying “let’s make an exact copy of you—would your personal “shoulds” apply to that exact copy?”
The answer is yes, but an exact copy of me does not exist and that’s why my personal shoulds don’t apply to other people.
I think I’m in a position to say that you should drink the water.
You can say, of course. But when I answer “no, I don’t think so”, is your “should” stronger than my “no”?
Ahh, okay, it looks like we are just misunderstanding one another. I originally asked you whether there are ultimately personal shoulds, and by this I meant shoulds that are binding on me but not on you for no reason other than that you and I are numerically different people.
But it seems to me your answer to this is in fact ‘no’, there are no such ultimately personal shoulds. All shoulds bind everyone subject to the reasons backing them up, it’s just that those reasons rarely (if ever) coincide.
You can say, of course. But when I answer “no, I don’t think so”, is your “should” stronger than my “no”?
Yes. You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
whether there are ultimately personal shoulds, and by this I meant shoulds that are binding on me but not on you for no reason other than that you and I are numerically different people.
What’s “numerically different”?
And what did you mean by “ultimately”, then? In reality all people are sufficiently different for my personal shoulds to apply only to me and not necessarily to anyone else. The set of other-than-me people to which my personal should must apply is empty. Is that insufficiently “ultimately”?
Yes. You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
I beg to disagree. Given that you have no idea about reasons that I might have for not drinking, I don’t see why your “should” is correct. Speaking of which, how do you define “correct” in this situation, anyway? What makes you think that the end goals you imagine are actually the end goals that I am pursuing?
I just mean something like ‘there are two of them, rather than one’. So they can have all the same (non-relational) properties, but not be the same thing because there are two of them.
The set of other-than-me people to which my personal should must apply is empty.
Well, that’s an empirical claim, for which we’d need some empirical evidence. It’s certainly possible that my personal ‘should’ could bind you too, since it’s possible (however unlikely) that we could be subject to exactly the same reasons in exactly the same way.
This is an important point, because it means that shoulds bind each and every person subject to the reasons that back them up. It may be true that people are subject to very different sets of reasons, such that in effect ‘shoulds’ only generally apply to one person. I think this empirical claim is false, but that’s a bit beside the point.
Given that you have no idea about reasons that I might have for not drinking
It’s part of the hypothetical that I do know the relevant reasons and your aims: you’re thirsty, there’s plenty of water, and you’re not trying to stay thirsty. Those are all the reasons (maybe the reality is never this simple, though I think it often is...again, that’s an empirical question). Knowing those, my ‘you should drink’ is absolutely binding on you.
I don’t need to define ‘correct’. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink. That’s all I mean by correct: that it’s true to say ‘if X, Y, Z, then you should drink’.
Well, that’s an empirical claim, for which we’d need some empirical evidence.
You really want evidence that there are no exact copies of me walking around..?
It’s certainly possible that my personal ‘should’ could bind you too
No, I don’t think it is possible. At this point it is fairly clear that we are not exact copies of each other :-D
it means that shoulds bind each and every person subject to the reasons that back them up
Nope, I don’t think so. You keep on asserting, basically, that if you find a good set of reasons why I should do X and I cannot refute these reasons, I must do X. That is not true. I can easily tell you to go jump into the lake and not do X.
It’s part of the hypothetical that I do know the relevant reasons and your aims
And another crucial part—no, you cannot know all of my relevant reasons and my aims. We are different people and you don’t have magical access to the machinations of my mind.
I don’t need to define ‘correct’. You agree, I take it, that the above listed reasons can in principle be sufficient to determine that one should drink.
Yes, you do need to define “correct”. The reasons may or may not be sufficient—you don’t know.
It does seem we have several very basic disagreements.
You really want evidence that there are no exact copies of me walking around..?
I deny the premise on which this is necessary: I think most people share the reasons for most of what they do most of the time. For example, when my friend and I come in from a run, we share reasons for drinking water. The ‘should’ that binds me, binds him equally. I think this is by far the most common state of affairs, the great complexity and variety of human psychology notwithstanding. The empirical question is whether our reasons for acting are in general very complicated or not.
It’s certainly possible that my personal ‘should’ could bind you too
No, I don’t think it is possible.
I think you do, since I’m sure you think it’s possible that we are (in the relevant ways) identical. Improbable, to be sure. But possible.
I think I would describe it as the two of you, being in similar situations, each formulating a personal “should” that happens to be pretty similar. But it’s his own “should” that binds him, not yours.
But I don’t suppose you would say this about answering a mathematical problem. If I conclude that six times three is eighteen, and you conclude similarly, isn’t it the case that we’ve done ‘the same problem’ and come to ‘the same answer’? Aren’t we each subject to the same reasons, in trying to solve the problem?
Or did each of us solve a personal math problem, and come to a personal answer that happens to be the same number?
Aren’t we each subject to the same reasons, in trying to solve the problem?
In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.
Same thing for testable statements about physical reality—disagreements (between rational people) can be solved by the usual scientific methods.
But preferences and values exist only inside minds and I’m asserting that each mind is unique. My preferences and values can be the same as yours but they don’t have to be. In contrast, the physical reality is the same for everyone.
Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/
In this particular case (math) we share the framework within which the problem is solved. The framework is unambiguous and assigns true or false values to particular answers.
I don’t see how that’s any different from most value judgements. All human beings have a basically common set of values, owing to our neurological and biological similarities. Granted, you probably can’t advise me on whether or not to go to grad school, or run for office, but you can advise me to wear my seat belt or drink water after a run. That doesn’t seem so different from math: math is also in our heads, it’s also a space of widespread agreement and some limited disagreement in the hard cases.
It may look like the Israelis and the Palestinians just don’t see eye to eye on practical matters, but remember how big the practical reasoning space is. Them truly not seeing eye to eye would be like the Palestinians demanding the end of settlements, and the Israelis demanding that Venus be bluer.
Moreover, once we start talking about binding shoulds we enter the territory of such concepts as identity, autonomy, and power. Gets really complicated really fast :-/
I don’t see why. There’s no reason to infer from the fact that a ‘should’ binds someone that you can force them to obey it.
Now, as to why it’s a problem if your reasons for acting aren’t sufficient to determine a ‘should’. Suppose you hold that A, and that if A then B. You conclude from this that B. I also hold that A, and that if A then B. But I don’t conclude that B. I say “Your conclusion doesn’t bind me.” B, I say, is ‘true for you’, but not ‘true for me’. I explain that reasoning is personal, and that just because you draw a conclusion doesn’t mean anyone else has to.
If I’m right, however, it doesn’t look like ‘A, if A then B’ is sufficient to conclude B for either of us, since B doesn’t necessarily follow from these two premises. Some further thing is needed. What could this be? It can’t be another premise (like, ‘If you believe that A and that if A then B, conclude that B’) because that just reproduces the problem. I’m not sure what you’d like to suggest here, but I worry that so long as, in general, reasons aren’t sufficient to determine practical conclusions (our ‘shoulds’) then nothing could be. Acting would be basically irrational, in that you could never have a sufficient reason for what you do.
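(A minimal Lean sketch of the inference pattern at issue, just to make the shape of the point explicit; nothing in the derivation refers to which reasoner holds the premises:)

```lean
-- Modus ponens: given A, and A → B, the conclusion B follows.
-- The derivation makes no reference to whose premises these are.
example (A B : Prop) (hA : A) (hAB : A → B) : B :=
  hAB hA
```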
All human beings have a basically common set of values
Nope. There is a common core and there is a lot of various non-core stuff. The non-core values can be wildly different.
but you can advise me to wear my seat belt or drink water after a run
We’re back to the same point: you can advise me, but if I say “no”, is your advice stronger than my “no”? You think it is, I think not.
I worry that so long as, in general, reasons aren’t sufficient to determine practical conclusions (our ‘shoulds’) then nothing could be.
The distinction between yourself and others is relevant here. You can easily determine whether a particular set of reasons is sufficient for you to act. However you can only guess whether the same set of reasons is sufficient for another to act. That’s why self-shoulds work perfectly fine, but other-shoulds have only a probability of working. Sometimes this probability is low, sometimes it’s high, but there’s no guarantee.
We’re back to the same point: you can advise me, but if I say “no”, is your advice stronger than my “no”? You think it is, I think not.
What do you mean by ‘stronger’? I think we all have free will: it’s impossible, metaphysically, for me to force you to do anything. You always have a choice. But that doesn’t mean I can’t point out your obligations or advantage with more persuasive or rational force than you can deny them. It may be that you’re so complicated an agent that I couldn’t get a grip on what reasons are relevant to you (again, empirical question), but if, hypothetically speaking, I do have as good a grip on your reasons as you do, and if it follows from the reasons to which you are subject that you should do X, and you think you should do ~X, then I’m right and you’re wrong and you should do X.
But I cannot, morally speaking, coerce or threaten you into doing X. I cannot, metaphysically speaking, force you to do X. If that is what you mean by ‘stronger’, then we agree.
My point is, you seem to be picking out a quantitative point: the degree of complexity is so great, that we cannot be subject to a common ‘should’. Maybe! But the evidence seems to me not to support that quantitative claim.
But aside from the quantitative claim, there’s a different, orthogonal, qualitative claim: if we are subject to the same reasons, we are subject to the same ‘should’. Setting aside the question of how complex our values and preferences are, do you agree with this claim? Remember, you might want to deny the antecedent of this conditional, but that doesn’t entail that the conditional is false.
In the same sense we talked about it in the {grand}parent post. You said:
You’re wrong that you shouldn’t drink. The only should on the table is my correct one. Your ‘no’ has no strength at all.
...to continue
the degree of complexity is so great, that we cannot be subject to a common ‘should’.
We may. But there is no guarantee that we would.
if we are subject to the same reasons, we are subject to the same ‘should’. Setting aside the question of how complex our values and preferences are, do you agree with this claim?
We have to be careful here. I understand “reasons” as, more or less, networks of causes and consequences. “Reasons” tell you what you should do to achieve something. But they don’t tell you what to achieve, nor how to weigh the different sides when they conflict; that’s the job of values and preferences.
Given this, no, the same reasons don’t give rise to the same “should”s, because you need the same values and preferences as well.
So we have to figure out what a reason is. I took ‘reasons’ to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative. So, the reasoning behind an action might look something like this:
1) I want an apple.
2) The store sells apples, for a price I’m willing to pay.
3) It’s not too much trouble to get there.
4) I have no other reason not to go get some apples.
C) I should get some apples from the store.
My claim is just that (C) follows and is true of everyone for whom (1)-(4) is true. If (1)-(4) is true of you, but you reject (C), then you’re wrong to do so. Just as anyone would be wrong to accept ‘If P then Q’ and ‘P’ but reject the conclusion ‘Q’.
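(A minimal Lean sketch of the shape of this claim, with made-up predicate names standing in for (1)-(4) and (C); the only point it illustrates is that, read as a universally quantified conditional, the conclusion follows for whichever individual the premises happen to hold of:)

```lean
-- Illustrative only: the predicate names are placeholders for premises (1)-(4) and conclusion (C).
variable {Person : Type}
variable (WantsApple SellsAtOkPrice NotTooMuchTrouble NoReasonNotTo ShouldGetApples : Person → Prop)

-- The claim, read as a universally quantified conditional:
-- anyone of whom (1)-(4) are true is someone of whom (C) is true.
example
    (claim : ∀ p : Person,
      WantsApple p → SellsAtOkPrice p → NotTooMuchTrouble p → NoReasonNotTo p →
        ShouldGetApples p)
    (p : Person)
    (h1 : WantsApple p) (h2 : SellsAtOkPrice p)
    (h3 : NotTooMuchTrouble p) (h4 : NoReasonNotTo p) :
    ShouldGetApples p :=
  claim p h1 h2 h3 h4
```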
I took ‘reasons’ to be everything necessary and sufficient to conclude in a hypothetical or categorical imperative.
That’s circular reasoning: if you define reasons as “everything necessary and sufficient”, well, of course, if they don’t conclude in an imperative they are not sufficient and so are not proper reasons :-/
In your example (4) is the weak spot. You’re making a remarkably wide and strong claim—one common in logical exercises but impossible to make in reality. There are always reasons pro and con, and it all depends on how you weigh them.
Consider any objection to your conclusion (C) (e.g. “Eh, I feel lazy now”): any objection falls under (4), and so you can say that it doesn’t apply. And we’re back to the circle...
Not if I have independent reason to think that ‘everything necessary and sufficient to conclude an imperative’ is a reason, which I think I do.
In your example (4) is the weak spot. You’re making a remarkably wide and strong claim—one common in logical exercises but impossible to make in reality.
To be absolutely clear: the above is an empirical claim. Something for which we need evidence on the table. I’m indifferent to this claim, and it has no bearing on my point.
My point is just this conditional: IF (1)-(4) are true of any individual, that individual cannot rationally reject (C).
You might object to the antecedent (on the grounds that (4) is not a claim we can make in practice), but that’s different from objecting to the conditional. If you don’t object to the conditional, then I don’t think we have any disagreement, except the empirical one. And on that score, I find your view very implausible, and neither of us is prepared to argue about it. So we can drop the empirical point.
That fails to include weighing of that against other considerations. If you’re thirsty, there’s plenty of water, and you’re not trying to stay thirsty, you “should drink water” only if the other considerations don’t mean that drinking water is a bad idea despite the fact that it would quench your thirst. And in order to know that someone’s other considerations don’t outweigh the benefit of drinking water, you need to know so much about the other person that that situation is pretty much never going to happen with any nontrivial “should”.
That fails to include weighing of that against other considerations.
By hypothesis, there are no other significant considerations. I think most of the time, people’s rational considerations are about as simple as my hypothetical makes them out to be. Lumifer thinks they’re generally much more complicated. That’s an empirical debate that we probably can’t settle.
But there’s also the question of whether or not ‘shoulds’ can be ultimately personal. Suppose two lotteries. The first is won when your name is drawn out of a hat. Only one name is drawn, and so there’s only one possible winner. That’s a ‘personal’ lottery. Now take an impersonal lottery, where you win if your chosen 20 digit number matches the one drawn by the lottery moderators. Supposing you win, it’s just because your number matched theirs. Anyone whose number matched theirs would win, but it’s very unlikely that there will be more than one winner (or even one).
I’m saying that, leaving the empirical question aside, ‘shoulds’ bind us in the manner of an impersonal lottery. If we have a certain set of reasons, then they bind us, and they equally bind everyone who has that set of reasons (or something equivalent).
Lumifer is saying (I think) that ‘shoulds’ bind us in the manner of the personal lottery. They apply to each of us personally, though it’s possible that by coincidence two different shoulds have the same content and so it might look like one should binds two people.
A consequence of Lumifer’s view, it seems to me, is that a given set of reasons (where reasons are things that can apply equally to many individuals) is never sufficient to determine how we should act. This seems to me to be a very serious problem for the view.
Correct, I would agree to that.
Why so?