What about that thing where you can’t derive an “ought” from an “is”? Just from the standpoint of pure logic, we can’t derive anything about morality from axioms that don’t mention morality. If you want to derive your morality from the existence of God, you still need to add an axiom: “that which God says is moral is moral”. On the other end of things, an atheist could still agree with a theist on all moral statements, despite not believing in God. Suppose that God says “A, B, C are moral, and X, Y, Z are immoral”. Then an atheist working from the axioms “A, B, C are moral, and X, Y, Z are immoral” would believe the same things as a theist about what is moral, despite not believing in God.
Similarly, Darwin’s theory of evolution is just a claim about how the various kinds of living things we see today arose on Earth. Forget about God and religion: it would be really weird if believing in this funny idea about how complexity and seeming goal-directedness can arise from a competition between imperfect copies somehow made you into an evil person.
Indeed, claiming that atheism or evolution is what led to Nazi atrocities almost feels to me like giving too much slack to the Nazis and their collaborators. Millions of people are atheists, or believe in evolution, or both, and they don’t end up committing murder, let alone genocide. Maybe we should just hold people responsible for their actions, and not treat them as automatons being piloted by memes?
As another example, imagine we’re trying to prevent a similar genocide from happening in the future (which we are, in fact). Which strategy would be more effective?
Option 1: Encourage belief in religion and discourage belief in evolution. Pass a law making church attendance mandatory, teach religion in schools. Hide the fossil record, and lock biology papers behind a firewall so that only medical doctors and biologists can see them. Prevent evolution from being taught in science classes, in favour of creationism.
Option 2: Teach the history of the Holocaust in schools, along with other genocides. In those lessons, emphasize how genocide is a terrible, very bad thing to do, and point out how ordinary people often go along with genocide, slavery, and other horrifying things if they’re not paying a lot of attention and being careful not to do that. From a legal perspective, put protections against authoritarianism in the constitution (e.g. no arresting people for speaking out against the government).
Seems to me like option 2 would be much more effective, though from trying to pass your intellectual Turing test, I’d guess you’d maybe endorse doing both? (Though with option 1 softened to promote religion more through gradual cultural change than heavy-handed legal measures.)
Tangentially, FWIW: Among the ought/is counterarguments that I’ve heard (I first encountered it in Alasdair MacIntyre’s stuff) is that some “is”s have “ought”s wrapped up in them from the get-go. The way we divide reality up into its various “is” packages may or may not include function, purpose, etc. in any particular package, but that’s in part a linguistic, cultural, fashionable, etc. decision.
For example: that is a clock, it ought to tell the correct time, because that is what clocks are all about. That it is a clock implies what it ought to do.
MacIntyre’s position, more-or-less, is that the modern philosophical position that you can’t get oughts from izzes in the human moral realm is the result of a catastrophe in which we lost sight of what people are for, in the same way that if we forgot what clocks did and just saw them as bizarre artifacts, we’d think they were just as suitable as objets d’art, paperweights, or items for bludgeoning fish with, as anything else, and it wouldn’t matter which ways the hands were pointing.
Now you might say that adding an ought to an is by definition like this (as with the clock) is a sort of artificial, additional, undeclared axiom. But you might consider what removing all the oughts from things like clocks would do to your language and conceptual arsenal. Removing the “ought” from people was a decision, not a conclusion. Philosophers performed a painstaking oughtectomy on the concept of a person and then acted surprised when the ought refused to just regrow itself like a planarian head.
From a language perspective, I agree that it’s great not to worry about the is/ought distinction when discussing anything other than meta-ethics. It’s kind of like how we talk about evolved adaptations as being “meant” to solve a particular problem, even though there was really no intention involved in the process. It’s just such a convenient way of speaking, so everyone does it.
I guess I’d say that despite this, the is/ought distinction remains useful in some contexts. Like if someone says “we get morality from X, so you have to believe X or you won’t be moral”, it gives you a shortcut to realizing “nah, even if I think X is false, I can continue to not do bad things”.
What about that thing where you can’t derive an “ought” from an “is”? Just from the standpoint of pure logic, we can’t derive anything about morality from axioms that don’t mention morality. If you want to derive your morality from the existence of God, you still need to add an axiom: “that which God says is moral is moral”.
The hypothesis that we can’t derive an ought from an is is not a proven theorem. In fact, it is easy to prove the opposite—we can derive an ought from only purely descriptive statements. Here is how we can do it:
1. John says that I ought to clean my room.
2. John always speaks the truth (i.e. never lies and is never mistaken).
Therefore, I ought to clean my room.
Justifying the two premises is of course another matter, but the argument is logically valid and is not circular or anything like that.
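For what it’s worth, the validity of the syllogism is easy to see if premise 2 is read as an inference rule. Here is a toy Python sketch (all names are hypothetical, and facts are just strings) that applies premise 2 mechanically to whatever John has said:

```python
# Toy sketch: premise 1 as a string-encoded fact, premise 2 as a rule
# that strips the says(John, "...") wrapper off anything John asserts.

facts = {'says(John, "ought(clean_room)")'}

def john_is_truthful(facts):
    """Premise 2 as an inference rule: whatever John says, holds."""
    derived = set(facts)
    for f in facts:
        if f.startswith('says(John, "') and f.endswith('")'):
            derived.add(f[len('says(John, "'):-2])  # unwrap the quoted claim
    return derived

conclusion = john_is_truthful(facts)
print('ought(clean_room)' in conclusion)  # True: the ought pops out
```

The sketch only shows the shape of the argument; the interesting question (raised below) is what axioms this unwrapping step secretly relies on.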
Yeah, it definitely depends on how you formalize the logic, which I didn’t do in my comment above. I think there are some hidden issues with your proposed disproof, though. For example, how do we formalize premise 2? If we’re representing John’s utterances as strings of symbols, then one obvious method would be to write down something like: ∀ s:String, says(John, s) ⇒ true(s). This seems like a good way of doing things that doesn’t mention the ought predicate. Unfortunately, it does require the true predicate, which is meaningless until we have a way of enforcing that, for any statement S, S ⇔ true(QUOT[S]). We can do this with an axiom schema: SCHEMA[S:Statement], S ⇔ true(QUOT[S]). Unfortunately, if we want to be able to do the reasoning chain says(John, QUOT[ought(X)]), therefore true(QUOT[ought(X)]), therefore ought(X), we find that we used the axiom true(QUOT[ought(X)]) ⇔ ought(X) from the schema. So in order to derive ought(X), we still had to use an axiom with “ought” in it.
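To make the bookkeeping concrete, here is a toy Python sketch of that derivation chain, with QUOT modelled by Python’s repr (so quoted statements are just string literals). It records which axioms get used at each step, and the T-schema instance it needs does mention “ought”:

```python
# Hypothetical sketch: walk the derivation chain by hand and record
# which axioms get used, to see that an "ought"-mentioning schema
# instance is unavoidable.

def QUOT(s):
    """Model QUOT[...] as Python repr: a quoted copy of the statement."""
    return repr(s)

target = "ought(X)"
used_axioms = []

# Step 1: premise -- John said the quoted ought-statement.
fact = f"says(John, {QUOT(target)})"

# Step 2: apply the truthfulness axiom (no "ought" in it).
used_axioms.append("forall s: says(John, s) => true(s)")
fact = f"true({QUOT(target)})"

# Step 3: apply the T-schema instance for this particular statement:
# true(QUOT[ought(X)]) <=> ought(X).  Note that it mentions "ought".
schema_instance = f"true({QUOT(target)}) <=> {target}"
used_axioms.append(schema_instance)
fact = target

print(fact)                                    # ought(X)
print(any("ought" in a for a in used_axioms))  # True: an ought-axiom was used
```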
I expect it’s possible to write a proof that “you can’t derive an ought from an is”, assuming we’re reasoning in first-order logic, with ought being a predicate in the logic. But it might be a little nontrivial from a technical perspective, since while we couldn’t derive ought(X) from oughtless axioms, we could certainly derive things like ought(X) ∨ ¬ought(X) from the law of excluded middle, and then there would be many complications you could build up.
“we find out that we used the axiom true(QUOT[ought(X)]) ⇔ ought(X) from the schema. So in order to derive ought(X), we still had to use an axiom with “ought” in it.”
But that “axiom”, as you call it, is trivially true, as it follows from any sensible definition or understanding of “true”. In particular, it follows from the axiom “true(QUOT[X]) ⇔ X”, which doesn’t have an ought in it.
Moreover, we don’t even need the true predicate in this argument (we can formulate it in the spirit of the deflationary theory of truth):
2′. Whenever John says that X, then X. ( ∀ s:proposition, says(John, s) ⇒ s )
I think the issue boils down to one of types, and not being able to have a “Statement” type in the theory. This is why we have QUOT[X] to convert a statement X into a string. QUOT is not a function, really; it’s a macro that converts a statement into a string representation of that statement. true(QUOT[X]) ⇔ X isn’t an axiom, it’s an infinite sequence of axioms (a “schema”), one for each possible statement X. It’s considered okay to have an infinite sequence of axioms, so long as you know how to compute that sequence. We can enumerate through all possible statements X, and we know how to convert any of those statements into a string using QUOT, so that’s all okay. But we can’t boil that infinite axiom schema down into a single axiom ∀ S:Statement, true(quot(S)) ⇒ S, because we don’t have a Statement type inside of the system.
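The “computable sequence of axioms” idea can be sketched in a few lines of Python. The statement list here is a hypothetical stand-in for a real enumeration of all statements, and QUOT is again modelled by repr:

```python
# Sketch: an axiom schema as a computable stream of axioms, one
# instance per statement.  Each instance is generated mechanically,
# so there are infinitely many axioms but a finite recipe.

def t_schema_instances(statements):
    """Yield the T-schema instance for each statement in turn."""
    for s in statements:
        yield f"{s} <=> true({s!r})"  # {s!r} plays the role of QUOT[s]

stmts = ["ought(X)", "raining", "ought(X) or not ought(X)"]
for ax in t_schema_instances(stmts):
    print(ax)
```

The point of the sketch: the schema is a generator over statements, not a single quantified axiom, because the language has no Statement type to quantify over.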
Why can’t we have a Statement type? Well, we could if they were just constants that took on values of “true” or “false”. But I think what you want to do here is treat statements as both sequences of symbols and as things that can directly be true or false. Then the reasoning system would have ways of combining the sequences of symbols, and axioms that map to rules of inference on those symbols.
Imagine what would happen if we did have all those things. I’ll define a notation for a statement literal as state(s), where s is the string of symbols that make up the statement. So state() is kind of an inverse of QUOT[], except that it’s a proper function, not a macro. Since not all strings form valid statements, we’ll take state(s) to return some default statement like false when s is not valid.
Here is the paradox. We could construct the statement: ∀ S:Statement, ∀ fmtstr:String, (fmtstr = "..." ⇒ (S = state(replace(fmtstr, "%s", repr(fmtstr))) ⇒ ¬S)), where the "..." is "∀ S:Statement, ∀ fmtstr:String, (fmtstr = %s ⇒ (S = state(replace(fmtstr, \"\%s\", repr(fmtstr))) ⇒ ¬S))". So written out in full, the statement would be: ∀ S:Statement, ∀ fmtstr:String, (fmtstr = "∀ S:Statement, ∀ fmtstr:String, (fmtstr = %s ⇒ (S = state(replace(fmtstr, \"\%s\", repr(fmtstr))) ⇒ ¬S))" ⇒ (S = state(replace(fmtstr, "%s", repr(fmtstr))) ⇒ ¬S))
Now consider the statement itself as S in the quantifier, and suppose that fmtstr is indeed equal to "...". Then S = state(replace(fmtstr, "%s", repr(fmtstr))) is true, so we have ¬S. On the other hand, if S or fmtstr take other values, then the conditional implications become vacuously true. So S reduces down entirely to ¬S. This is a contradiction. Not the friendly quine-based paradox of Gödel’s incompleteness theorem, which merely asserts its own unprovability, but an actual logic-exploding contradiction.
Therefore we can’t allow a Statement type in our logic.
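Incidentally, the self-reference trick in the construction above, replace(fmtstr, "%s", repr(fmtstr)), runs literally in Python. This toy example (with a simplified, hypothetical template) builds a string that contains a quoted copy of its own template:

```python
# Toy demonstration of the quine-style substitution used above:
# substituting the template's own repr for the placeholder yields a
# string that quotes the very template it was built from.

fmtstr = 'the statement obtained by substituting %s for the placeholder asserts its own negation'
statement = fmtstr.replace("%s", repr(fmtstr))

print(repr(fmtstr) in statement)  # True: the result quotes its own template
```

This is the same mechanism that lets the full construction talk about itself: the quoted template, once substituted in, reproduces the whole statement.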
Oh, and one more thing. My updated premise 2 is:
2′. Whenever John says that X, then X. ( ∀ X:proposition, says(John, X) ⇒ X )
Note that X here is not a statement (a grammatically valid sentence), but a proposition. John can express it however he likes: by means of the written word, by means of a demonstration or example, by means of telepathy, etc. There is no need, specifically, to convert a proposition to a string or vice versa; as long as (1) is true and we most likely understand what proposition John is trying to convey, we will most likely believe the correct normative proposition (which, if expressed as a statement, requires an “ought”).
Ugh, you are using the language of programming in an area where it doesn’t fit. Can you explain what these funny backslashes, % signs, etc. are? Why did you name a variable fmtstr instead of simply X?
Anyway—statements obviously exist, so if your theory doesn’t allow for them, that’s a problem with your theory and we can just ignore it. In my theory, for every sentence that corresponds to a proposition (not all do, of course): if that sentence is uttered by John, then that proposition is true—that’s what I mean by John being truthful. There is no additional axiom here; this is just premise 2, rephrased.
Just to give you some (very late) clarification: The theory I describe above (a first order theory) can handle statements perfectly well, it just represents them as strings, rather than giving them their own separate type. The problem isn’t inherently with giving them their own separate type though, it’s with expecting to be able to just stick a member of that type in our expression where we’re supposed to expect a truth value.
You can skip past my proof and its messy programming notation, and just look here.