Every act of lying is morally prohibited / This act would be a lie // This act is morally prohibited.
That also applies to literary criticism: Wulky Wilkinsen shows colonial alienation / Authors who show colonial alienation are post-utopians // Wulky Wilkinsen is a post-utopian.
Every act of lying is morally prohibited / This act would be a lie // This act is morally prohibited.
So here I have a bit of moral reasoning, the conclusion of which follows from the premises. The argument is valid, so if the premises are true, the conclusion can be considered proven. So given that I can give you valid proofs for moral conclusions, in what way is morality not logical?
doesn’t have any of the nice properties that a well-constructed system of logic would have, for example, consistency, validity, soundness...
The above example of moral reasoning (assume for the sake of simplicity that this is my entire moral system) is consistent, and valid, and (if you accept the premises) sound. Anyone who accepts the premises must accept the conclusion. One might waver on acceptance of the premises (this is true for every subject) but the conclusion follows from them regardless of what one’s mood is.
All that said, our moral reasoning is often fraught. But I don’t think that makes morality peculiar. The mistakes we often make with regard to moral reasoning don’t seem to be different in kind from the mistakes we make in, say, economics. Ethics, they say, is not an exact science.
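For what it’s worth, the validity claim can be made completely concrete. Here is a minimal formalization of the syllogism above (the predicate and variable names are my own, purely illustrative); the point is just that the conclusion follows from the premises by form alone:

```lean
-- A sketch: "every lie is prohibited" plus "this act is a lie"
-- entails "this act is prohibited", by universal instantiation and modus ponens.
example {Act : Type} {Lie Prohibited : Act → Prop}
    (h1 : ∀ a, Lie a → Prohibited a)  -- Every act of lying is morally prohibited
    (thisAct : Act)
    (h2 : Lie thisAct) :              -- This act would be a lie
    Prohibited thisAct :=             -- This act is morally prohibited
  h1 thisAct h2
```

Nothing here depends on the premises being true, only on their form; that is the sense in which the argument is valid.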
I should have given some examples of the kind of moral reasoning I’m referring to.
http://lesswrong.com/lw/n3/circular_altruism/
http://lesswrong.com/lw/1r9/shut_up_and_divide/
1st link is ambiguity aversion.
Morality is commonly taken to describe what one will actually do when trading off private gains against other people’s losses. See this as an example of moral judgement. Suppose Roberts is smarter. He will quickly see that he can donate 10% to charity, and it will take him longer to reason about the value of the cash that was not given to him (reasoning that may stop him from pressing the button), so there will be a transient during which he pushes the button, unless he somehow suppresses actions during transients. It’s an open-ended problem ‘unlike logic’ because consequences are difficult to evaluate.
edit: been in a hurry.
Ah, thank you, that is helpful.
In the case of ‘circular altruism’, I confess I’m quite at a loss. I’ve never really managed to pull an argument out of there. But if we’re just talking about the practice of quantifying goods in moral judgements, then I agree with you there’s no strongly complete ethical calculus that’s going to render ethics a mathematical science. But at least in ‘circular altruism’ EY doesn’t need quite so strong a view: so far as I can tell, he’s just saying that our moral passions conflict with our reflective moral judgements. And even if we don’t have a strongly complete moral system, we can make logically coherent reflective moral judgements. I’d go so far as to say we can make logically coherent reflective literary criticism judgements. Logic isn’t picky.
So while, on the one hand, I’m also (as yet) unconvinced about EY’s ethics, I think it goes too far in the opposite direction to say that ethical reasoning is inherently fuzzy or illogical. Valid arguments are valid arguments, regardless.
The problem is that when the conclusion is “proven wrong” (i.e. “my gut tells me that it’s better to lie to an Al Qaeda prison guard than to tell him the launch codes for America’s nuclear weapons”), then the premises that you started with are wrong.
So if I’m understanding Wei_Lai’s point, it’s that the name of the game is to find a premise that cannot and will not be contradicted by other moral premises via a bizarre hypothetical situation.
I believe that Sam Harris has already mastered this thought experiment. Paraphrased from his debate with William Lane Craig:
“There exists a hypothetical universe in which there is the absolute most amount of suffering possible. Actions that move us away from that universe are considered good; actions that move us towards that universe are considered bad”.
This is why I find Harris frustrating. He’s stating something pretty much everyone agrees with, but they all make different substitutions for the variable “suffering.” And then Harris is vague about what he personally plugs in.
At least as paraphrased here, the definition of “move towards” is very unclear. Is it a universe with more suffering? A universe with more suffering right now? A universe with more net present suffering, according to some discount rate? What if I move to a universe with more suffering both right now and for all possible future discount rates, assuming no further action, but for which future actions that greatly reduce suffering are made easier? (In other words, does this system get stuck in local optima?)
I think there is much that this approach fails to solve, even if we all agree on how to measure suffering.
(Included in “how to measure suffering” is a bit of complicated stuff like average vs total utilitarianism, and how to handle existential risks, and how to do probability math on outcomes that produce a likelihood of suffering.)
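To make the local-optimum worry above concrete, here is a toy sketch (the five-state “suffering landscape” and its numbers are invented for illustration, not taken from Harris or from anyone in this thread): a rule that only ever moves to an adjacent world-state with strictly less immediate suffering can stall well short of the best reachable state.

```python
# Toy illustration of the local-optimum worry: greedily "moving away from
# suffering" one step at a time can get stuck, even though a much better
# state is reachable via a temporarily worse intermediate state.

suffering = [9, 4, 6, 1, 0]  # made-up suffering levels for five world-states

def greedy_step(state: int) -> int:
    """Move to a neighbouring state only if it has strictly less suffering."""
    neighbours = [s for s in (state - 1, state + 1) if 0 <= s < len(suffering)]
    best = min(neighbours, key=lambda s: suffering[s])
    return best if suffering[best] < suffering[state] else state

state = 0
while (nxt := greedy_step(state)) != state:
    state = nxt

print(state, suffering[state])  # stops at state 1 (suffering 4); state 4 (suffering 0) is never reached
```

Whether a real agent faces this problem depends, of course, on how “move towards/away” is defined, which is exactly the ambiguity raised above.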
I hope so! It would be terribly awkward to find ourselves with true premises, valid reasoning, and a false conclusion. But unless by ‘gut feeling’ you mean a valid argument with true premises, gut feelings can’t prove anything wrong.
Perhaps, though that wouldn’t speak to whether or not morality is logical. If Wei Dai’s point is that morality is, at best, axiomatic, then sure. But so is Peano arithmetic, and that’s as logical as can be.
I just stumbled into this discussion after reading an article about why mathematicians and scientists dislike traditional, Socratic philosophy, and my mindset is fresh off that article.
It was a fantastic read, but the underlying theme that I feel is relevant to this discussion is this:
Socratic philosophy treats logical axioms as “self-evident truths” (i.e. I think, therefore I am).
Mathematics treats logical axioms as “propositions”, and uses logic to see where those propositions lead (e.g. if you have a line and a point not on it, the number of lines you can draw through the point parallel to the original line determines what type of geometry you are working with (hyperbolic, spherical, or flat-plane geometry)).
Scientists treat logical axioms as “hypotheses”, and logical “conclusions” as testable statements that can determine whether those axioms are true or not (i.e. if this weird system known as “quantum mechanics” were true, then we would see an interference pattern when shooting electrons through a screen with 2 slits).
So I guess the point that we should be making is this: which philosophical approach towards logic should we take to study ethics? I believe Wei_Lai would say that the first approach, treating ethical axioms as “self-evident truths” is problematic due to the fact that a lot of hypothetical situations (like my example before) can create a lot of contradictions between various ethical axioms (i.e. choosing between telling a lie and letting terrorists blow up the planet).
I read the article. It’s interesting (I liked the thing about pegs and strings), but I don’t think the guy (nor you) has read a lot of actual Greek philosophy. I don’t mean that as an attack (why would you want to, after all?), but it makes some of his, and your, claims a little strange.
Socrates, in the Platonic dialogues, is unwilling to take the law of non-contradiction as an axiom. There just aren’t any axioms in Socratic philosophy, just discussions. No proofs, just conversations. Plato (and certainly not Socrates) doesn’t have doctrines, and Plato is totally and intentionally merciless with people who try to find Platonic doctrines.
Also, Plato and Socrates predate, for most purposes, logic.
Right, Aristotle largely invented (or discovered) that trick. Aristotle’s logic is consistent and strongly complete (i.e. it’s not axiomatic, and relies on no external logical concepts). Euclid picked up on it, and produced a complete and consistent mathematics. So (some) Greek philosophy certainly shares this idea with modern mathematics.
I don’t think scientists treat logical axioms as hypotheses. Logical axioms aren’t empirical claims, and aren’t really subject to testing. But Aristotle’s work on biology, meteorology, etc. forwards plenty of empirical hypotheses, along with empirical evidence for them. Textual evidence suggests Aristotle performed lots of experiments, mostly in the form of vivisection of animals. He was wrong about pretty much everything, but his method was empirical.
This is to say nothing of contemporary philosophy, which certainly doesn’t take very much as ‘self-evident truth’. I can assure you, no one gets anywhere with that phrase anymore, in any study.
Not if those ethical axioms actually are self-evident truths. Then hypothetical situations (no matter how uncomfortable they make us) can’t disrupt them. But we might, on the basis of these situations, conclude that we don’t have any self-evident moral axioms. But, as you neatly argue, we don’t have any self-evident mathematical axioms either.
Thanks for taking the time to read and respond to the article, and for the critique; you are correct in that I am not well-versed in Greek philosophy. With that being said, allow me to try to expand my framework to explain what I’m trying to get at:
Scientists, unlike mathematicians, don’t always frame their arguments in terms of pure logic (i.e. If A and B, then C). However, I believe that the work that comes from them can be treated as logical statements.
Example: “I think that heat is transferred between two objects via some sort of matter that I will call ‘phlogiston’. If my hypothesis is true, then an object will lose mass as it cools down.” 10 days later: “I have weighed an object when it was hot, and I weighed it when it was cold. The object did not lose any mass. Therefore, my hypothesis is wrong”.
In logical terms: Let’s call the Theory of Phlogiston “A”, and let’s call the act of measuring a loss of mass with a loss of heat “C”.
If A, then C.
Physical evidence is obtained: not C.
If not C, then not A; therefore, not A.
Essentially, the scientific method involves the creation of a hypothesis “A”, and a logical consequence of that hypothesis, “If A then C”. Then physical evidence is presented in favor of, or against “C”. If C is disproven, then A is disproven.
This is what I mean when I say that hypotheses are “axioms”, and physical experiments are “conclusions”.
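The schema above is just modus tollens. A minimal formalization (A and C are the phlogiston example’s labels; the encoding is mine) shows that the “hypothesis as axiom, experiment as conclusion” move is ordinary deductive logic:

```lean
-- A : the phlogiston hypothesis, C : "a cooling object loses mass".
-- From "if A then C" and the experimental result "not C", conclude "not A".
example {A C : Prop} (h1 : A → C) (h2 : ¬C) : ¬A :=
  fun ha => h2 (h1 ha)
```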
In response to this statement:
“No proofs, just conversations”. In the framework that I’m working in, every single statement is either a premise or a conclusion. In addition, every single statement is either a “truth” (that we are to believe immediately), a “proposition” (that we are to entertain the logical implications of), or part of a “hypothesis/implication” pair (that we are supposed to believe with a level of skepticism until an experiment verifies it or disproves it). I believe that every single statement that has ever been made in any field of study falls into one of those 3 categories, and I’m saying that we need to discuss which of those categories statements in the field of ethics should be placed in.
In the field of philosophy, from my limited knowledge, I think that these discussions lead to conclusions that we need to believe as “truth”, whether or not they are supported by evidence (i.e. John Rawls’s “Original Position”).
I see. You’re right that philosophers pretty much never do anything like that. Except experimental philosophers, but thus far most of that stuff is just terrible.
“In the framework that I’m working in...”
That’s a good framework with which to approach any philosophical text, including and especially the Platonic dialogues. I just wanted to stress the fact that the dialogues aren’t treatises presented in a funny way. You’re supposed to argue with Socrates, against him, yell at his interlocutors, try to patch up the arguments with premises of your own. It’s very different from, say, Aristotle or Kant or whatever, where it’s a guy presenting a theory.
Would you mind if I go on for a bit? I have thoughts on this, but I don’t quite know how to present them briefly. Anyway:
Students of physics should go into a physics classroom or book with an open mind. They should be ready to learn new things about the world, often surprising things (relative to their naive impressions), and should often try to check their prejudices at the door. None of us are born knowing physics. It’s something we have to go out and learn.
Philosophy isn’t like that. The right attitude walking into a philosophy classroom is irritation. It is an inherently annoying subject, and its practitioners are even worse. You can’t learn philosophy, and you can’t become an expert at it. You can’t even become good at it. Being a philosopher is no accomplishment whatsoever. You can just do philosophy, and anyone can do it. Intelligence is good, but it can be a hindrance too, same with education.
Doing philosophy means asking questions about things to which you really ought to already know the answers, like the difference between right and wrong, whether or not you’re in control of your actions, what change is, what existing is, etc. Philosophy is about asking questions to which we ought to have the answers, but don’t.
We do philosophy by talking to each other. If that means running an experiment, good. If that means just arguing, fine. There’s no method, no standards, and no body of knowledge, unless you say there is, and then convince someone, and then there is until someone convinces you otherwise.
Scientists and mathematicians don’t hate philosophy. They tend to love philosophers, or at least the older ones do. Young scientists and mathematicians do hate philosophers, and with good reason: part of being a young scientist or mathematician is developing a refined mental self-discipline, and that means turning your back on any froo-froo hand wavy BS and getting down to work. Philosophy is the most hateful thing in the world when you’re trying to be wrong as little as possible. But once that discipline is in place, and people are confident in their ability to sort out good arguments from bad ones, facts from speculation, philosophy starts to look like fun.
The second part of your post is terrific. :)
But there is a mini-premise, inference and mini-conclusion inside every “hypothesis-implication pair”.
I’m curious as to why you referenced Rawls’s work in this context. It’s not apparent to me how Justice as Fairness is relevant here.
I referenced him because I recall that he comes to a very strong conclusion: that a moral society should have agreed-upon laws based on the premise of the “original position”. He was the first philosopher that came to mind when I was trying to think of examples of a hard statement that is neither a “proposition” to be explored, nor the conclusion from an observable fact.
I mean, I’m pretty sure his conclusion is a “proposition.” It has premises, and I could construct it logically if you wanted.
In fact, I don’t understand his position to be “that a moral society should have agreed-upon laws” at all, but rather his use of the original position is an attempt to isolate and discover the principles of distributive justice, and that’s really his bottom line.
Interesting piece. I was a bit bemused by this, though:
In fact Plato wrote to Archimedes, scolding him about messing around with real levers and ropes when any gentleman would have stayed in his study or possibly, in Archimedes’ case, his bath.
Problematically for the story, Plato died around 347 BCE, and Archimedes wasn’t born until 287 BCE—sixty years later.
Thank you for an awesome read. :)
Science uses logical rules of inference. Does science take them as self-evident? Or does it test them? And can it test them without assuming them?
(whisper: Wei Lai should be Wei Dai)
Nope. Even if one grants objective meaning to a unique interpersonal aggregate of suffering (and I don’t), it’s just wrong.
Sometimes you want people to suffer. For example, if one fellow caused all the suffering of the rest, moving him to less suffering than everyone else would be a move to a worse universe.
EDIT: I didn’t mean “you” to indicate everyone. Sometimes I want people to suffer, and think that in my hypothetical, the majority of mankind would feel the same, and choose the same, if it were in their power.
...because doing so would create incentive to not cause suffering to others. In the long run, that would result in less universal suffering overall. Isn’t this correct?
No, that’s not my motivation at all. That’s not my because. It’s just vengeance on my part.
Even if one regarded the design of vengeance as an evolutionary adaptation, I don’t think that vengeance minimizes suffering, it punishes infractions against values.
At that level, it’s not about minimizing suffering either, it’s about evolutionary fitness.
Yeah, I’m pretty sure I (and most LWers) don’t agree with you on that one, at least in the way you phrased it.
You think they’d prefer that the guy that caused everyone else in the universe to suffer didn’t suffer himself?
Here’s an old Eliezer quote on this:
4.5.2: Doesn’t that screw up the whole concept of moral responsibility?
Honestly? Well, yeah. Moral responsibility doesn’t exist as a physical object. Moral responsibility—the idea that choosing evil causes you to deserve pain—is fundamentally a human idea that we’ve all adopted for convenience’s sake. (23).
The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn’t deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even Adolf Hitler. Pain is bad; if it’s ultimately meaningful, it’s almost certainly as a negative goal. Nothing any human being can do will flip that sign from negative to positive.
So why do we throw people in jail? To discourage crime. Choosing evil doesn’t make a person deserve anything wrong, but it makes ver targetable, so that if something bad has to happen to someone, it may as well happen to ver. Adolf Hitler, for example, is so targetable that we could shoot him on the off-chance that it would save someone a stubbed toe. There’s never a point where we can morally take pleasure in someone else’s pain. But human society doesn’t require hatred to function—just law.
Besides which, my mind feels a lot cleaner now that I’ve totally renounced all hatred.
It’s pretty hard to argue about this if our moral intuitions disagree. But at least, you should know that most people on LW disagree with you on this intuition.
EDIT: As ArisKatsaris points out, I don’t actually have any source for the “most people on LW disagree with you” bit. I’ve always thought that not wanting harm to come to anyone as an instrumental value was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey. The post “Policy Debates Should Not Appear One Sided” is fairly highly regarded, and it espouses a related view, that people don’t deserve harm for their stupidity.
Also, what those people would prefer isn’t necessarily what our moral system should prefer; humans are petty and short-sighted.
What do you mean by “utilitarianism”? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses “total happiness” as a utility function. This sentence appears to be designed to confuse the two meanings.
That is most definitely not the main point of that post.
Yeah, my mistake. I’d never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren’t utilitarian?
Well, even Eliezer’s version of consequentialism isn’t simple utilitarianism for starters.
It’s a kind of utilitarianism. I’m including act utilitarianism and desire utilitarianism and preference utilitarianism and whatever in utilitarianism.
Ok, what is your definition of “utilitarianism”?
[citation needed]
I edited my comment to include a tiny bit more evidence.
Thank you, that’s a good start.
Yes, I had concluded that EY was anti-retribution. I hadn’t concluded that he had carried the day on that point.
I don’t think vengeance and retribution are “ideas” that people had to come up with—they’re central moral motivations. “A social preference for which we punish violators” gets at 80% of what morality is about.
Some may disagree about the intuition, but I’d note that even EY had to “renounce” all hatred, which implies to me that he had the impulse for hatred (retribution, in this context) in the first place.
This seems like it has the makings of an interesting poll question.
I agree. Let’s do that. You’re consequentialist, right?
I’d phrase my opinion as “I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes.”
How do you phrase yours? If I were to guess, it would be “I have a terminal value which says that people who have caused suffering should suffer themselves.”
I’ll make a Discussion post about this after I get your refinement of the question?
I’d suggest the following two phrasings:
I place terminal value on retribution (inflicting suffering on the causers of suffering), at least for some of the most egregious cases.
I do not place terminal value on retribution, not even for the most egregious cases (e.g. mass murderers). I acknowledge that sometimes it may have instrumental value.
Perhaps also add a third choice:
I think I place terminal value on retribution, but I would prefer it if I could self-modify so that I wouldn’t.
I would, all else being equal. Suffering is bad.