I disagree. We’re obligated to act to the best of our ability based on the knowledge we have. If those actions have bad outcomes, that doesn’t mean they weren’t justified. Otherwise, you displace moral judgement from the here and now into inaccessible ideas about what will have turned out to be the case.
I guess there is a slight ambiguity in the way Nicholas Humphrey uses the word ‘right’ in the sentence: “none of this would give you a right to administer the poison”. I doubt he is making a moral statement. What he is pointing out is that your beliefs will have to be judged by reality. Your beliefs do not affect the fact that what you are administering is poison.
In fact, he points out that having incorrect beliefs might make you morally less culpable. But it doesn’t make you right.
Having incorrect beliefs and acting on them is the right thing to do. Acting on the right thing to do makes you right. I disagree with your reading of Humphrey’s statement because the idea that you can be less morally culpable given certain conditions still seems to imply a certain degree of culpability.
Your beliefs have to be judged by your other beliefs because pure objectivity is epistemically inaccessible. The quote is at best useless, because no one who loves their child poisons them intentionally. The quote serves to make other people (e.g. us) more overconfident in their (e.g. our) subjective beliefs that they (e.g. we) perceive as objective. It also makes us more willing to look down on the person who accidentally poisoned their child. I don’t like either of those things, so I don’t like the quote.
Explanation for minuses anyone?
Well, I can’t speak for anyone else, but:
Having incorrect beliefs … is the right thing to do.
This raises immediate concerns, and that’s just in the first sentence.
Your beliefs have to be judged by your other beliefs because pure objectivity is epistemically inaccessible.
Incorrect. If I believe that an apple will fall if I drop it, I can test this empirically by dropping an apple. I can judge many beliefs based on results, not on other beliefs.
Those seem to be the most blatant reasons to me.
This raises immediate concerns, and that’s just in the first sentence.
Consider that from the inside it’s impossible to distinguish between the true beliefs you hold and the false ones. I think any system of morality that doesn’t operate within a realistic understanding of human limitations is broken as a guide to human action. The converse of my first sentence is what seems truly concerning to me, the view that people who make mistakes are doing something wrong.
Incorrect. If I believe that an apple will fall if I drop it, I can test this empirically by dropping an apple. I can judge many beliefs based on results, not on other beliefs.
Your interpretation of empirical results is mediated by your subjective beliefs. For instance, you believe that empiricism is a valid way of knowing the objective world. You believe that the concept of an apple is meaningful, as are the concepts of dropping and falling.
At this level of simplicity it might seem trivial to mention such beliefs, but you shouldn’t deny that those beliefs are necessary for your argument to function. You are no god; you are an ape. You do not have access to pure and unfiltered objectivity. Since that’s true, a moral system that forced us to abide by pure objectivity, as opposed to subjective interpretations of objectivity, would leave us totally paralyzed. The system would be functionally nihilistic: it couldn’t weigh competing possible futures at all, because all of our knowledge about possible futures is somewhat subjective.
Overconfidence in what seems objective and is objective is just as bad as overconfidence in what seems objective and isn’t, because from the inside the two are exactly the same. We need to recognize that some aspects of our arguments are necessarily going to depend on assumptions; if we don’t recognize assumptions for what they are, we cultivate habits of thought that help maintain existing biases.
Given that our mistakes and our successes are indistinguishable, we should give both the same moral status. The morality of a person must be judged by their intentions insofar as it’s possible to understand those intentions. Blaming someone for a genuine mistake is a morally wrong and illogical thing to do.
The morality of a person must be judged by their intentions insofar as it’s possible to understand those intentions. Blaming someone for a genuine mistake is a morally wrong and illogical thing to do.
To put on my utilitarian hat for a moment, I would suggest that blaming someone for a genuine mistake is right inasmuch as it leads to better outcomes.
To wit, sometimes punishing genuine mistakes will correct behavior.
Also,
illogical
What is this supposed to mean? Was there an implied syllogism that I didn’t spot? Why did logic even enter the conversation?
What is this supposed to mean? Was there an implied syllogism that I didn’t spot?
It means that if certain assumptions are made, including about what basic words mean (I make no comment on whether these assumptions are correct), then reaching the conclusion that someone is to be blamed relies on a logical error. So yes, you did miss an implied syllogism, perhaps because you don’t accept the implied premises and so don’t consider the syllogism important.
Why did logic even enter the conversation?
Asking this question is a dubious move. Commenting on whether things make any sense at all is reasonably relevant to most conversations, and this conversation seems to be about evaluating whether a line of reasoning (in a quote) is to be accepted. That line of reasoning being illogical would be pretty darn relevant if it happened to be true (and again, I’m not commenting on the validity of the required premises).
Yeah, I was just confused. I see “illogical” being used in situations that don’t seem to be about logic, and looking at a dictionary to see if I was assuming a wrong meaning didn’t seem to help.
So based on your explanation, it seems like if Alice says “illogical” to Betty like that, I should 1) assume Alice thinks Betty is making a logical argument, 2) figure out what logical argument Alice thinks Betty is supposed to be making, and 3) figure out what Alice thinks is wrong with that argument.
Of course, that sounds like a lot of work, so I’ll probably just start skipping over that word.
That seems practical. Usually a similar thing can be done with ‘immoral’ too, and ‘right’, and ‘should’.
To put on my utilitarian hat for a moment, I would suggest that blaming someone for a genuine mistake is right inasmuch as it leads to better outcomes.
To wit, sometimes punishing genuine mistakes will correct behavior.
We should distinguish between blame insofar as it’s useful and blame insofar as it suggests that there is something wrong with the person for making the initial decision. Don’t confuse different issues: there’s a difference between desirable metaethics and desirable societal norms.
What is this supposed to mean? Was there an implied syllogism that I didn’t spot? Why did logic even enter the conversation?
It seems illogical to have a moral system which requires people to do something impossible.
My desirable metaethics entirely lacks the notion referred to as “blame”. The closest it gets is rewards for things that encourage prevention of the things that would be “blamed” under other metaethics, such as the one which I currently hold (there are lots of hard problems to solve before that desired metaethics can be made fully reflectively coherent).
It seems illogical to have a moral system which requires people to do something impossible.
That seems to be begging the question. You posit that these things are objectively impossible, but assert that there is no way to obtain objective truth, and no way to verify the impossibility of something.
Also, a moral system which requires, for maximal morality, that all minds be infinitely kind requires all minds to do something impossible, yet seems like an extremely logical moral system (if kindness is the only thing valued). You can have unbounded variables in moral systems. I see no error there.
Not illogical, just annoying and pointless.
Shut up and do the impossible.
I think that concept is overused. Almost all impossible things are not worth attempting.
Regardless, that lesson does not apply here. That lesson is useful because sometimes, when our goal is to try, we don’t actually try as much as we possibly could. But if something is literally impossible, in the sense that even if your willpower were dozens of times stronger you couldn’t do it, the lesson is no longer useful. No matter how hard or how long I try, I will not be able to have a perfectly objective view of the world. Recognizing my limitations is important because otherwise I waste effort and resources.
That lesson is useful because sometimes, when our goal is to try, we don’t actually try as much as we possibly could. But if something is literally impossible, in the sense that even if your willpower were dozens of times stronger you couldn’t do it, the lesson is no longer useful.
That’s not the situation here.
The converse of my first sentence is what seems truly concerning to me, the view that people who make mistakes are doing something wrong.
The converse would be something along the lines of “Having correct beliefs is the right thing to do”. This converse would be something that I do agree with. Now, it is highly unlikely that all of my beliefs are correct all of the time; in fact, in the past, I have noticed that some of my beliefs were wrong. Having correct beliefs in every respect is highly unlikely; it is an ideal towards which to strive, not an achievable position.
Having incorrect beliefs, on the other hand, is a situation which one should strive to avoid, and is certainly not the right thing to do.
Having said that, though, acting in the way that one believes to be correct is the right thing to do. (If that is what you meant by your first sentence, then you phrased it poorly).
Given that our mistakes and our successes are indistinguishable
They are not indistinguishable. If I run an experiment, I can state beforehand the results which I expect to observe. If my expectations match my observations, I call that ‘success’; if they do not, I call that ‘failure’. Both my observations and my expectations exist. Whether there is actually an apple, and whether or not the concept of ‘falling’ or the action of ‘dropping’ is possible, it remains nonetheless true, at some point in time, that:
I have a memory of planning to drop the apple.
I have a memory of expecting to observe the apple falling after dropping it.
I have a memory of dropping the apple.
I have a memory of the apple either falling or not falling.
Whether these four memories correspond to anything that happened in the past outside of my head is something that a philosopher may debate. However, they form a basis for me to distinguish between success (I have a memory of the observation of the apple falling) and failure (I have a memory of the observation of the apple not falling).
The morality of a person must be judged by their intentions insofar as it’s possible to understand those intentions. Blaming someone for a genuine mistake is a morally wrong and illogical thing to do.
I did not mean to imply that anyone should receive any blame for a genuine, unavoidable mistake. (In certain cases, a person can be legitimately blamed for failing to check a belief; for example, if a person fails to check his belief that his campfire has gone out, then he can be blamed when the campfire burns down the forest. If he does check, and suitably thoroughly, then it’s different.)
So yes, the morality of a person should be judged by their intentions. But correct beliefs mean that beneficial intentions will more likely result in beneficial outcomes; therefore, it is important to attempt to acquire correct beliefs.
I disagree. We’re obligated to act to the best of our ability based on the knowledge we have.
No, we’re obligated to make sure we have enough knowledge and to gather more knowledge if we don’t. If you believe that you don’t have the time and/or resources to do this, that’s also a decision with moral consequences.
In other words, it’s not enough to merely try to make the correct decision.
The possibility that more information will change your recommended course of action is one that has to be weighed against the costs of acquiring that information; it is not a moral imperative. One can always find oneself in a situation where the evidence is stacked to deceive one. That doesn’t mean that before you put on your socks in the morning you ought to perform an exhaustive check to make sure that your sock drawer hasn’t been rigged to blow up the White House when opened.
You use only the resources you have, including your judgement, including your metajudgement.