But we can tell him he did something wrong by the standard against randomly killing people. And we can act consistently with that standard by sanctioning him. In fact, it would be inconsistent for us to give him a pass.
I understand your point is that we can tell the killer that he has acted wrongly according to our standard (that one ought not randomly to kill people). But if people in general are bound only by their own standards, why should that matter to him? It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary. Sorry if I’m not getting it.
I’m not sure I’m following the argument here. I’m saying that all normativity is hypothetical. It sounds like you’re arguing there is a categorical ‘ought’ for believing mathematical truths because it would be very strange to say we only ‘ought’ to believe 2 + 2 = 4 in reference to some goal. So if there are some categorical ‘oughts,’ there might be others.
Is it something like that?
This states the thought very clearly. Thanks.
If so, then I would offer the goal of “in order to be logically consistent.”
I acknowledge the business about the nature of the compulsion behind mathematical judgement is pretty opaque. What I had in mind is illustrated by this dialogue. As it shows, the problem gets right back to the compulsion to be logically consistent. It’s possible this doesn’t really engage your thoughts, though.
There are some who think moral oughts reduce to logical consistency, so we ought to act in a certain way in order to be logically consistent. I don’t have a good counter-argument to that, other than asking to examine such a theory and wondering how being able to point out a logical inconsistency is going to rein in people with desires that run counter to it any better than relativism can.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.
When a dispute is over fundamental values, I don’t think we can give the other side compelling grounds to act according to our own values. Consider Eliezer’s paperclip maximizer. How could we possibly convince such a being that it’s doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?
Thanks for the link to the Carroll story. I plan on taking some time to think it over.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It’s important to us, but — as far as I can tell — only because of our values. I don’t think it’s important ‘to the universe’ for someone to refrain from going on a killing spree.
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
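To make that last point concrete, here is a minimal sketch (mine, not part of the original exchange) of “same rationality, different values”: both agents below run the identical decision procedure of picking the action with the highest utility, and they differ only in their utility functions. The action names and utility numbers are hypothetical illustrations, not anything from the discussion above.

```python
# Minimal illustrative sketch (not from the thread above): two agents share the
# same decision procedure -- pick the action with the highest utility -- and
# differ only in their utility functions. The action names and numbers are
# hypothetical.

from typing import Callable, List

Action = str
UtilityFn = Callable[[Action], float]


def rational_choice(actions: List[Action], utility: UtilityFn) -> Action:
    """The shared 'rationality': choose the action that maximizes utility."""
    return max(actions, key=utility)


actions = ["make_paperclips", "protect_humans"]

# A paperclip maximizer's values.
paperclipper: UtilityFn = lambda a: {"make_paperclips": 100.0, "protect_humans": 1.0}[a]

# A human-aligned agent's values.
human_friendly: UtilityFn = lambda a: {"make_paperclips": 0.0, "protect_humans": 100.0}[a]

print(rational_choice(actions, paperclipper))    # make_paperclips
print(rational_choice(actions, human_friendly))  # protect_humans
```

Both calls go through the same rational_choice procedure; only the values differ, which is the sense in which guaranteeing that an agent is rational does not, by itself, settle what it will do.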
...besides pointing out that its current actions are suboptimal for its goal in the long run?
That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality or just different values?
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
Like so much material on this site, that tacitly assumes values cannot be reasoned about.
I cannot provide [a murderer] compelling grounds as to why he ought not to have done what he did… [T]o punish him would be arbitrary.
If you don’t want murderers running around killing people, then it’s consistent with your values to set up a situation in which murderers can expect to be punished, and one way to do that is to actually punish murderers.
Yes, that’s arbitrary, in the same sense that every preference you have is arbitrary. If you are going to act upon your preferences without deceiving yourself, you have to feel comfortable with doing arbitrary things.
I think you missed the point quite badly there. The point is that there is no rationally compelling reason to act on any arbitrary value. You gave the example of punishing murderers, but if every value is equally arbitrary, that is no more justifiable than punishing stamp collectors or the left-handed. Having accepted moral subjectivism, you are faced with a choice between acting irrationally or not acting. OTOH, you haven’t exactly given moral objectivism a run for its money.