It seems to me I cannot provide him compelling grounds as to why he ought not to have done what he did, and that to punish him would be arbitrary.
When a dispute is over fundamental values, I don’t think we can give the other side compelling grounds to act according to our own values. Consider Eliezer’s paperclip maximizer. How could we possibly convince such a being that it’s doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?
Thanks for the link to the Carroll story. I plan on taking some time to think it over.
If the view is correct, then you can at least convince rational people that it is not rational to kill people. Isn’t that an important result?
It’s important to us, but — as far as I can tell — only because of our values. I don’t think it’s important ‘to the universe’ for someone to refrain from going on a killing spree.
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
How could we possibly convince such a being that it’s doing something irrational, besides pointing out that its current actions are suboptimal for its goal in the long run?
That sounds like a good rational argument to me. Is the paperclip maximiser supposed to have a different rationality, or just different values?
Another way to put it is that the rationality of killing sprees is dependent on the agent’s values. I haven’t read much of this site, but I’m getting the impression that a major project is to accept this...and figure out which initial values to give AI. Simply ensuring the AI will be rational is not enough to protect our values.
Like so much material on this site, that tacitly assumes values cannot be reasoned about.