But I’d think that if I only said “It doesn’t have moral value in itself”, you’d still have to go back through similar steps to find that property cluster to which we assign value. I tried to convey both ideas by using the word “soul” and claiming lack of moral value.
you’d still have to go back through similar steps to find that property cluster to which we assign value. I tried to convey both ideas by using the word “soul” and claiming lack of moral value.
Which property cluster / why would I need to find it / which two ideas?
Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You’d need to find them in order to grasp what it means to have a being that lacked moral value, and “both ideas” refers to the two distinct ways of explaining what sort of paperclip maximizer we’re talking about.
Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans.
This I guessed.
You’d need to find them in order to grasp what it means to have a being that lacked moral value,
Why? “No moral value” has a clear decision-theoretic meaning, and referring to particular patterns that have moral value doesn’t improve on that understanding. Also, the examples of things that have moral value are easy to imagine.
“both ideas” refers to the two distinct ways of explaining what sort of paperclip maximizer we’re talking about.
This I still don’t understand. You’d need to name two ideas. My intuition for grasping the intended meaning often fails me. One relevant idea that I see is that the paperclip maximizer lacks moral value. What’s the other, and how is it relevant?