Just an attempt to make it clear that we’re dealing with something like an intelligent calculator here, with nothing in it that we’d find interesting or valuable in itself. Setting this up as the true PD.
Is that even well-defined? If I assert that I am a philosophical zombie in every sense of the term (lacking soul, qualia, and whatever other features you find relevant), does that mean you don’t care about my losses?
Observers aren’t ontologically fundamental entities, which is where you may be running into trouble.
I understood what he was trying to say.
Everyone does; the problem is that the whole area within several steps of its literal meaning has serious problems. “But souls don’t exist! But so what if someone doesn’t have a soul tag, it’s not morally relevant! But so what if the presence of souls influences empathy/eternal life/etc., this reason doesn’t screen off other sources of moral value!” Only when you’ve gone all the way to “The other agent doesn’t have moral value” does it start making sense, but then you should’ve just said so, instead of pretending to make an argument.
But I’d think that if I only said “It doesn’t have moral value in itself”, you’d still have to go back through similar steps to find the property cluster that we assign value to. I tried to convey both ideas by using the word “soul” and claiming lack of moral value.
you’d still have to go back through similar steps to find the property cluster that we assign value to. I tried to convey both ideas by using the word “soul” and claiming lack of moral value.
What property cluster? Why would I need to find it? And which two ideas?
Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You’d need to find it in order to grasp what it means to have a being that lacked moral value, and “both ideas” refers to the distinct ways of explaining what sort of paperclip maximizer we’re talking about.
Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans.
This I guessed.
You’d need to find it in order to grasp what it means to have a being that lacked moral value,
Why? “No moral value” has a clear decision-theoretic meaning, and referring to the particular patterns that do have moral value doesn’t improve on that understanding. Also, examples of things that have moral value are easy to imagine.
“both ideas” refers to the distinct ways of explaining what sort of paperclip maximizer we’re talking about.
This I still don’t understand. You’d need to name the two ideas. My intuition for grasping the intended meaning often fails me. One relevant idea that I see is that the paperclip maximizer lacks moral value. What’s the other, and how is it relevant?
“Huh?”
What about it? Is it that your perception of English says it’s poorly constructed, and I should rely less on my language intuition for such improvisation? Or is it unclear what I meant / why I believe it?
What is the purpose of saying “It doesn’t have a soul”, as opposed to “It doesn’t have moral value”? The desired conclusion is the latter, but the deeply flawed former is spoken instead. I guess it’s meant as an argument, appealing to existing intuitions and the connotations that the word “soul” evokes. But because of its flaws, it’s not actually a rational argument, so it only pretends to be one: a rhetorical device.
It just wasn’t an argument at all or a rhetorical device of any kind. It was a redundant aside setting up a counterfactual problem. At worst it was a waste of a sentence and at best it made the counterfactual accessible to even those people without a suitably sophisticated reductionist philosophy.
(And, obviously, there was an implication that the initial ‘huh?’ verged on disingenuous.)
At worst it was a waste of a sentence and at best it made the counterfactual accessible to even those people without a suitably sophisticated reductionist philosophy.
A rhetorical device in exactly this sense: it communicates in cases where just stating the intended meaning won’t work (“people without a suitably sophisticated reductionist philosophy”). The problem is insignificant (but still present), and as a rhetorical device it could do some good.