Rather, they might be mere empty machines. Should you still tolerate/respect/etc them, then?”
My sense is that I’m unusually open to “yes,” here.
I think the discussion that follows from here is a little ambiguous (perhaps purposefully so?). In particular, it is unclear which of the following points is being made:
1: Sufficient uncertainty with respect to the sentience (which I am taking as synonymous with phenomenal consciousness) of future AIs should dictate that we show them tolerance/respect, etc.
2: We should not be confident that sentience is a good criterion for moral patienthood (i.e., for being shown tolerance/respect, etc.), even though sentience is a genuine thing.
3: We should worry that sentience isn't a genuine thing at all (i.e., illusionism, or as-yet-undescribed re-factorings of what we currently call sentience).
When you wrote that you are unusually open to “yes” in the quoted sentence, I took the qualifier “unusually” to indicate that you were making point 2, since I do not consider point 1 to be particularly unusual (Schwitzgebel has pushed for this view, for example). However, your discussion then mostly seemed to be making the case for point 1 (i.e., that we could impose a criterion for moral worth intended to demarcate non-sentient from sentient entities, but that this criterion could fail).

For what it’s worth, I would be very interested to hear arguments for point 2 that do not collapse into point 1 (or, alternatively, some reason why I am mistaken in considering them distinct points). From my perspective, it is hard to understand how something that genuinely lacks what I mean by phenomenal consciousness could possibly be a moral patient. Perhaps this is related to the fact that I have, despite significant effort, utterly failed to grok illusionism.