I worry that a lot of discussion about this is starting from a poorly-formed thesis, and trying to add rigor to get to a measurable … thing, but repeatedly discovering that the basis dissolves when examined that closely.
“we don’t know what sentience is, or how to measure it, but we’re certain that it’s the basis of moral worth” → “let’s define some measurable part of it, so we can use this knowledge … somehow” → “huh, that’s not the important part of sentience”.
Yes, you are mostly right that this starts from a place which isn’t ideal. However, as you point out, as long as we consider sentience the basis of moral worth, we would rather have a way of figuring it out than not. Of course, people could simply decide they don’t actually care about sentience at all and thus avoid the issue entirely, but otherwise it seems quite important.
However, I would not agree by default that “defining some measurable parts to use that knowledge somehow”, as you put it, is meaningless. It would still measure the defined characteristics, which is useful knowledge to have, especially in the absence of any better knowledge at all. It is not ideal, I will give you that, but until we have sufficiently reverse-engineered the nature of sentience, it might be as good as we can do.
And yes, in the worst case we learn that the characteristics we measured are not actually meaningful. Getting that realization in itself does not seem without value to me either.
Thank you for your feedback.