Once an LLM character is sapient (AGI), is it (are they) a moral patient? (The distinction of sentience is more ambiguous in its meaning and decision relevance.) If so, looking back, at what point in the development of this technology did LLM characters become moral patients, in variants that didn’t yet attain sapience?
I think I’d need to hear more about what you mean by sapience (the link didn’t make it entirely clear to me) and why it would ground moral patienthood. In my view there are other plausible grounds for moral patienthood besides sentience (which, its ambiguity notwithstanding, I think can be used about as precisely as sapience; see my note on usage), most notably desires, preferences, and goals. Perhaps those are part of what you mean by ‘sapience’?