I agree, because I think we must reason about entities as computational processes and consider what stimuli they receive from the world (sentience) and what, if any, actions they undertake (agentiveness). However, I don’t think it necessarily follows that terminating a conscious process is bad just because we’ve reached the moral conclusion that it’s generally bad to non-consensually terminate humans. I think our moral intuitions need expansion and clarification when it comes to transient computational subprocesses like simulated entities (e.g., in our minds or in the ongoing processes of large language models). More of my thoughts on this here: https://www.lesswrong.com/posts/Htu55gzoiYHS6TREB/sentience-matters?commentId=wusCgxN9qK8HzLAiw