It’s basically what is in that paper by Kammerer: a “theory of the difference between reportable and unreportable perceptions” is ok, but calling it “consciousness” and then concluding from the reasonable-sounding assumption “conscious agents are moral patients” that generalizing a theory about the presence of some computational process in humans to universal ethics is an arbitrariness-free inference—that’s what I don’t like. Because the reasonableness of “conscious agents are moral patients” decreases when you substitute the theory’s content into it. It’s like a theory of beauty, except that “precise computational theory that specifies what it takes for a biological or artificial system to have various kinds of conscious, valenced experiences” sounds like it carries more implied objectivity.
Great, thanks for the explanation. Just curious to hear your framework, no need to reply:
-If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood? Do you think we face uncertainty about whether animals or AIs have these properties?
-If you don’t, are there questions in the vicinity of “which systems are moral patients” that you do recognize as meaningful?
-If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood?
I don’t know. If I had to decide, I would probably use some “similarity to the human mind” metric. Maybe I would think about the complexity of thoughts in the language of human concepts, or something like that. And I could probably be persuaded of the importance of many other things. Also, I can’t really stop at just determining who is a moral patient—I start thinking about what exactly to value about them, and that is complicated by my being (currently interested in counterarguments against being) indifferent to suffering and only counting good things.
Do you think we face uncertainty about whether animals or AIs have these properties?
Yes for “similarity to the human mind”—we don’t have precise enough knowledge about AIs’ or animals’ minds. But now it sounds like I’ve only chosen these properties so as not to be certain. In the end, I think moral uncertainty plays a more important role here than factual uncertainty—we can already be certain that very high-level, low-resolution models of human consciousness generalize to anything from animals to a couple of lines of Python.
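To make that last point concrete, here is a minimal sketch in Python (the `ToyWorkspace` class and its methods are hypothetical, chosen purely for illustration) of how a deliberately coarse, global-workspace-flavored criterion like “has an internal state that is broadcast to several subprocesses and is available for report” can be satisfied in a handful of lines:

```python
# Illustration only: a trivial program that "meets" a very low-resolution,
# broadcast-and-report style criterion for consciousness.

class ToyWorkspace:
    def __init__(self):
        self.workspace = None          # the "globally broadcast" state
        self.consumers = [len, repr]   # stand-ins for downstream subprocesses

    def perceive(self, stimulus):
        self.workspace = stimulus      # the stimulus wins access to the workspace

    def broadcast(self):
        # every "subprocess" receives the same workspace content
        return [consume(self.workspace) for consume in self.consumers]

    def report(self):
        # the workspace content is available for verbal report
        return f"I am perceiving: {self.workspace}"


agent = ToyWorkspace()
agent.perceive("red patch")
agent.broadcast()
print(agent.report())
```

Nothing here is meant as a serious model; the point is only that a criterion stated at this level of resolution does not distinguish such a program from the systems we actually care about.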