As usual at the intersection of consciousness and science, I think this needs more clarification of its assumptions. In particular, does “Realism about phenomenal consciousness” imply that consciousness is somehow fundamentally different from other forms of organization of matter? If not, I would prefer it to be said explicitly that we are only talking about persuasive arguments for valuing computational processes that are interesting in some way. And for every “theory” to be replaced with “ideology”, and every “is” question with “do we want to define consciousness in such a way that…”. And without justification for the intermediate steps, assumptions 1-3 can be simplified to “if a system satisfies my arbitrary criteria, then it is a moral patient”.
I don’t see why it would, since realism about shoes, and ships, and sealing wax doesn’t.
I’m trying to get a better idea of your position. Suppose that, as TAG also replied, “realism about phenomenal consciousness” does not imply that consciousness is somehow fundamentally different from other forms of organization of matter. Suppose I’m a physicalist and a functionalist, so I think phenomenal consciousness just is a certain organization of matter. Do we still then need to replace “theory” with “ideology”, etc.?
It’s basically what is in that paper by Kammerer: a “theory of the difference between reportable and unreportable perceptions” is fine, but I don’t like calling it “consciousness” and then concluding, from the reasonable-sounding assumption that “conscious agents are moral patients”, that generalizing a theory about the presence of some computational process in humans into a universal ethics is an arbitrariness-free inference. The reasonableness of “conscious agents are moral patients” decreases once you substitute the theory’s content into it. It’s like a theory of beauty, except that “precise computational theory that specifies what it takes for a biological or artificial system to have various kinds of conscious, valenced experiences” carries more implied objectivity.
Great, thanks for the explanation. Just curious to hear your framework, no need to reply:
-If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood? Do you think we face uncertainty about whether animals or AIs have these properties?
-If you don’t, are there questions in the vicinity of “which systems are moral patients” that you do recognize as meaningful?
-If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood?
I don’t know. If I had to decide, I would probably use some “similarity to the human mind” metric. Maybe I would look at the complexity of thoughts expressible in the language of human concepts, or something like that. And I could probably be persuaded of the importance of many other things. Also, I can’t really stop at just determining who is a moral patient: I start thinking about what exactly to value about them, and that is complicated by my being (currently, though I’m interested in counterarguments) indifferent to suffering and only counting good things.
Do you think we face uncertainty about whether animals or AIs have these properties?
Yes for “similarity to the human mind”: we don’t have precise enough knowledge of AIs’ or animals’ minds. But now it sounds like I’ve only chosen these properties so as not to be certain. In the end I think moral uncertainty plays a more important role here than factual uncertainty: we can already be certain that very high-level, low-resolution models of human consciousness generalize to anything from animals to a couple of lines of Python.
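To make the “couple of lines of Python” point concrete, here is a toy sketch (my own invented example, with made-up names like TinyMind, not anything from the post or from Kammerer) of a program that already satisfies a sufficiently low-resolution functional criterion along the lines of “maintains internal states, only some of which feed into its self-report channel”:

```python
# Toy illustration only: a system with internal states, a subset of which
# is wired to a self-report channel. Under a coarse enough reading of
# "reportable vs. unreportable perception", even this trivially qualifies.

class TinyMind:
    def __init__(self):
        self.percepts = {"light_level": 0.7, "noise_level": 0.2}  # internal states
        self.reportable = {"light_level"}  # only this state reaches the report channel

    def report(self):
        # The system can describe some of its own states but not others.
        return {k: v for k, v in self.percepts.items() if k in self.reportable}

print(TinyMind().report())  # {'light_level': 0.7}; noise_level stays "unreportable"
```

Obviously nothing in this snippet settles whether such a system should count morally; that is the moral-uncertainty part, not the factual one.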