You didn’t answer my question. My point was that pain is simpler than suffering, and even scientists who study it can’t objectively define it.
Because this “definition” does not help us figure out whether low-complexity WBEs suffer the same way humans do.
Are you suggesting we shouldn’t even talk about their potential suffering then? By that logic we shouldn’t talk about animal suffering either. That human beings suffer is evidence that low-complexity WBEs and animals are capable of it too.
By the time we can make low-complexity WBEs we’ll probably have some understanding of what suffering is computationally, but it might be too late to start philosophizing about it then.
You didn’t answer my question. My point was that pain is simpler than suffering, and even scientists who study it can’t objectively define it.
First, the “objective” part of pain is known as nociception and can likely be studied in real or simulated organisms. The subjective part of pain need not be figured out separately from other qualia, like the perception of the color red.
Second, not all pain is suffering and not all suffering is pain, so figuring out the quale of suffering is separate from studying pain.
Are you suggesting we shouldn’t even talk about their potential suffering then?
I think we have to work on formalizing qualia in general before we can make progress in understanding “computational suffering” specifically.
it might be too late to start philosophizing about it then
I find philosophizing without the goal of separating a solvable chunk of a problem at hand a futile undertaking and a waste of time. The linked paper does a poor job identifying solvable problems.
You were so busy refuting me that you still didn’t answer this question: what kind of definition of suffering would satisfy you? So that people could talk about it without it being a waste of time, y’know.
First, the “objective” part of pain is known as nociception and can likely be studied in real or simulated organisms
In the future? Yes. Right now? No. We have no idea what kind of computation happens in the brain when someone experiences pain. Just because it has a name doesn’t mean we have a clue.
Second, not all pain is suffering and not all suffering is pain, so figuring out the quale of suffering is separate from studying pain.
I agree. Do you agree that pain is simpler than suffering and therefore the easier problem and more likely to be solved first?
I think we have to work on formalizing qualia in general before we can make progress in understanding “computational suffering” specifically.
I know I can suffer. If a simple WBE is made from my brain, it inherits similarities to my brain, and this is evidence it can suffer, just as a complex mammalian brain’s similarity to mine is evidence it can suffer. Do you find these ideas objectionable? What do you mean by formalizing qualia?
I find philosophizing without the goal of separating a solvable chunk of a problem at hand a futile undertaking and a waste of time. The linked paper does a poor job identifying solvable problems.
Could be so. I’m not defending the paper, and I suggest you not assume that everyone who reads your comment about it has read it.
This exchange does not seem to be going anywhere, so I’ll just leave my final comments before disengaging, feel free to do likewise.
The paper draft is an interesting and comprehensive survey of views on em suffering and related (meta)ethics.
It does not do a good job defining its subject matter and thus does not advance the field of em ethics.
One potential avenue of progress in em ethics and “em rights” is to define suffering in an externally measurable way for various levels of em complexity and architecture.
Just so you know, I probably came off more confrontational than I intended. Sorry about that if so.
I agree it’s better to halt these kinds of spats than try to find a fix after shit hits the fan.