Right. I’m not able to even formulate the problem statement for “issues about AI consciousness and animal suffering” without using undefined/unmeasurable concepts. Nor is anyone else that I’ve seen—they can write a LOT about similar-sounding or possibly-related topics, but never seem to make the tie to what (if anything) matters about it.
I’m slowly coming to the belief/model that human moral philosophy is hopelessly dualist under the covers, and that most of the “rationalist” discussion around it is an attempt to obfuscate this.