Upvoted, and I’m sad that this is currently negative (though −9 with 63 votes is more support than I’d have predicted, if less than I’d wish). I do kind of wish it’d been a sequence of 4 posts (one per topic, plus a summary about EY’s overconfidence and wrongness), rather than focused on the person, with the object-level disagreements as evidence.
It’s interesting that all of these topics are ones that should be dissolved rather than answered. Without a well-defined measure of “consciousness” (and a “why do we even care”, for some of the less-interesting measures that get proposed), zombies and animal experience are more motte-and-bailey topics than actual answerable propositions. I find it very easy (and sufficient) to believe that “qualia is what it feels like for THIS algorithm on THIS wetware”, with a high level of agnosticism on what other implementations will be like and whether they have internal reflectable experiences or are “just” extremely complex processing engines.
Decision theory likewise. It’s interesting and important to consider embedded agency, where decisions are not as free and unpredictable as they feel to humans. We should be able to think about constrained knowledge of our own future actions. But it seems very unlikely that we should encode those edge cases into the fundamental mechanisms of decision analysis. Further, the distinction between exotic decision theory and CDT-with-strategic-precommitment-mechanisms is pretty thin.
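To gesture at why that distinction feels thin to me, here’s a toy Newcomb-style expected-value sketch (the payoff numbers and predictor accuracy are made up for illustration, not taken from the post): an agent that can precommit before being predicted and an agent running an “exotic” decision theory end up endorsing the same action.

```python
# Toy Newcomb payoff comparison (illustrative only; the numbers and the
# predictor-accuracy assumption are mine, not from the post or the comment).
# Box B holds $1M iff the predictor expects one-boxing; box A always holds $1k.

ACCURACY = 0.99  # assumed predictor accuracy

def expected_payoff(one_box: bool) -> float:
    """Expected dollars, given the predictor guesses your action with ACCURACY."""
    p_predicted_one_box = ACCURACY if one_box else 1 - ACCURACY
    big = 1_000_000 * p_predicted_one_box
    small = 0 if one_box else 1_000
    return big + small

# "Exotic" decision theories (FDT/UDT-style) one-box outright.
fdt_choice = max([True, False], key=expected_payoff)          # -> True (one-box)

# CDT at decision time two-boxes (the boxes are already filled), but CDT
# *before* the prediction happily precommits to one-boxing, since the
# precommitment causally influences the predictor. Same recommended behavior.
cdt_precommitment = max([True, False], key=expected_payoff)   # -> True (one-box)

print(expected_payoff(True), expected_payoff(False))  # 990000.0 vs 11000.0
```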
Which I guess means I think you’re perhaps Less Wrong than EY on these topics, but both of you are (sometimes) ignoring the ambiguity that makes the questions interesting in the first place, and also makes any actual answer incorrect.
I find it very easy (and sufficient) to believe that “qualia is what it feels like for THIS algorithm on THIS wetware”, with a high level of agnosticism on what other implementations will be like and whether they have internal reflectable experiences or are “just” extremely complex processing engines.
But it obviously isn’t sufficient for a bunch of things. In the absence of an actual explanation, you aren’t able to resolve questions about AI consciousness or animal suffering. Note that “qualia is what it feels like for THIS algorithm on THIS wetware” is a belief, not an explanation: there’s no how or why to it.
Right. I’m not able to even formulate the problem statement for “issues about AI consciousness and animal suffering” without using undefined/unmeasurable concepts. Nor, as far as I’ve seen, is anyone else; they can write a LOT about similar-sounding or possibly-related topics, but never seem to tie it back to what (if anything) actually matters about it.
I’m slowly coming to the belief/model that human moral philosophy is hopelessly dualist under the covers, and that most of the “rationalist” discussion around it is an attempt to obfuscate this.