Upvoted for clarity.
I think, along with most LWers, that your concerns about qualia and the need for a new ontology are mistaken. But even granting that part of your argument, I don’t see why it is problematic to approach the FAI problem through simulation of humans. Yes, you would only be simulating their physical/computational aspects, not the ineffable subjectiveness, but does that loss matter for the purposes of seeing how the simulations react to different extrapolations and trying to determine CEV? Only if a) the qualia humans experience are related to their concrete biology and not to their computational properties, and b) the relation is two-way, so the qualia are not epiphenomenal to behavior but affect it causally, and physics as we understand it is not causally closed. But in that case, you would not be able to make a good computational simulation of a human’s behavior in the first place!
In conclusion, assuming that faithful computational simulations of human behavior are possible, I don’t see how the qualia problem interferes with using them to determine CEV and/or help program FAI. There might be other problems with this line of research (I am not endorsing it), but the simulations’ lacking an epiphenomenal inner aspect that true humans have is not one of them. (In fact, it is good—it means we can use simulations without ethical qualms!)
This seems essentially the same answer as the most upvoted comment on the thread. Yet, you were at −2 just a while ago. I wonder why.
I wondered too, but I don’t like the “why the downvotes?” attitude when I see it in others, so I refrained from asking. (Fundamental attribution error lesson of the day: what looks like a legitimately puzzled query from the inside looks like whining from the outside.)
My main hypothesis was that the “upvoted for clarity” may have bugged some readers who saw the original post as obscure. And I must admit that the last paragraphs were much more obscure than the first ones.