An alternative to trying to distinguish between porn and erotica on the basis of content or user attitudes: teach the AI to detect infrastructures of privacy and subterfuge, and to detect when people are willing to publicly patronize and self-identify with something. Most people don’t want others to know that they enjoy porn. You could tell your boss about the nude Pirates you saw last weekend, but probably not the porn. Nude Pirates shows up on the Facebook page, but not so much the porn. An online video with naked people that has half a million views, but is discussed nowhere where one’s identity is transparent, is probably porn. It’s basic to porn that it’s enjoyed privately, erotica publicly.
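A minimal sketch of how such a "privacy infrastructure" signal might be scored, assuming hypothetical engagement counts; the field names and the threshold below are invented purely for illustration, not drawn from any real API or dataset:

```python
from dataclasses import dataclass


@dataclass
class ContentSignals:
    """Hypothetical engagement signals for a piece of content."""
    private_views: int          # views not tied to any public identity
    public_mentions: int        # shares/comments under identity-linked accounts
    self_identifications: int   # users who publicly list it as an interest


def looks_like_porn(signals: ContentSignals, ratio_threshold: float = 1000.0) -> bool:
    """Crude heuristic: heavily consumed in private, almost never owned in public.

    If private consumption vastly outweighs public, identity-linked engagement,
    the surrounding "infrastructure of privacy" suggests porn rather than erotica.
    The threshold is an arbitrary placeholder, not an empirical value.
    """
    public_engagement = signals.public_mentions + signals.self_identifications
    if public_engagement == 0:
        return signals.private_views > 0
    return signals.private_views / public_engagement > ratio_threshold
```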
Except that this doesn’t hold in all social circles. And once such a distinction exists, people will start to use it to make a difference.
Well, it’s sufficient for our purposes that it holds in most. Proud and public porn consumers are outliers, and however an AI might make ethical distinctions, there will always be a body of outliers to ignore. But I grant that my approach is culturally relative. In my defense, a culture ordered such that this approach wouldn’t work at all probably wouldn’t seek a ban on porn anyway, and might not even be able to make much sense of the distinction we’re working with here.
Handling subcultures is difficult. We can ignore outliers because our rules are weak and don’t reach those who make their own local rules and accept the price of violating (or bending) the larger society’s rules. But an AI may not treat them the same way. An AI will be able to enforce the rules on the outliers and effectively kill those subcultures. Do we want this? One size fits all? I don’t think so. The complex value function must also allow for ‘outliers’ - only the concept must be made stronger.
I like this kind of indirect approach. I wonder if such ideas could be ported to AI...