Short answer: Mu.
Longer answer: “Porn” is clearly underspecified, and to make matters worse there’s no single person or interest group that we can try to please with our solution: many different groups (religious traditionalists, radical feminists, /r/nofap...) dislike it for different and often conflicting reasons. This wouldn’t be such a problem—it’s probably possible to come up with a definition broad enough to satisfy all parties’ appetites for social control, distasteful as such a thing is to me—except that we’re also trying to leave “eroticism” alone. Given that additional constraint, we can’t possibly satisfy everyone; the conflicting parties’ decision boundaries differ too much.
We could then come up with some kind of quantification scheme—show questionable media to a sample of the various stakeholders, for example—and try to satisfy as many people as possible. That’s probably the least-bad way of solving the problem as stated, and we can make it as fine-grained as we have money for. It’s also one that’s actually implemented in practice—the MPAA ratings board works more or less like this. Note, however, that it still pisses a lot of people off.
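As a toy illustration of what such a scheme might look like—the groups, the verdicts, and the threshold sweep below are all invented for the example; a real survey would need sampling weights, larger panels, and so on—the core idea is just: poll stakeholders on each questionable item, then pick the ban threshold that matches the most verdicts.

```python
# Hypothetical sketch of the quantification scheme described above: poll a
# sample of stakeholder groups on each item, ban the items whose disapproval
# clears a threshold, and choose the threshold that satisfies the most people.
from statistics import mean

# Each group rates each item: 1 = "ban it", 0 = "leave it alone". Invented data.
ratings = {
    "item_a": {"traditionalists": 1, "radical_feminists": 1, "nofap": 1, "libertines": 0},
    "item_b": {"traditionalists": 1, "radical_feminists": 0, "nofap": 0, "libertines": 0},
    "item_c": {"traditionalists": 0, "radical_feminists": 1, "nofap": 0, "libertines": 0},
}

def satisfaction(threshold: float) -> float:
    """Fraction of (item, group) verdicts the policy agrees with at this threshold."""
    hits, total = 0, 0
    for votes in ratings.values():
        banned = mean(votes.values()) >= threshold
        for vote in votes.values():
            hits += (vote == 1) == banned  # group got the outcome it wanted
            total += 1
    return hits / total

# Sweep candidate thresholds and keep the one that pleases the most people.
best = max((t / 10 for t in range(11)), key=satisfaction)
print(best, satisfaction(best))
```

Note that even the optimal threshold leaves a chunk of verdicts unsatisfied whenever the groups’ decision boundaries genuinely conflict—which is the point made above.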
I think a better approach, however, would be to abandon the question as stated and try to solve the problem behind it. None of the stakeholders actually care about banning media-labeled-porn (unless they’re just trying to win points by playing on negative emotional valence, a phenomenon I’ll contemptuously ignore); instead, they have different social agendas that they’re trying to serve by banning some subset of media with that label. Social conservatives want to limit perceived erosion of traditional propriety mores and may see open sexuality as sinful; radical feminists want to reduce what they see as exploitative conditions in the industry and to eliminate media they perceive as objectifying women; /r/nofap wants what it says on the tin.
Depending on the specifics of these objections, we can make interventions a lot more effective and less expensive than varying the exact criteria of a ban: we might be able to satisfy /r/nofap and some conservatives, for example, by instituting an opt-out process by which individuals could voluntarily and verifiably bar themselves from purchasing prurient media (or accessing websites, with the help of a friendly ISP). If we have a little more latitude, we could even look at these agendas and the reasoning behind them, see if they’re actually well-founded and well-targeted, and ignore them if not.
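For concreteness, the opt-out idea is structurally similar to the self-exclusion lists used for gambling. A minimal sketch of what it might amount to—the registry, identifiers, and verification step here are all hypothetical, not a proposal for any real system:

```python
# Hypothetical sketch of a voluntary self-exclusion registry that a vendor
# or cooperating ISP could consult before serving restricted content.
class SelfExclusionRegistry:
    def __init__(self) -> None:
        self._excluded: set[str] = set()

    def opt_out(self, user_id: str, identity_verified: bool) -> None:
        # Verification is what makes the bar "verifiable": an opt-out that
        # can be trivially reversed on impulse wouldn't satisfy anyone.
        if identity_verified:
            self._excluded.add(user_id)

    def may_serve(self, user_id: str) -> bool:
        return user_id not in self._excluded

registry = SelfExclusionRegistry()
registry.opt_out("alice", identity_verified=True)
assert not registry.may_serve("alice")
assert registry.may_serve("bob")
```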
Note that this is the general method for dealing with confused concepts.
Yeah. An earlier version of my post started by saying so, but I decided that the OP had been explicit enough in asking for an object-level solution that I’d be better off laying out more of the reasoning behind going meta.
This all sounds reasonable to me. Now what happens when you apply the same reasoning to Friendly AI?
Nothing particularly new or interesting, as far as I can tell. It tells us that defining a system of artificial ethics in terms of the object-level prescriptions of a natural ethic is unlikely to be productive; but we already knew that. It also tells us that aggregating people’s values is a hard problem and that the best approaches to solving it probably consist of trying to satisfy underlying motivations rather than stated preferences; but we already knew that, too.