I agree with this. “Harm” is too vague to make the harm principle a fully general argument for the Western liberal order, and it certainly wouldn’t do to try to program an AI with it. One thing a liberal society must wrestle with is which kinds of behavior count as harmful. Usually we define harm to include some behaviors beyond physical injury, such as theft or slander. But watching computer-generated images of any kind in the privacy of your own home falls pretty solidly in the “doesn’t harm anyone” category, as defined by the liberal/libertarian tradition.
Part of my point is that there isn’t really much of an argument to be had. I suppose if someone demonstrated that the existence of computer-generated snuff actually threatened our civilization or something, I could be swayed. But basically I think people should do things that make them happy so long as they avoid hurting others; if that isn’t a terminal value, it is awfully close.