To construct a friendly AI, you need to be able to make vague concepts crystal clear, cutting reality at the joints when those joints are obscure and fractal—and then implement a system that makes that cut.
I don’t think that this is true. Reductionist solutions to philosophical problems typically pick some new concepts which can be crisply defined, and then rephrase the problem in terms of those, throwing out the old fuzzy concepts in the process. What they don’t do is to take the fuzzy concepts and try to rework them.
For example, nowhere in the “Free Will Sequence” does Eliezer give a new clear definition of “free will” by which one may decide whether something has free will or not. Instead he just explains all the things that you might want to explain with “free will” using concepts like “algorithm”.
For another example, pretty much all questions of epistemic rationality are settled by Bayesianism. Note that Bayesianism nowhere contains a definition of “knowledge”. So we’ve successfully dodged the “problem of knowledge”.
So the answer to the title question is to ask what you want to achieve by banning porn, and then ban precisely the things such that banning them helps you achieve that aim. Less tautologically, my point is that the correct way of banning porn isn’t to make a super precise definition of “porn” and then implement that definition.