[another stray thought] …so then CEV would be like trying to extract a fully formal grammar for a given language, only harder.
Not really, CEV doesn’t depend on everyone’s extrapolated volitions converging to total moral agreement.
(I tried to come up with a better CEV analogy, but I’m not sure if that’s doable without stretching the metaphor far beyond any real explanatory usefulness; the ideas of volition and extrapolation don’t actually have any obvious linguistic analogues, as far as I can tell. The platonic computation of morality could be compared to universal grammar or something, but it’s not a strong analogy (for the purposes we’re interested in) because universal grammar doesn’t have the recursive/reflective property that morality does (we can ask ourselves “Should I make this change to the algorithm I use to answer ‘should’ questions?”, and although I suppose we could also ask ourselves if we should change our linguistic algorithms, that wouldn’t loop back on itself like morality does, so the problem of “extrapolating” it wouldn’t be as difficult or as interesting).)
You’re of course right, which is why I said CEV would be even harder. Extracting a formal grammar only has to achieve coherence, and yet mainstream linguistics has all but given up. State-of-the-art machine translation tools use statistical inference instead of vast rulesets, and I shudder to think what that would mean for CEV.
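To make the contrast concrete, here is a toy sketch of the statistical-MT decision rule that displaced those rulesets: pick the translation e that maximizes P(e)·P(f|e), with no grammar rules anywhere. (All numbers and candidates below are invented purely for illustration.)

```python
# Toy sketch of the noisy-channel decision rule behind statistical MT.
# All probabilities are made up for illustration.

# Candidate English translations for a hypothetical foreign sentence f,
# each with a language-model score P(e) and a translation-model score P(f|e).
candidates = {
    "black cat":   {"p_e": 0.040, "p_f_given_e": 0.50},
    "cat black":   {"p_e": 0.001, "p_f_given_e": 0.50},
    "dark feline": {"p_e": 0.010, "p_f_given_e": 0.05},
}

# No hand-written transfer rules: just argmax over P(e) * P(f|e).
best = max(candidates, key=lambda e: candidates[e]["p_e"] * candidates[e]["p_f_given_e"])
print(best)  # -> "black cat"
```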
It would mean using Bayesian (probabilistic) logic instead of Aristotelian logic.
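A minimal sketch of that contrast (my own illustration, with made-up numbers): an Aristotelian rule either fires or it doesn’t, while a Bayesian update just shifts a degree of belief.

```python
# Aristotelian-style inference: the conclusion either follows or it doesn't.
def rule_based(premise_holds: bool) -> bool:
    return premise_holds  # all-or-nothing, no degrees of belief

# Bayesian-style inference: evidence shifts a probability via Bayes' rule.
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) = P(E | H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

print(rule_based(True))              # True
print(bayes_update(0.5, 0.9, 0.2))   # ~0.818: belief strengthened, not settled
```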
My intuition says that CEV is unlikely to consist of vast rulesets. As you yourself stated, if you take a look at the most successful learning algorithms today, they all use some form of statistical approach (sometimes Bayes, sometimes something else).
EDIT: For some reason “statistical approach” leaves a bad taste in some people’s mouths. If you are one of these people, I’d be happy to know why so that I can either update my beliefs or figure out how to explain why the statistical approach is good.
I’ve always chalked it up to the fact that you can get decent results with a model that doesn’t even pretend to correspond to reality, and which is thus not robust to things like counterfactuals. Great results on exactly the domain you programmed for are just asking for someone to accidentally apply it to a different domain...
I don’t know if this is the actual reason.
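For what it’s worth, here is a toy illustration of that failure mode (synthetic data, not any real system): a statistical fit that looks excellent on the range it was trained on and goes badly wrong just outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Reality": y = sin(2*pi*x). The model only ever sees x in [0, 1].
x_train = rng.uniform(0.0, 1.0, size=200)
y_train = np.sin(2 * np.pi * x_train)

# Purely statistical model: a degree-5 polynomial fit by least squares.
coeffs = np.polyfit(x_train, y_train, deg=5)

# On the training domain the fit is close to the true value...
print(np.polyval(coeffs, 0.25), np.sin(2 * np.pi * 0.25))   # close to 1.0

# ...but hand it a counterfactual input outside that domain and it's nonsense.
print(np.polyval(coeffs, 3.0), np.sin(2 * np.pi * 3.0))     # far from the true 0.0
```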
Thanks. I think the reason I don’t find this compelling is that I see statistical methods being applied to increasingly general problems; I also see the same general ideas being applied to solve problems in many different domains (after, of course, specializing the ideas to the domain in question). It seems to me that if we continue on this path, the limit is being able to solve fully general problems. But this seems closer to an intuition than something I can actually convince anyone of.
It feels like statistical methods are just giving up on getting it ‘absolutely right’ in favour of getting it ‘good enough’, and for morality, that just doesn’t seem satisfactory. Maybe I’m underestimating statistical systems? I’d love to be corrected.