Actually, your metaphor is more apt than you give it credit for. Native speakers can’t be ‘proven wrong’ in their use of the language because 1) no language has a formal grammar, and 2) to the extent that there are rules, they are extracted from the way native speakers use the language. Something like morality, then.
Come to think of it, this can be used to construct a pretty intuitive response to those who claim that ‘without god there is no objective morality and therefore society will collapse’. There is no formal grammar for English, and yet we’re able to communicate pretty well.
[another stray thought] …so then CEV would be like trying to extract a fully formal grammar for a given language, only harder.
Native speakers can’t be ‘proven wrong’ in their use of the language

Right, but they can be proven wrong in the explanations they give about their use of language (except for rare pathological sentences, speakers of the same language agree on which sentences “feel wrong”). Disproving an explanation of one’s morality is much harder.
(I don’t know if I’m disagreeing with you here)
…so then CEV would be like trying to extract a fully formal grammar for a given language, only harder.

Not really: CEV doesn’t depend on everyone’s extrapolated volitions converging to total moral agreement.
(I tried to come up with a better CEV analogy, but I’m not sure that’s doable without stretching the metaphor far beyond any real explanatory usefulness; the ideas of volition and extrapolation don’t have any obvious linguistic analogues, as far as I can tell. The platonic computation of morality could be compared to universal grammar, but for our purposes that’s not a strong analogy, because universal grammar lacks the recursive, reflective property that morality has: we can ask ourselves “Should I make this change to the algorithm I use to answer ‘should’ questions?”. We could also ask whether we should change our linguistic algorithms, but that question wouldn’t loop back on itself the way morality does, so the problem of “extrapolating” it wouldn’t be as difficult or as interesting.)
You’re of course right, which is why I said CEV is even harder. Extracting a formal grammar only has to achieve coherence, and yet mainstream linguistics has all but given up on producing one. State-of-the-art machine translation tools use statistical inference instead of vast rulesets, and I shudder to think what that would mean for CEV.
It would mean using Bayesian (probabilistic) logic instead of Aristotelian logic.
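To make that contrast concrete, here is a toy sketch (all numbers invented for illustration) of updating a belief about a sentence’s grammaticality with Bayes’ rule, rather than deriving a hard yes/no from fixed rules:

```python
# Toy illustration: updating a belief with Bayes' rule instead of a hard rule.
# All probabilities below are made up for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """P(H | E) = P(E | H) * P(H) / P(E)"""
    return likelihood * prior / evidence_prob

# Prior belief that a sentence is grammatical.
prior = 0.5
# Probability a native speaker accepts it, given that it is / isn't grammatical.
p_accept_given_ok = 0.95
p_accept_given_bad = 0.10
# Total probability of acceptance (law of total probability).
p_accept = p_accept_given_ok * prior + p_accept_given_bad * (1 - prior)

# Belief after observing one acceptance judgment.
posterior = bayes_update(prior, p_accept_given_ok, p_accept)
print(round(posterior, 3))  # → 0.905
```

A second acceptance judgment would simply feed the posterior back in as the new prior; an Aristotelian ruleset has no analogous way to accumulate graded evidence.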
My intuition says that CEV is unlikely to consist of vast rulesets. As you yourself stated, if you take a look at the most successful learning algorithms today, they all use some form of statistical approach (sometimes Bayes, sometimes something else).
EDIT: For some reason “statistical approach” leaves a bad taste in some people’s mouths. If you are one of these people, I’d be happy to know why so that I can either update my beliefs or figure out how to explain why the statistical approach is good.
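To show what I mean by “statistical approach”, here is a toy sketch (the corpus and test sentences are invented for illustration): instead of hand-written grammar rules, score a sentence by how often its word pairs appear in observed usage, so the “rules” are nothing but statistics extracted from how speakers actually talk:

```python
from collections import Counter

# Tiny invented corpus; a real system would use millions of sentences.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran on the mat",
]

# Count bigrams: the "rules" are just statistics extracted from usage,
# much as the thread says rules are extracted from native speakers.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        bigrams[(a, b)] += 1

def score(sentence):
    """Higher = more like observed usage. No grammar rules anywhere."""
    words = sentence.split()
    return sum(bigrams[(a, b)] for a, b in zip(words, words[1:]))

print(score("the cat sat on the rug"))  # familiar word order → 8
print(score("rug the on sat cat the"))  # same words, scrambled → 0
```

Note that the novel-but-natural sentence scores well even though it never appears in the corpus, which is roughly why this beats brittle rulesets in practice.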
I’ve always chalked it up to the fact that you can get decent results with a model that doesn’t even pretend to correspond to reality, and which is thus not robust to things like counterfactuals. Great results on exactly the domain you programmed for are just an invitation for someone to accidentally apply the model to a different domain...
I don’t know if this is the actual reason.
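That worry can be illustrated with a toy example (all data invented): a straight-line model fitted to a small training domain looks decent there, yet is badly wrong the moment it is applied outside that domain:

```python
# Toy illustration: a model can fit its training domain acceptably
# while being badly wrong outside it. Data is invented.

# True relationship (unknown to the model): y = x * x
train_x = [0.0, 0.5, 1.0, 1.5, 2.0]
train_y = [x * x for x in train_x]

# Fit a straight line y = a*x + b by ordinary least squares (closed form).
n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
den = sum((x - mean_x) ** 2 for x in train_x)
a = num / den
b = mean_y - a * mean_x

def model(x):
    return a * x + b

# Decent on the training domain...
print(max(abs(model(x) - x * x) for x in train_x))  # → 0.5
# ...but far off when applied to a different domain.
print(abs(model(10.0) - 10.0 * 10.0))  # → 80.5
```

The fitted line never pretended to capture the quadratic structure; it only matched the observed region, which is exactly the “good on the programmed domain, fragile elsewhere” failure described above.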
Thanks. I think the reason I don’t find this compelling is that I see statistical methods being applied to increasingly general problems; I also see the same general ideas being applied to solve problems in many different domains (after, of course, specializing the ideas to the domain in question). It seems to me that if we continue on this path, the limit is being able to solve fully general problems. But this seems closer to an intuition than something I can actually convince anyone of.
It feels like statistical methods are just giving up on getting it ‘absolutely right’ in favour of getting it ‘good enough’, and for morality, that just doesn’t seem satisfactory. Maybe I’m underestimating statistical systems? I’d love to be corrected.
Native speakers can’t be ‘proven wrong’ in their use of the language

I’d say they can, within limits: they can recognize that certain ways of saying things “feel wrong”, and you’ll rarely find speakers of the same dialect disagreeing over whether a sentence “feels wrong” (though there are probably some border cases).