You’re of course right, which is why I said CEV is even harder. Extracting a formal grammar only has to achieve coherence, and yet mainstream linguistics has all but given up. State-of-the-art machine translation tools use statistical inference instead of vast rulesets, and I shudder to think what that would mean for CEV.
It would mean using Bayesian (probabilistic) logic instead of Aristotelian logic.
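For concreteness, here is a minimal sketch of the contrast being drawn (in Python, with illustrative numbers I made up): an Aristotelian-style rule asserts its conclusion outright, while a Bayesian update only shifts a degree of belief in proportion to the evidence, via P(H|E) = P(E|H)·P(H)/P(E).

```python
# Toy contrast between rule-based and probabilistic inference.
# All numbers below are illustrative assumptions, not real statistics.

# Aristotelian-style rule: "All swans are white" -> seeing a swan
# licenses the categorical conclusion "it is white".
def rule_based_color(is_swan: bool) -> str:
    return "white" if is_swan else "unknown"

# Bayesian-style inference: start from a prior degree of belief and
# update it with evidence via Bayes' rule:
#   P(H | E) = P(E | H) * P(H) / P(E)
def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Hypothesis H: "this swan is white".  Evidence E: a blurry photo that
# looks white.  The result is a revised probability, never a certainty.
posterior = bayes_update(prior_h=0.9, p_e_given_h=0.95, p_e_given_not_h=0.2)
print(f"P(white | photo) = {posterior:.3f}")  # ~0.977
```

The relevant point of the contrast is that the posterior never reaches 1, so a probabilistic system stays revisable in a way a categorical rule is not.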
My intuition says that CEV is unlikely to consist of vast rulesets. As you yourself stated, if you take a look at the most successful learning algorithms today, they all use some form of statistical approach (sometimes Bayes, sometimes something else).
EDIT: For some reason “statistical approach” leaves a bad taste in some people’s mouths. If you are one of these people, I’d be happy to know why so that I can either update my beliefs or figure out how to explain why the statistical approach is good.
I’ve always chalked it up to the fact that you can get decent results with a model that doesn’t even pretend to correspond to reality and which is thus not robust to things like counterfactuals. Great results on exactly the domain you programmed for are just asking for someone to accidentally apply it to a different domain...
I don’t know if this is the actual reason.
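To make that worry concrete, here is a toy sketch (pure Python, made-up data): a straight-line fit that looks fine on the domain it was fitted to, but goes badly wrong the moment it is applied outside that domain.

```python
# Toy illustration (made-up data) of the failure mode described above:
# a purely statistical fit can look great on the domain it was trained
# on and still be badly wrong when transplanted elsewhere.

# True process (unknown to the model): y = x**2
train_x = [0.0, 0.5, 1.0, 1.5, 2.0]
train_y = [x * x for x in train_x]

# Fit a straight line y = a*x + b by ordinary least squares.
n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y)) \
    / sum((x - mean_x) ** 2 for x in train_x)
b = mean_y - a * mean_x

# On the training domain the line looks fine...
print(f"fit at x=1.5: {a * 1.5 + b:.2f}  (true value {1.5**2:.2f})")
# ...but applied to a different domain it is far off.
print(f"fit at x=10:  {a * 10 + b:.2f}  (true value {10**2:.2f})")
```

The fit is not “wrong” on its own terms; it simply encodes nothing about why the training points look the way they do, so nothing in it warns you against carrying it into a new domain.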
Thanks. I think the reason I don’t find this compelling is that I see statistical methods being applied to increasingly general problems; I also see the same general ideas being applied to solve problems in many different domains (after, of course, specializing the ideas to the domain in question). It seems to me that if we continue on this path, the limit is being able to solve fully general problems. But this seems closer to an intuition than something I can actually convince anyone of.
It feels like statistical methods are just giving up on getting it ‘absolutely right’ in favour of getting it ‘good enough’, and for morality, that just doesn’t seem satisfactory. Maybe I’m underestimating statistical systems? I’d love to be corrected.