Talk of “approaches” in AI has a similar insidious effect to that of “-ism”s of philosophy, compartmentalizing (motivation for) projects from the rest of the field.
That’s an interesting idea. Would you share some evidence for that (anecdotes or whatever)? I sometimes think in terms of a ‘Bayesian approach to statistics’.
I think the “insidious effect” exists and isn’t always a bad thing.