“It’s not what you don’t know that kills you but what you know that isn’t so.”
The issue with pre-paradigmatic fields is that most of our assumptions about them are both implicit and wrong. Philosophy at its best is great at digging up, explicating, and questioning such assumptions. That work has not been done well enough for AI/ML, though it seems to be ongoing, with the new models providing fresh insights into how humans do and do not think. I'd guess there is plenty of newly low-hanging fruit in epistemology exposed by the likes of GPT-3, DALL-E, Stable Diffusion, etc. We just need to carefully figure out which implicit assumptions now stand naked, exposed, and wrong.