I think the use of AI tools could have results similar to human cognitive enhancement, which I expect to be basically helpful. Because they’re trained on humans, they’ll have more trouble with capabilities that would be improved by something like “bigger brain size” than with those improved by “faster thought” or “reducing entropic error rates / wisdom of the crowds”.

In general, one can expect more success at this sort of thing by having some idea of what problem is even being solved. A lot of what happens in philosophy departments isn’t best explained by “solving the problem” (which is under-defined anyway) and could be explained by motives like “building connections”, “getting funding”, or “being on the good side of powerful political coalitions”. So the psychology/sociology of philosophy seems like a useful approach to understanding what is even being done when humans say they’re trying to solve philosophy problems.