The question is perhaps not so much about LMs as about the nature of philosophy. Is philosophy really much (anything?) beyond language modelling as done by humans, plus some creativity (covered in the post), plus intuitive moral reasoning? The human-level language modelling required may be quite advanced, beyond the abilities of current LMs, but I have little doubt that within a few years, with some incremental algorithmic improvements and fine-tuning on the right kinds of text, LMs will clear that bar.
Regarding reasoning, I also disagree, but I don't want to explain why (I'd rather not throw around capabilities ideas publicly).