On the overall point of using LLMs for reasoning, this report (the output of a team at AI Safety Camp 2023) might be interesting — it is rather broad-ranging and specifically about argumentation in logic, but maybe useful context: https://compphil.github.io/truth/