On my inside model of how cognition works, I don’t think “able to automate all research but can’t do consequentialist reasoning” is a coherent property that a system could have.
I actually basically agree with this quote.
Note that I said “incapable of doing non-trivial consequentialist reasoning in a forward pass”. The overall LLM agent in the hypothetical is absolutely capable of powerful consequentialist reasoning, but it can only do this by reasoning in natural language. I’ll try to clarify this in my comment.