In the context of his argument I think the claim is reasonable, since I interpreted it as the claim that, because it can be used as a tool that designs plans, it has already overcome the biggest challenge of being an agent.
But if we take that claim out of context and interpret it literally, then I agree that it’s not a justified statement per se. It may be able to simulate a plausible causal explanation, but that is very different from actually knowing it. As long as you only have access to partial information, there are theoretical limits to what you can know about the world. That said, it’s hard to think of contexts where that gap would matter much in practice.