I am not arguing that we’ll end up building tool AI; I do think it will be agent-like. At a high level, I’m arguing that the intelligence and agentiness will increase continuously over time, and as we notice the resulting (non-existential) problems we’ll fix them, or start over.
I agree with your point that long-term planning will develop even with a bunch of heuristics.