Personally, I’d guess that we could see a lot of improvement through clever use of safe AIs. Even if we stopped improving LLMs today, I think we’d still have a long way to go before making good use of current systems.
Just because there are potentially risky AIs down the road doesn’t mean we should ignore the productive use of safe AIs.
Minor flag, but I’ve thought about some similar ideas, and here’s one summary:
https://forum.effectivealtruism.org/posts/YpaQcARgLHFNBgyGa/prioritization-research-for-advancing-wisdom-and