To a first approximation, my futurism is time acceleration; the risks are the typical risks sans AI, but on a hyperexponential timescale à la Roodman. Even a more gradual takeoff would imply more risk to global stability, on faster timescales than anything we've experienced in history; the wrong winners of an AGI race could create various dystopias.
I can't point to such a site; however, you should be aware of AI Optimists (I'm not sure whether Jacob plans to write there). Also follow the work of Quentin Pope, Alex Turner, Nora Belrose, etc. I expect such a site would point out what they feel are the most important risks. I don't know of anyone rational, no matter how optimistic, who doesn't think there are substantial ones.
If you meant risks for current LLMs, some are misuse of current LLMs by humans, and others include harmful content, harmful hallucination, privacy, memorization, bias, etc. For some other models, such as ranking/multiple-ranking systems, I have heard worries about deception as well (this is only what I recall hearing, so it might be completely wrong).
Do you have a post or blog post on the risks we do need to worry about?
No, and that’s a reasonable ask.