A lot of things humans do are AGI-complete: in a business-as-usual world, they won’t be automated before everything else changes too, around the same time. There is no timeline where some of them happen 4 years from now while others happen 7 years after that. Possibly even good self-driving cars are AGI-complete, but programming certainly is.
Thus there is no straightforward fire alarm: anything that actually happens before AGI is not AGI-complete, and so doesn’t directly demonstrate the possibility of an AGI; and anything that is AGI-complete won’t work before AGI.
I made a bit of a mistake in my wording of the post. I wrote:
I think the rate of advancement of LLMs indicates that this is possible in the near-term, <5 years, and could result in significant financial problems
I accidentally used weasel words here; it has become a force of habit due to the style of writing required for my job. I meant to introduce the possibility as a serious risk that is obviously worth considering, not to claim that it would probably (>50%) happen very soon. The words “I think”, “indicates”, “this is possible”, and “could result” contrasted badly with the precise numbers I put immediately afterwards, and that was entirely my mistake.
My concern here was that, with a little bit of real effort, someone on LW could forecast whether the labor market in the Bay Area is headed for a nightmare scenario, which has significant implications for humanity’s ~300 AI safety researchers, who are largely located inside the Bay Area economy. This was based entirely on recent LLM advancements, not on AGI timelines or AGI indicators.