I agree that <15% seems too low for most reasonable definitions of 1-10 hours and the singularity. But I’d guess I’m more sympathetic than you, depending on the definitions Nathan had in mind.
I think both of the phrases “AI capable of doing tasks that took 1-10 hours” and “hit the singularity” are underdefined, and making them clearer could lead to significantly different probabilities here.
For “capable of doing tasks that took 1-10 hours in 2024”:
If we’re saying that “AI can do every cognitive task that takes a human 1-10 hours in 2024 as well as (edit: the best) human expert”, I agree it’s pretty clear we’re getting extremely fast progress at that point, not least because AI will be able to do the vast majority of tasks that take much longer than that by the time it can do all 1-10 hour tasks. However, if we’re using a weaker definition like the one Richard used (“on most cognitive tasks, it beats most human experts who are given 1-10 hours to perform the task”), I think it’s much less clear, due to human interaction bottlenecks.
Also, it seems like the distribution of relevant cognitive tasks that you care about changes a lot on different time horizons, which further complicates things.
Re: “hit the singularity”, I think in general there’s little agreement on a good definition here. E.g., the definition in Tom’s report is based on the doubling time of “effective compute in 2022-FLOP” shortening after “full automation”, and it’s unclear what that corresponds to in terms of real-world impact, since both of those terms are themselves underdefined/hard to translate into actual capability and impact metrics.
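To spell out the “doubling time shortening” criterion a bit (my own gloss, not necessarily the exact formulation in Tom’s report): let $C(t)$ be effective compute in 2022-FLOP at time $t$ and $\tau(t)$ its doubling time, so that

$$C(t + \tau(t)) = 2\,C(t).$$

The condition is then roughly that, after full automation at some time $t_0$, each successive doubling is faster than the last,

$$\tau(t_2) < \tau(t_1) \quad \text{for all } t_0 \le t_1 < t_2,$$

i.e. growth in effective compute is superexponential rather than merely exponential.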
I would be curious to hear the definitions you and Nathan had in mind regarding these terms.
Yeah, I was trying to use Richard’s terms.
I also guess that the less training data there is, the less good the AIs will be. So while they may be good at setting up a dropshipping website for shoes (a 1-10 hour task), they may not be good at alignment research.
To me the singularity is when things are undeniably zooming, or perhaps even have zoomed. New AI tech is coming out daily, or perhaps there is even godlike AGI. What do folks think is a reasonable definition?
For “capable of doing tasks that took 1-10 hours in 2024”, I was imagining an AI that’s roughly as good as a software engineer who gets paid $100k-$200k a year.
For “hit the singularity”, this one is pretty hazy. I think I’m imagining that the Metaculus AGI question has resolved YES, and that the superintelligence question is possibly also resolved YES; I’m imagining a point where AI is better than 99% of human experts at 99% of tasks. Although I think it’s pretty plausible that we could enter enormous economic growth with AI that’s roughly as good as humans at most things (I expect the main things stopping this to be voluntary non-deployment and government intervention).
Yeah that sounds about right. A junior dev who needs to be told to do individual features.
Your “hit the singularity” doesn’t sound wrong, but I’ll need to think about it.