What I mean[1] is that success seems unlikely relative to what the scale implies: the graph levels off on the log scale before it gets there. This claim depends on the existence of a reference human who solves the problem in 1 month. There are some hard problems that take 30 years, but those aren’t relevant to the claim, since it’s about the range of useful slowdowns relative to human effort. The 1-month human remains human on the other side of the analogy, so doesn’t get impossible levels of starting knowledge; instead it’s the 20-year-failing human who becomes the 200-million-token-failing AI, which fails despite a knowledge advantage.
“If it takes a human 1 month to solve a difficult problem, it seems unlikely that a less capable human who can’t solve it within 100 months of effort can still succeed in 10 000 months”
That is another implied claim, though it’s not actually observable as evidence, and it requires the 10,000 months to pass without advances in the relevant externally generated science (which is easier to imagine over 20 years with a sufficiently obscure problem). Progress like that is possible for sufficiently capable humans, but in that case I don’t think there will be an even more capable human who solves the problem in 1 month. The relevant AIs are less capable than humans, so to the extent the analogy holds, they similarly won’t be able to make productive use of much longer exploration that is essentially serial.
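To visualize the shape the claim assumes, here is a minimal sketch (my illustration, not anything measured) that plots a purely hypothetical success-probability curve against effort, measured as a multiple of the 1-month reference human’s effort. The logistic-in-log-effort form and every parameter in it are assumptions chosen only to show a curve that levels off on a log scale, with the 100-month and 10,000-month points marked.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical illustration only: success probability of a less capable
# solver as a function of effort, expressed as a multiple of the 1-month
# reference human's effort. The logistic-in-log-effort shape and its
# parameters are assumptions, not an empirical model.
effort_multiplier = np.logspace(0, 5, 200)            # 1x .. 100,000x the reference effort
log_effort = np.log10(effort_multiplier)
p_success = 0.6 / (1.0 + np.exp(-2.0 * (log_effort - 1.0)))  # saturates well below 1

plt.figure(figsize=(6, 4))
plt.semilogx(effort_multiplier, p_success)
plt.axvline(100, linestyle="--", label="100 months (fails)")
plt.axvline(10_000, linestyle=":", label="10,000 months")
plt.xlabel("effort as a multiple of the 1-month reference human")
plt.ylabel("hypothetical probability of eventual success")
plt.title("Illustrative leveling-off on a log scale (assumed, not measured)")
plt.legend()
plt.tight_layout()
plt.show()
```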
I considered this issue when writing the comment, but the range itself couldn’t be fixed, since both the decades-long failure and the month-long deliberation seem important, and then there is the human lifespan. My impression is that adding non-concrete details to the kind of top-level comment I’m capable of writing makes it weaker. But the specific argument for not putting in this detail was that this is a legibly implausible kind of mistake for me to make, and such arguments feed the norm of others not pointing out mistakes, so on reflection I don’t endorse this decision. Perhaps I should use footnotes more.