I also thought this. Then suppose we run a facility full of robots and have them synthesize and measure the material properties of all 2.2 million candidate crystals. Replication is cheap and would be done automatically, so we don’t waste time on materials that only seem good because of an error.
Then a human scientist writes a formula that weighs several properties for suitability to a given task, sorts the spreadsheet of results by that formula, orders a new device built from the top-scoring materials, writes a paper with the help of a GPT, publishes it, and collects the rewards for this amazing new discovery.
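As a toy illustration of that last human step, here is a minimal sketch of scoring and sorting the measured candidates. The material names, property columns, target value, and weights are all hypothetical placeholders, not anything from a real screening run:

```python
# Toy sketch of the final human step: rank robot-measured candidates by a
# hand-written suitability formula and pick the top few. All values are made up.
measurements = [
    {"material": "A2B3", "stability": 0.92, "band_gap_eV": 1.4, "cost_index": 0.3},
    {"material": "XY2",  "stability": 0.75, "band_gap_eV": 1.1, "cost_index": 0.1},
    {"material": "C3D",  "stability": 0.98, "band_gap_eV": 2.6, "cost_index": 0.8},
]

def suitability(row, target_gap_eV=1.3):
    """Weighted score for one hypothetical task: favor stability, a band gap
    near the target, and low cost. The weights are arbitrary placeholders."""
    gap_fit = 1.0 - abs(row["band_gap_eV"] - target_gap_eV) / target_gap_eV
    return 0.5 * row["stability"] + 0.4 * gap_fit - 0.1 * row["cost_index"]

ranked = sorted(measurements, key=suitability, reverse=True)
for row in ranked[:2]:  # the top-scoring materials go into the new device
    print(row["material"], round(suitability(row), 3))
```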
So I think the OP is reasoning that the last 1 percent or 0.1 percent contributed by humans means the model isn’t fully autonomous? And I have seen a kind of bias on LessWrong: many posters went to elite schools and do elite work, and they don’t realize how many other people are needed for anything to get done. For example, every cluster of a million GPUs requires a large crew of technicians, plus all the factory workers and engineers who designed and built the hardware.
In terms of human labor hours, 10 AI researchers using a large cluster are greatly outnumbered by the other people involved whom they never see. Once you consider billion-dollar clusters, that is possibly thousands of people working full time, if even 20 percent of the cost pays for human labor at an average salary weighted toward Asia.
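A back-of-envelope sketch of that headcount claim, where every input is an assumed round number rather than a measured figure:

```python
# All inputs are assumptions: cluster cost, labor share, and average salary.
cluster_cost = 1_000_000_000   # a billion-dollar GPU cluster
labor_share = 0.20             # assume 20% of that cost ultimately pays for human labor
avg_salary = 30_000            # assumed average annual salary, weighted toward Asia

person_years = cluster_cost * labor_share / avg_salary
print(round(person_years))     # ~6700 person-years of labor, i.e. thousands of full-time workers
```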
This means AI-driven autonomy can be transformational even if the labor of the most elite workers can’t be done by AI.
In numbers: if just 1 of those 10 AI researchers can be automated, but 90 percent of the factory and mine workers can, and the total crew was 1000 people including all the invisible contributors in Asia, then the task of AI research now needs roughly 108 people (9 researchers plus about 99 of the other 990 workers) instead of 1000.
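Spelling out that arithmetic under the stated assumptions:

```python
# Headcount before and after partial automation, using the numbers above.
total_crew = 1000                                # everyone involved, including the invisible contributors
elite_researchers = 10
other_workers = total_crew - elite_researchers   # 990 technicians, factory and mine workers

researchers_left = elite_researchers - 1         # only 1 of the 10 researchers is automated
others_left = other_workers * (1 - 0.90)         # 90% of everyone else is automated

print(int(researchers_left + others_left))       # 9 + 99 = 108 people, versus 1000 before
```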
But from the OP’s perspective, the model hasn’t automated much: you need 9 elite researchers instead of 10. And the next generation of AI is more complex, so you hire more people, and fewer new ideas pan out as the low-hanging fruit gets plucked. If you focus only on the elite contributors, only the most powerful AI can be transformational. I have noticed this bias from several prominent LessWrong posters.
I am confused. I agree with the above scenario, but disagree that the focus is a bias. Sure, for human society the linear speed-up scale is important, but for the dynamics of the intelligence explosion the log scale seems more important. By your own account, we would rapidly move to a situation where the most capable humans and institutions are in fact the bottleneck, since anyone who can’t keep up with the speed at which their job is being automated away is not going to contribute a lot on the margin of intelligence self-improvement. For example, OpenAI/Microsoft/DeepMind/Anthropic/Meta might decide in the future to design and manufacture their chips in house because NVIDIA can’t keep up, etc.
I don’t know whether I expect this to make NVIDIA’s stock tank before the world ends. I expect everyone else to profit from slowly generating mundane utility with general AI tools, as is happening today.
Here’s another aspect you may not have considered. “Only” being able to automate the lower 90-99 percent of human industrial tasks results in a conventional industrial explosion. Scaling continues until the 1-10 percent of tasks that still require humans becomes the limiting factor.
A world that has 10 to 100 times today’s entire capacity for everything (consumer goods, durable goods like cars, weapons, and structures where factories can prefabricate them) is a transformed world.
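A minimal sketch of that scaling limit, assuming the human workforce stays fixed while everything automated can be replicated freely:

```python
# If a fixed human workforce handles only `human_fraction` of all tasks and the rest
# is automated, output can grow until that workforce is saturated by its remaining slice.
def max_scale_up(human_fraction: float) -> float:
    return 1 / human_fraction

for frac in (0.10, 0.01):
    print(f"{frac:.0%} of tasks still human -> ~{max_scale_up(frac):.0f}x today's capacity")
```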
And this feeds back into itself, as you realize: the crew of AI researchers trying to automate themselves now has a lot more hardware to work with, and so on.
This seems overall consistent with Thane’s statements in the post? They don’t make any claims about current AIs not being a transformative technology. Indeed, they do state that current AIs are a powerful technology.
In the third and last paragraph I try to explain why the OP and prominent experts like Matthew Barnett, Richard Ngo, and others all apply much harder standards for when AI will be transformative.
As a summary: advancing technology is mostly perspiration, not inspiration; automating the perspiration will be transformative.
This means AI-driven autonomy can be transformational even if the labor of the most elite workers can’t be done by AI.
Oh, totally. But I’m not concerned about transformations to human society in general; I’m concerned about AGI killing everyone. And what you’ve described isn’t going to lead to AGI killing everyone.
See my reply here for why I think complete autonomy is crucial.