Thanks for sharing your point of view. I tried to give myself a few days, but I'm afraid I still don't understand where you see the magic barrier preventing the transition from 3 to 4 from happening outside the realm of human control.
3 says the reason right there. Compute, data, or robotics/money.
What are you not able to understand with a few days of thought?
There is extremely strong evidence that compute is the limit right now. This is trivially correct: current LLM architectures are very similar to prior working attempts, for the simple reason that a single "try" at training at scale costs millions of dollars in compute. (And getting more money saturates: there is a finite number of training accelerators manufactured per quarter, and it takes time to ramp to higher volumes.)
To find something better, a hard superintelligence capped only by physics, obviously requires many tries at exploring the possibility space. (Even intelligent search algorithms need many function evaluations.)
Yes, it takes millions to advance, but companies are pouring BILLIONS into this, and number 3 can earn its own money and create its own companies/DAOs/some new networks of cooperation if it wanted, without humans realizing… Have you seen any GDP-per-year charts whatsoever? Why would you think we are anywhere close to saturation of money? Have you seen any emergent capabilities from LLMs in the last year? Why do you think we are anywhere close to saturation of capabilities per million dollars? Are Alpaca-like improvements somehow a one-off miracle, and things are somehow not getting cheaper, better, and more efficient in the future?
It could totally happen, but what I don't see is why you are so sure it will happen by default. Are you extrapolating some trend from non-public data, or just overly optimistic that 1+1 from previous trends will be less than 2 in the future, totally unlike the compound effects in AI advancement over the last year?
Because we are saturated right now. I gave evidence, and you can read the GPT-4 paper for more. See:
"getting more money saturates: there is a finite number of training accelerators manufactured per quarter, and it takes time to ramp to higher volumes"
"Billions" cannot buy more accelerators than exist, and the robot/compute/capability limits also cap the ROI that can be delivered, which means the billions are not infinite, as eventually investors get impatient.
What this means is that it may take 20 years or more of steady exponential growth (but at only 10-50 percent annually) to reach ASI, self-replicating factories, and so on.
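To make the compounding concrete, here is a minimal sketch of the arithmetic (the 10-50 percent growth rates come from the sentence above, but the 1,000x "ASI-scale" target is an illustrative assumption, not a figure from this exchange):

```python
# Illustrative arithmetic only: how long does steady exponential growth take to
# reach a large multiple of today's capability/output? The 1,000x target is an
# assumption for the sketch, not a claim from the thread.
import math

def years_to_reach(target_multiple: float, annual_growth: float) -> float:
    """Years of compounding at `annual_growth` (0.10 = 10%/yr) to reach `target_multiple`."""
    return math.log(target_multiple) / math.log(1.0 + annual_growth)

for rate in (0.10, 0.25, 0.50):
    print(f"{rate:.0%}/yr -> 1,000x in ~{years_to_reach(1000, rate):.0f} years")

# Output:
# 10%/yr -> 1,000x in ~72 years
# 25%/yr -> 1,000x in ~31 years
# 50%/yr -> 1,000x in ~17 years
```

Even the optimistic 50 percent case takes on the order of two decades to cover three orders of magnitude, which is the point being made here.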
On a cosmic timescale, or even within a human lifespan, this is extremely fast. I am noting this is more likely than "overnight" scenarios where someone tweaks a config file, an AI reaches high superintelligence, and it fills the earth with grey goo in days. There was not enough data in existence for the AI to reach high superintelligence; a "high" superintelligence would require thousands or millions of times as much training compute as GPT-4 (because it's a power law); and even once it is trained, it does not have sufficient robotics to bootstrap to nanoforges without years or decades of steady ramping to get ready to do that.
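A minimal sketch of the power-law point (the exponent alpha and the capability multipliers are illustrative assumptions, not values taken from the GPT-4 paper or this thread):

```python
# Illustrative only: if capability scales roughly as a power law in training
# compute (capability ~ compute**alpha with alpha < 1), then multiplying
# capability by m requires multiplying compute by m**(1/alpha). The value of
# alpha and the multipliers below are assumptions for the sketch.
def compute_multiplier(capability_multiplier: float, alpha: float = 0.3) -> float:
    """Relative training compute needed to multiply capability by the given factor."""
    return capability_multiplier ** (1.0 / alpha)

for m in (2, 5, 10):
    print(f"{m}x capability -> ~{compute_multiplier(m):,.0f}x training compute")

# Output (alpha = 0.3): 2x -> ~10x, 5x -> ~214x, 10x -> ~2,154x compute
```

With any exponent well below 1, even modest capability jumps demand orders of magnitude more training compute, which is why the relevant scale is "thousands or millions of times GPT-4's compute."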
(A high superintelligence is a machine that is not just a reasonable amount better than humans at all tasks, but is essentially a deity outputting perfect moves on every task, moves that take into account all of the machine's plans and its cross-task and cross-session knowledge.
So it might communicate with a lobbyist and 1e6 people at once and use information from all conversations in all conversations, essentially manipulating the world like a game of pool. Something genuinely uncontainable.)