Isn’t it just plausible that current deep learning methods are universal but currently inefficient, and thus it will take a huge amount of compute and/or algorithmic progress?
This can easily get you 10+ year timelines.
What’s the current best case for that hypothesis? I’m mildly skeptical of claims about the data-inefficiency of current methods compared to the human brain, due to it not being an apples to apples comparison + due to things like EfficientZero which seem pretty darn data-efficient.
As far as I understand, the gears to ascension means that either we have fast takeoff sometime in the next ten years or we don’t have it at all. We can get 10+ year timelines, but if so, it is going to be slow takeoff.
Ok, I guess I was unsure what “strong/weak” means here.
strong takeoff: AIs can quickly become many, many orders of magnitude more intelligent than humans within a few development cycles; anything less than 5 years to being able to outmaneuver any human at anything.
weak takeoff: AIs can more or less match human capability, but exceeding it looks similar to humans exceeding other humans’ capability. they eventually are drastically stronger than humans were, but there’s never a moment where they can have a sudden insight that gives them this strength; it’s all incremental progress all the way down. [edit months later: we’re already a ways into this one.]
strong takeoff is the thing yudkowsky still seems to be expecting: a from-our-perspective-godlike-machine that is so fast and accurate that we have absolutely no hope of even comprehending it, and which is so far beyond the capabilities of any intelligent entity now that they all look like a handwriting recognition model in comparison. it’s not obvious to me that strong takeoff is possible. I’d give it 80% probability that it is right now, but I have significant expectation of logarithmically diminishing returns such that the things that are stronger than us never get so much stronger that we have no hope of understanding what they’re doing. eg, if we’re 10% quality jpegs of the universe and the new model is a 70% quality jpeg, contrast to the superai model where we’re imagining a 99.999% quality jpeg or something. (go look up jpeg quality levels to get an idea what I’m saying here)
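(A minimal way to do that lookup yourself; Pillow is my arbitrary tool choice here and "photo.png" is a placeholder path, none of this is from the thread. It writes the same image at the quality levels the analogy mentions so you can compare them side by side:)

```python
# Sketch: save one image at the JPEG quality levels used in the analogy.
from PIL import Image

img = Image.open("photo.png").convert("RGB")  # placeholder input image
for q in (10, 70, 99):
    img.save(f"quality_{q}.jpg", "JPEG", quality=q)  # e.g. quality_10.jpg vs quality_99.jpg
```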
but I don’t see any model where AI isn’t a training run away from stronger than human in every capability by like 2030 or so, and I think it’s ridiculous that anyone thinks the alternative is highly plausible. we’re already so close, how can you think there’s that much left to do! seems to me that the limits on ai capability are already pretty much just training data.
FWIW, people talking about “slow” or “continuous” takeoff don’t typically expect that long between “human-ish level AI” and “god” if things go as fast as possible (like maybe 1 to 3 years).
See also “What a compute-centric framework says about takeoff speeds”.
I’ve been saying for a few years that “slow takeoff” should be renamed “fast takeoff” and “fast takeoff” should be renamed “discontinuous takeoff”.
mmm, more like “weak” takeoff = foom/singularity ~doesn’t happen even slowly, “strong” takeoff = foom/singularity ~happens, even if more slowly than expected.
weak takeoff is proposing “the curves saturate on ~data quality way sooner than you expected, and scaling further provides ongoing but diminishing returns”.
something like: does intelligence have a “going critical”-type thing at all, or are increasingly ~large AIs just trying harder and harder to squeeze a little more optimality out of the same amount of data.
my reason for thinking the singularity probably does happen is things in the genre of, eg, phi-2. but it’s possible that that’s just us not being all the way up a sigmoid that is going to saturate, and I generally have a high prior on sigmoid-saturation sorts of dynamics in the growth of individual technologies. updating off of ai progress, it seems like that sigmoid could be enormous! but I still put significant probability on the possibility that it isn’t.
I expect humans to be unequivocally beaten at ~everything in the next 10 years (and that’s giving a fairly wide margin! I’d be surprised if it takes that long) - the question I was commenting on is what happens after that.
No singularity seems pretty unlikely to me (e.g. 10%), and also I can easily imagine AI taking a while (e.g. 20 years) while still having a singularity.
Separately, no singularity plausibly implies no hinge of history and thus maybe implies that current work isn’t that important from a longtermist perspective
Well, humans are still at risk of being wiped out by a war with a successor species and/or a war with each other aided by a successor species, regardless of which of these models is true. Not dying as an individual or species is sort of a continuous always-a-hinge-of-history sort of thing.
(Sorry, I edited my comment because it was originally very unclear/misleading/wrong, does the edited version make more sense?)
replying to edited version,
Separately, no singularity plausibly implies we lose most value in the universe from the longtermist perspective.
I don’t really see why that would be the case. We can still go out and settle space, end disease, etc. it just means that starkly superintelligent machines turn out to not be alien minds by nature of their superintelligence.
it’s not obvious to me that strong takeoff is possible. I’d give it 80% probability that it is right now, but there’s significant weight on logarithmically diminishing returns such that the things that are stronger than us never get so much stronger that we have no hope of understanding what they’re doing
Other than compute requirements, have you considered what kinds of cognitive tasks you would assign a model to complete that would lead to developing this kind of superintelligence?
Remember you started with random numbers. In a sense, the annealing and SGD are saying, in words, “find the laziest function that will solve this regression robustly”. For LLMs, the regression is from a window of input tokens to the next-token continuation.
The input set is so large that the “laziest” algorithm with the least error seems to mimic the cognitive process humans use to generate words, since that is what correctly guesses the most tokens.
And then RL etc after that.
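(To make that regression framing concrete, a minimal toy sketch; the model shape, sizes, and fake data below are mine and purely illustrative, not anything from the comment. It starts from random weights and lets SGD minimize cross-entropy on the next token:)

```python
# Toy sketch: random initial weights + SGD on next-token cross-entropy.
import torch
import torch.nn as nn

vocab_size, context_len, dim = 1000, 16, 64

model = nn.Sequential(                         # weights start as random numbers
    nn.Embedding(vocab_size, dim),
    nn.Flatten(),                              # (batch, context_len * dim)
    nn.Linear(context_len * dim, vocab_size),  # logits over the next token
)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# fake data: windows of input tokens and the token that actually came next
inputs = torch.randint(0, vocab_size, (32, context_len))
next_tokens = torch.randint(0, vocab_size, (32,))

for step in range(100):
    logits = model(inputs)               # predicted scores for the next token
    loss = loss_fn(logits, next_tokens)  # error of the "regression"
    opt.zero_grad()
    loss.backward()
    opt.step()                           # nudge weights toward a better ("lazier") fit
```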
So when I think of the kinds of things a superintelligence is supposed to be able to do, I ask myself how you would build a valid test showing that the model will be able to do this in the real world.
This is harder than it sounds because in many cases, whether something is “correct” or not depends on information current sims can’t model well. For example, particle sims don’t model armor perfectly, FSM sims don’t model the wear of mechanical parts correctly, and we have only poor models for how a human would respond to a given set of words said a specific way.
So the flaw here is: say you come up with tasks humans can’t quite do on their own, but the sim can check whether they were done right. “Design an airliner”, for instance. So the sim models every bolt and the cockpit avionics and so on. All of it.
And an ASI model is trained and it can do this task. Humans cannot beat this “game” because there are millions of discrete parts.
But any aircraft the ASI designs has horrific failure modes and crashes eventually, because the sim is just a little off. And the cockpit avionics HMI is unusable because the model of what humans can perceive is slightly off.
So you collect more data and make the model better and so on, but it’s functionally “ceilinged” at just a bit better than humans, because the model becomes just superintelligent enough to max out the airliner design task and no more, or just smart enough to max out the suite of similar tasks.
It’s also never going to be able to one-shot a real airplane, just get increasingly close.
yeah this sounds like a reasonable description of the importance of extremely high quality data. that training data limit I ended on is not trivial by any means.
80 percent confidence seems unsupported by evidence, given that human data is very poor. For clear and convincing evidence of that, look at all the meta-analyses of prior studies, or the constant “well actually” rebuttals to facts people thought they knew (and then a rebuttal to the rebuttal, and in the end nobody knows anything). A world where analyzing all the data humans have on a subject leaves most people less confident than before they started is not one where we have the data to train a superintelligence.
Such a machine will be as confused as we are even if it has the memory to simultaneously assume every single assumption is both true and not true, and keep track of the combinatorial explosion of possibilities.
To describe the problem succinctly: if you have a problem that only a superintelligence can solve in front of you, and your beliefs about all the variables form a tree with hundreds of millions of possibilities (medical problems will be this way), you may have the cognitive capacity of a superintelligence but in actual effectiveness your actions will be barely better than humans. As in functionally not an ASI.
Getting the data is straightforward. You just need billions of robots. You replicate every study and experiment humans ever did, with robots this time; you replicate human body failures with “reference bodies” that are artificial and consistent in behavior. All data analysis is done from raw data, all conclusions always take into account all prior experiments’ data, no p-hacking.
We don’t have the robots yet, though apparently Amazon robotics is on an exponential trajectory, having added 750k robots in the last 2 years, which is more than all prior years combined.
Assuming the trajectory continues, it will be 22 years until 1 billion robots. Takeoff but not foom.
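(A back-of-the-envelope check of that figure; the fleet size and doubling time below are my assumptions, not numbers from the comment. A ~1.5M fleet doubling every ~2 years comes out to roughly 19 years; a slightly slower doubling time lands near the ~22-year figure above:)

```python
# Sketch: years until 1 billion robots under an assumed doubling time.
import math

current_fleet = 1.5e6        # assumed current robot count (rough, from the 750k figure)
doubling_time_years = 2.0    # assumed: 750k added in 2 years > all prior years combined
target = 1e9

doublings = math.log2(target / current_fleet)
years = doublings * doubling_time_years
print(f"{doublings:.1f} doublings, about {years:.0f} years")  # ~9.4 doublings, ~19 years
```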
there’s significant weight on logarithmically diminishing returns such that the things that are stronger than us never get so much stronger that we have no hope of understanding what they’re doing
If autonomous research-level AGIs are still 2 OOMs faster than humans, that leads to massive scaling of hardware within years even if they are not smarter, at which point it’s minds the size of cities. So the probable path to weak takeoff is a slow AGI that doesn’t get faster on near-future hardware, and being slow, it won’t soon help scale hardware.