This is incorrect, and you’re a world class expert in this domain.
What’s incorrect? My view that a cheap simulation of arbitrary human experts would be enough to end life as we know it one way or the other?
(In the subsequent text it seems like you are saying that you don’t need to match human experts in every domain in order to have a transformative impact, which I agree with. I set the TAI threshold as “economic impact as large as” but believe that this impact will be achieved by systems which are in some respects weaker than human experts and in other respects stronger/faster/cheaper than humans.)
Do you think 30% is too low or too high for July 2033?
This is why I went over the definitions of criticality. Once criticality is achieved, the odds drop to 0. A nuclear weapon that is prompt critical is definitely going to explode in bounded time because there are no futures where sufficient numbers of neutrons are lost to stop the next timestep from releasing even more.
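To make the definition concrete, here is a minimal sketch of the multiplication-factor picture of criticality I am using. The factor k and the starting count are illustrative numbers, not estimates of anything: if each event causes more than one follow-on event on average (k > 1), every future timestep is larger than the last; if k < 1, the process dies out.

```python
# Minimal sketch of criticality as a multiplication factor.
# k and the starting count are illustrative, not estimates of anything.

def run(k, n0=1.0, steps=10):
    """Each timestep multiplies the count of events by k."""
    n, history = n0, [n0]
    for _ in range(steps):
        n *= k
        history.append(round(n, 3))
    return history

print(run(k=2.0))  # supercritical: every timestep is larger than the last
print(run(k=0.7))  # subcritical: the process dies out
```

The prompt-critical weapon is the k > 1 case with no mechanism left that can pull k back below 1.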
What’s incorrect? My view that a cheap simulation of arbitrary human experts would be enough to end life as we know it one way or the other?
Your cheap expert scenario isn’t necessarily critical. Think of how it could quench, where you simply exhaust the market for certain kinds of expert services and cannot expand any further because of a lack of objective feedback and legal barriers.
An AI system that has hit the exponential criticality phase in capability is in the same situation as the nuclear weapon. It will not quench; that is not a possible outcome in any future timeline [except timelines with immediate use of nuclear weapons on the parties with this capability].
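To contrast the two cases in the same toy terms (the market cap and growth factor below are placeholders, not estimates): growth that looks exponential can still flatten out once the addressable market is exhausted, which is the quench case; the critical case is the one with no such ceiling.

```python
# Toy contrast between quenched and unquenched growth.
# MARKET_CAP and the growth factor are placeholder numbers.

MARKET_CAP = 10_000  # total demand the "cheap expert" services can capture

def expand(units, growth_factor=2.0, steps=12, cap=None):
    history = [units]
    for _ in range(steps):
        units *= growth_factor
        if cap is not None:
            units = min(units, cap)  # quench: no unmet demand left to capture
        history.append(int(units))
    return history

print(expand(10, cap=MARKET_CAP))  # doubles, then flattens at the market cap
print(expand(10, cap=None))        # the critical case: no ceiling in sight
```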
So your question becomes: what are the odds that economic or physical criticality will be reached by 2033? I have doubts myself, but fundamentally the following has to happen for robotics:
A foundation model that includes physical tasks, like this.
Sufficient backend infrastructure to make mass usage across many tasks possible, with convenient licensing. Right now only Google and a few startups have anything using this approach. Colossal scale is needed: something like ROS 2, but a lot better.
No blocking legal barriers. This is going to require a lot of GPUs to learn from all the video in the world. Each robot in the real world needs a rack of them just for itself.
Generative physical sims. Similar to generative video, but generating 3D worlds where short ‘dream’-like segments of events happening in the physical world can be modeled. This is what you need to automatically add generality and go from a 60% success rate to 99%+ (see the sketch after these lists for why that gap matters). Tesla has demoed some, but I don’t know of good, scaled, readily licensed software that offers this.
For economics:
1. Revenue-collecting AI services good enough to pay for at scale
2. Cheap enough hardware, such as from competitors to Nvidia, that makes inference affordable even for powerful models
Either form of criticality is transformative.
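On the 60% vs. 99%+ point in the robotics list above: one way to see why that gap matters (my illustration, with an assumed chain length, not something stated above) is that real jobs chain many tasks together, so per-task reliability compounds multiplicatively.

```python
# Success over a chain of tasks compounds multiplicatively.
# The 10-task chain length is an assumed, illustrative number.

def chain_success(per_task_rate, num_tasks=10):
    return per_task_rate ** num_tasks

print(f"{chain_success(0.60):.3f}")   # ~0.006: a 10-step job almost never finishes
print(f"{chain_success(0.99):.3f}")   # ~0.904: usually finishes
print(f"{chain_success(0.999):.3f}")  # ~0.990
```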
You speak with such a confident authoritative tone, but it is so hard to parse what your actual conclusions are.
You are refuting Paul’s core conclusion that there’s a “30% chance of TAI by 2033,” but your long refutation is met with: “wait, are you trying to say that you think 30% is too high or too low?” Pretty clear sign you’re not communicating yourself properly.
Even your answer to his direct follow-up question: “Do you think 30% is too low or too high for July 2033?” was hard to parse. You did not say something simple and easily understandable like, “I think 30% is too high for these reasons: …” you say “Once criticality is achieved the odds drop to 0 [+ more words].” The odds of what drop to zero? The odds of TAI? But you seem to be saying that once criticality is reached, TAI is inevitable? Even the rest of your long answer leaves in doubt where you’re really coming down on the premise.
By the way, I don’t think I would even be making this comment myself if A) I didn’t have such a hard time trying to understand what your conclusions were myself and B) you didn’t have such a confident, authoritative tone that seemed to present your ideas as if they were patently obvious.
I’m confident about the consequences of criticality. It is a mathematical certainty: it creates a situation where all future possible timelines are affected. For example, COVID was an example of criticality. Once you had sufficient evidence to show the growth was exponential, which was available in January 2020, you could be completely confident all future timelines would have a lot of COVID infections in them, and that it would continue until quenching, which turned out to be infection of ~44% of the population of the planet. (And you can estimate that final equilibrium number from the R0.)
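For reference, the “estimate that final equilibrium number from the R0” step is the final-size relation for an idealized homogeneous-mixing epidemic, z = 1 − e^(−R0·z), where z is the fraction of the population ever infected. A sketch with illustrative R0 values; interventions, waning immunity, and population heterogeneity are all ignored in this simple relation:

```python
import math

# Final-size relation for an idealized homogeneous-mixing epidemic:
#   z = 1 - exp(-R0 * z),  z = fraction of the population ever infected.
# Solved by fixed-point iteration. R0 values are illustrative only.

def final_size(r0, iterations=200):
    z = 0.5  # initial guess; converges to the nonzero root for R0 > 1
    for _ in range(iterations):
        z = 1.0 - math.exp(-r0 * z)
    return z

for r0 in (1.3, 1.5, 2.5):
    print(f"R0 = {r0}: final attack fraction ~ {final_size(r0):.2f}")
```

In this idealized relation, a final attack fraction of about 44% corresponds to an effective R0 of roughly 1.3.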
Once AI reaches a point where critical mass happens, it’s the same outcome. No futures exist where you won’t see AI systems in use everywhere for a large variety of tasks (economic criticality) or billions of robots, or numbers best written in scientific notation, in use (physical criticality, true AGI criticality cases).
July 2033 thus requires the “January 2020” data to exist. There don’t have to be billions of robots yet, just a growth rate consistent with that.
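To illustrate what a growth rate “consistent with that” means in numbers (all numbers here are hypothetical, purely to show the doubling arithmetic): going from a million deployed robots to a billion takes about ten doublings, so a sustained one-year doubling time gets there in roughly a decade.

```python
import math

# Back-of-envelope: time from a small deployed fleet to billions of robots,
# given a sustained doubling time. All numbers are hypothetical.

def years_to_reach(start, target, doubling_time_years):
    doublings = math.log2(target / start)
    return doublings * doubling_time_years

print(years_to_reach(1e6, 1e9, doubling_time_years=1.0))  # ~10 years
print(years_to_reach(1e6, 1e9, doubling_time_years=2.0))  # ~20 years
```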
I do not know precisely when the minimum components needed to reach said critical mass will exist.
I gave the variables of the problem. I would like Paul, who is a world class expert, to take the idea seriously and fill in estimates for the values of those variables. I think his model for what is transformative and what the requirements are for transformation is completely wrong, and I explain why.
If I had to give a number I would say 90%, but a better expert could develop a better number.
Update: edited to 90%. I would put it at 100% because we are already past investor criticality, but the system can still quench if revenue doesn’t continue to scale.
It seems like criticality is sufficient, but not necessary, for TAI, and so only counting criticality scenarios causes underestimation.
This was a lot clearer, thank you.
My view that a cheap simulation of arbitrary human experts would be enough to end life as we know it one way or the other?
Just to add to this: many experts are just faking it. Simulating them would not help. By faking it, I mean that they are solving, as humans, an RL problem that can’t be solved, so their learned policy is deeply suboptimal and in some cases simply wrong. Think of expert positions in social science, government, law, economics, business consulting, and possibly even professors who chair computer science departments but are not actually working on scaled, cutting-edge AI. Each of these “experts” cannot know a true policy that is effective; most of their status comes from various social proofs and finite Official Positions. The “cannot” is because they will not, in their lifespan, receive enough objective feedback to learn a policy that is definitely correct. (They are more likely to be correct than non-experts, however.)
(In the subsequent text it seems like you are saying that you don’t need to match human experts in every domain in order to have a transformative impact, which I agree with. I set the TAI threshold as “economic impact as large as” but believe that this impact will be achieved by systems which are in some respects weaker than human experts and in other respects stronger/faster/cheaper than humans.)
I pointed out that you do not need to match human experts in any domain at all. Transformation depends on entirely different variables.