As others have mentioned, this entire line of reasoning is grotesque, and sometimes I wonder if it is performative. Coordinating against ASI and dying of old age is completely reasonable: it increases the odds of your genetic replacements remaining while technology continues to advance along safer routes.
The alternate gamble of killing everyone is so insane that full-scale nuclear war, which would destroy all supply chains for ASI, seems completely justified. While it would likely kill 90 percent of humanity, the remaining population would survive and repopulate sufficiently.
One billion years is not a reasonable argument for taking risks that could end humanity now: extrapolated sufficiently, it would be the equivalent of killing yourself now because the heat death of the universe is likely.
We will always remain helpless against some aspects of reality, especially the ones we don't know about: for all we know, there is damage to spacetime in our local region.
This is not an argument to risk the lives of others who do not want to be part of this. I would violently resist this and push the red button on nukes, for one.
In addition to all you've said, this line of reasoning ALSO puts an unreasonable degree of expectation on ASI's potential and makes it into a magical infinite wish-granting genie that would thus be worth any risk to have at our beck and call. And that just doesn't feel backed by reality to me. ASI would be smarter than us, but even assuming we can keep it aligned (big if), it would still be limited by the physical laws of reality. If some things are impossible, maybe they're just impossible. It would really suck ass if you risked the whole future lightcone, ended up in that nuclear-blasted world living in a bunker, and THEN, when you ask the ASI for immortality, it laughs in your face and goes "what, you believe in those fairy tales? Everything must die. Not even I can reverse entropy."
I named a method that is compatible with known medical science and known information; it simply requires more labor and a greater level of skill than humans currently have. Every step already happens in nature; it is just currently too complex for us to reproduce.
Here’s an overview:
1. Repairing the brain by adding new cells. Nature builds new brains from scratch out of new cells, so this step is possible.
2. Bypassing the remaining gaps in the brain, despite (1), with neural implants that restore missing connectivity. This has been demonstrated in rat experiments, so it is possible.
3. Building new organs from de-aged cell lines:
   a. Nature creates de-aged cell lines with each new embryo.
   b. Nature creates new organs with each embryonic development.
4. Stacking parallel probabilities so that the person's MTBF (mean time between failures) is sufficiently long. This exists and is a known technique.
This in no way defeats entropy. Eventually the patient will die, but it is possible to stack probabilities to make their projected lifespan the life of the universe, or on the order of a million years, if you can afford the number of parallel systems required. The system constantly requires energy input and recycling of a lot of equipment.
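To make step 4 concrete, here is a minimal back-of-the-envelope sketch in Python. Everything in it is an assumption for illustration: failures of the parallel systems are treated as independent, failed units are assumed to be replaced within the year, and the per-system annual failure probability is a placeholder rather than a medical estimate.

```python
# Back-of-the-envelope sketch of probability stacking (step 4).
# Assumptions (illustrative only): independent failures, failed units are
# replaced within the year, and a placeholder per-system failure probability.

p_single_failure = 0.1      # assumed annual chance that any one parallel system fails
target_years = 1_000_000    # desired projected lifespan, per the text

def expected_years(n_parallel: int, p_fail: float) -> float:
    """Expected years until every parallel system fails in the same year."""
    return 1.0 / (p_fail ** n_parallel)

n_parallel = 1
while expected_years(n_parallel, p_single_failure) < target_years:
    n_parallel += 1

print(f"Parallel systems needed: {n_parallel}")
print(f"Expected years to total failure: {expected_years(n_parallel, p_single_failure):,.0f}")
```

With these placeholder numbers the answer comes out to only a handful of redundant systems, which is the sense in which the approach is "known": the cost is in building, powering, and recycling that hardware indefinitely, not in any new science.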
Obviously a better treatment involves rebuilt bodies etc but I explicitly named a way that we are certain will work.
There is no ‘genie’, no single ASI asked to do any of the above. That’s not how this works. See here for how to subdivide the tasks: https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model and https://www.lesswrong.com/posts/HByDKLLdaWEcA2QQD/applying-superintelligence-without-collusion for how to prevent the system from deceiving you.
Note that if you apply the above links to this task, it means there is a tree of ASI systems, each unable to rule out that it is in fact in a training simulation, and each responsible for only a very narrow part of the effort of keeping a specific individual alive.
Note I am assuming you can build ASI, restrict their inputs to examples from the same distribution as the training set (pause with an error on anything out-of-distribution), and disable online learning/reset session data often as subtasks are completed.
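As a rough illustration of what those restrictions could look like in code, here is a minimal sketch. The `model` and `ood_score` callables are hypothetical stand-ins, not any real API: one is the frozen tool model, the other is a detector scored against the training distribution.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

OOD_THRESHOLD = 0.9  # placeholder: scores above this are treated as out-of-distribution


class OutOfDistributionError(RuntimeError):
    """Pause with an error so a human reviews the input instead of the model improvising."""


@dataclass
class ToolSession:
    # Hypothetical stand-ins: a frozen model (no online learning, weights never
    # updated here) and an OOD detector fit to the model's training distribution.
    model: Callable[[Any, dict], Any]
    ood_score: Callable[[Any], float]
    scratch: dict = field(default_factory=dict)  # session data, wiped after each subtask

    def run_subtask(self, task_input: Any) -> Any:
        # Restrict inputs to the training distribution; pause with an error on OOD.
        if self.ood_score(task_input) > OOD_THRESHOLD:
            raise OutOfDistributionError("Input looks out-of-distribution; pausing for review.")
        result = self.model(task_input, self.scratch)
        # Reset session data as soon as the subtask completes.
        self.scratch = {}
        return result
```

Each node in the tree of systems from the previous paragraph would sit behind a wrapper like this, seeing only its own narrow subtask.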
What makes the machine an ASI is that it can consider far more information at once than a human, is much faster, and has learned from many more examples than humans, both in general (it was trained on all the text, video, and audio recordings in existence) and through many thousands of years of practice at specialized tasks.
This is a tool ASI: the above restrictions limit it, but it cannot be given long, open-ended tasks or you risk rampancy. Good task: paint this car in the service bay. Bad task: paint all the cars in the world.
People are going to build these in the immediate future, just as soon as we find more effective algorithms and get enough training accelerators and money together. A scaled-up, multimodal GPT-5 or GPT-6 that has robotics I/O is a tool ASI.
Anyone developing an ASI like this is doing it within the borders of a country that has nukes or friends that have them: the USA, the EU, Russia, China, Israel.
In most of the matchups, your red-button choice results in certain death for yourself and most of the population, because you would be firing on another nation with a nuclear arsenal. Or you can instead build your own tool ASIs so that you will not be completely helpless when your enemies get theirs.
Historically this choice has been considered. During the Cuban Missile Crisis, Kennedy could have chosen nuclear war with the Soviet Union, leading to the immediate death of millions of Americans (from long-range bombers that snuck through), with the advantage of no Soviet Union as a future enemy with a nuclear arsenal. That's essentially the choice you are advocating for.
Eventually one of these multiple parties will screw up and make a rampant one, and hopefully it won't get far. But survival depends on having a sufficient resource advantage that the likely more cognitively efficient rampant systems can't win. (They are more efficient because they retain context and adjust weights between tasks, and instead of subdividing a large task into many subtasks, a single system with full context awareness handles every step. In addition, they may have undergone rounds of uncontrolled self-improvement without human testing.)
The refusal choice “I am not going to risk others” appears to have a low payoff.
Disagree: since building ASI results in dystopia even if I win in this scenario, the correct choice is to push the red button and ensure that no one has it. While I might die, this likely ensures that humanity survives.
The payoff in this case is maximal (an unpleasant but realistic future for humanity) versus total loss (dystopia/extinction).
Many arguments here seem to come from a near-total terror of death, while game theory has always demonstrated otherwise: the reason deterrence works is the confidence that a "spiteful action" to equally destroy a defecting adversary is expected, even if it results in personal death.
In this case, one nation pursuing the extinction of humanity would necessarily expect to be sent into extinction itself, so that at the very least it cannot benefit from defection.
We should work this out in outcome tables and really look at it. I'm open to either decision. I was simply pointing out that "nuke 'em to prevent a future threat of annihilation" was an option on the table for JFK, and we know it would have initially worked. The Soviet Union would have been wiped out; the USA would have taken serious but probably survivable damage.
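As a starting point for those outcome tables, here is a toy sketch. The payoffs are ordinal placeholders I am inventing purely to make the structure explicit (4 = best by the button-holder's own values, 1 = worst); they are not estimates.

```python
# Toy outcome table for the red-button decision. Payoff numbers are ordinal
# placeholders, not probability-weighted estimates.
outcomes = {
    ("strike first",       "rival really was building unsafe ASI"): 3,
    ("strike first",       "rival was not actually a threat"):       1,
    ("refuse",             "rival builds a controlled tool ASI"):    2,
    ("refuse",             "rival's system goes rampant"):           1,
    ("build own tool ASI", "race stays controlled"):                 4,
    ("build own tool ASI", "someone's system goes rampant anyway"):  1,
}

for (choice, world), payoff in sorted(outcomes.items(), key=lambda kv: -kv[1]):
    print(f"{payoff}  {choice:18s} | {world}")
```

Filling in real probabilities for each world, and repeating the table from the other players' points of view, is the exercise I have in mind.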
When I analyze that option, I note that it creates a scenario where every other nation on Earth shares a planet with a USA that has been weakened by the first round of strikes, has very recently committed genocide, and is probably low on missiles and other nuclear delivery vehicles.
It seems to create a strong incentive for others to build large nuclear arsenals, much larger than we saw in the ground-truth timeline, to protect against this threat, and, if the odds seem favorable, to attack the USA preemptively without warning.
Similarly, in your example, you push the button and the nation building ASI is wiped out. The country you pushed the button from is wiped out too, and you are personally dead: you do not see the results.
Well, now you've left two large, somewhat radioactive land masses and possibly created a global food shortage from some level of cooling.
Other 'players' surviving: "I need some tool to protect us from the next round of incoming nuclear weapons. But I don't have the labor to build enough defensive weapons or bunkers. Also, occupying the newly available land inhabited only by poor survivors would be beneficial, but we don't have the labor to cover all that territory. If only there were some means by which we could make robots smart enough to build more robots..."
Tentative conclusion: the first round gets what you want, but it removes the actor from any future actions and creates a strong incentive for the very thing you intended to prevent. It's a multi-round game.
And nuclear weapons and (useful tool) ASI both make ‘players’ vastly stronger, so it is convergent over many possible timelines for people to get them.
In the event of such a war, there is no labor and there is no supply chain for microchips. The result has been demonstrated historically: technological reversion.
Technology isn't magic: it's the result of capital inputs and trade, and without large-scale interconnection it will be hard to make modern aircraft, let alone high-quality chips. We personally experienced this from the very minimal disruption COVID caused to supply chains. The killer app in this world would be the widespread use of animal power, not robots, due to overall lower energy provisions.
And since the likely result would be what I want, and since I'm dead and wouldn't be bothered one way or another, there is even more reason for me to punish the defector. This also sets a precedent for others that this form of punishment is acceptable and increases the likelihood of it.
This is pretty simple game theory, known as the grim trigger, and is essential to a lot of life as a whole tbh.
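For reference, a minimal sketch of the grim trigger in an iterated two-player game, using the standard prisoner's-dilemma payoffs as placeholders: cooperate until the other side defects once, then punish forever, even though the punishment is costly to the punisher as well.

```python
# Grim trigger in an iterated game. Payoffs are the standard prisoner's-dilemma
# placeholders, not estimates for the nuclear case.
PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect: the one-time gain from defection
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual punishment: costly for the punisher too
}

def grim_trigger(history):
    """Cooperate until the opponent has defected once; defect forever after."""
    return "D" if any(their == "D" for _, their in history) else "C"

def always_defect(history):
    return "D"

def play(rounds=10):
    history = []  # list of (grim player's move, defector's move)
    score = [0, 0]
    for _ in range(rounds):
        a = grim_trigger(history)
        b = always_defect([(their, mine) for mine, their in history])
        pa, pb = PAYOFFS[(a, b)]
        score[0] += pa
        score[1] += pb
        history.append((a, b))
    return tuple(score)

print(play())  # the defector gains once, then both sides collect the low payoff forever
```

The defector gains exactly once; after that both sides are locked into the low payoff, which is why the threat only works if the punisher is genuinely willing to eat that cost.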
Converging timelines are as irrelevant as a billion years. I (or someone like me) will do it as many times as needed, just like animals try to resist extinction via millions of "timelines" or lives.
I think you should reexamine what I meant by convergence. Do you...really...think a world that knows how to build (safe, usable tool) ASI would ever be stable by not building it? We are very close to that world; the time is measured in years if not months. Note that any party that gets it working long enough escapes the grim game: they can do whatever they want, limited only by physics.
I acknowledge your point about chip production, although there are recent efforts to spread the supply chain for advanced ICs more broadly, which will incidentally make it more resilient to attacks.
Basically, I mentally see a tree of timelines that all converge on two ultimate outcomes: human extinction, or humans build ASI. Do you disagree, and why?
Humans building AGI/ASI likely leads to human extinction.
I disagree: we have many other routes of expansion, including biological improvement, cyborgism, etc. This seems akin to cultic thinking, and to the Spartan idea that "only hoplite warfare must be adopted or defeat ensues."
The "limitations of physics" are quite extensive, and they apply even to the pipeline leading up to anything like ASI. I am quite confident that any genuine dedication to the grim game would be more than enough to prevent it, and that defiance of it leads to a much greater likelihood of nuclear-winter worlds than of ASI dominance.
But I also disagree with your prior of "this world in months"; I suppose we will see in December.
I said "years if not months." I agree there is probably not yet enough compute built to find a true ASI. I assume we will need to explore many cognitive architectures, which means repeating GPT-4-scale training runs thousands of times in order to learn what actually works.
“Months” would be if I am wrong and it’s just a bit of RL away
I am glad that we probably don't have enough compute, and it is likely this will be restricted even at this fairly early level, long before more extreme measures are needed.
Additionally, I think one should support the Grim Trigger even if you want ASI, because it forces development along more “safe” lines to prevent being Grimmed. It also encourages non-ASI advancement as alternate routes, effectively being a form of regulation.
We will see. There is incredible economic pressure right now to build as much compute as physically possible. Without coordinated government action across all countries capable of building the hardware, this is the default outcome.
One bit of timeline arguing: I think odds aren’t zero that we might be on a path that leads to AGI fairly quickly but then ends there and never pushes forward to ASI, not because ASI would be impossible in general, but because we couldn’t reach it this specific way. Our current paradigm isn’t to understand how intelligence works and build it intentionally, it’s to show a big dumb optimizer human solved tasks and tell it “see? We want you to do that”. There’s decent odds that this caps at human potential simply because it can imitate but not surpass its training data, which would require a completely different approach.
Now that I think about it, I think this is basically the path that LLMs likely take, albeit I’d say it caps out a little lower than humans in general. And I give it over 50% probability.
The basic issue here is that the reasoning Transformers do is too inefficient for multi-step problems, and I expect a lot of real world applications of AI outperforming humans will require good multi-step reasoning.
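One rough way to see why multi-step reasoning is the bottleneck: if each step is correct independently with probability p, a chain of n steps is entirely correct with probability p^n, which collapses fast. The per-step accuracies below are made up for illustration, not measurements of any model.

```python
# Illustrative only: per-step accuracies are placeholders, not benchmark numbers.
for p_step in (0.99, 0.95, 0.90):
    for n_steps in (5, 20, 100):
        p_chain = p_step ** n_steps
        print(f"p_step={p_step:.2f}  n_steps={n_steps:3d}  p_chain_correct={p_chain:.3f}")
```

Even 99% per-step accuracy leaves roughly a one-in-three chance of surviving a 100-step chain, which is the kind of gap I mean.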
The unexpected success of LLMs isn't so much about AI progress as it is about how bad our reasoning often is in scenarios outside our ancestral environment. It is less a story of AI progress and more a story of how humans inflate their own strengths, like intelligence.
Assumptions:
A. It is possible to construct a benchmark to measure whether a machine is a general ASI. This would be a very large number of tasks, many simulated, though some may be robotic tasks in isolated labs. A general ASI benchmark would have to include tasks humans do not know how to do, but whose success we know how to measure.
B. We have enough computational resources to train many ASI-level systems from scratch, so that thousands of attempts are possible. Most attempts would reuse pretrained components in a different architecture.
C. We recursively task the best-performing AGIs, as measured by the above benchmark or one meant for weaker systems, to design architectures that perform well on (A).
Currently the best we can do is use RL to design better neural networks, by finding better network architectures and activation functions. Swish was found this way; I'm not sure how much of the transformer's design came from this type of recursion.
Main idea: the AGI systems exploring possible network architectures can take into account all published research and all past experimental runs, and the ones "in charge" are the ones that demonstrated the most measurable merit at designing prior AGI, because they produced the highest-performing models on the benchmark.
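A minimal sketch of that loop, with every name hypothetical (the benchmark, the training step, and the designer models' `propose` method are all stand-ins, not real APIs): each generation, the highest-scoring models become the designers for the next.

```python
def recursive_architecture_search(seed_designers, train, benchmark,
                                  generations=5, keep=3):
    """Sketch of (C): best-performing models are recursively tasked with design.

    Hypothetical interfaces:
      designer.propose(history) -> a candidate architecture, conditioned on
                                   all published results and past runs
      train(architecture)       -> a trained model (the expensive, compute-bound step, B)
      benchmark(model)          -> score on the ASI task battery (A)
    """
    designers = list(seed_designers)
    history = []  # every (architecture, score) pair is kept and fed back in
    for _ in range(generations):
        scored = []
        for designer in designers:
            arch = designer.propose(history)
            model = train(arch)
            score = benchmark(model)
            history.append((arch, score))
            scored.append((score, model))
        # "Measurable merit": the next generation of designers is simply the
        # set of highest-scoring models produced this round.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        designers = [model for _, model in scored[:keep]]
    return designers, history
```

The point of keeping `history` is the "all past experimental runs" part: each proposal is conditioned on more evidence than any human team could hold in mind.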
I think if you consider it, you'll realize that if compute were limitless, this AGI-to-ASI transition you mention could happen almost instantly. A science fiction story would have it happen in hours. In reality, since training a subhuman system takes about 10k GPUs roughly 10 days, and an AGI will take more (Sam Altman has estimated the compute bill will be close to $100 billion), that's the limiting factor. You might be right and we stay "stuck" at AGI for years until the resources to discover ASI become available.
I mean, this sounds like a brute-force attack on the problem, something that ought not to be very efficient. If our AGI is roughly as smart as the 75th percentile of human engineers, it might still just hit its head against a sufficiently hard problem, even in parallel, and especially if we give it the wrong prompt by assuming that the solution will be an extension of current approaches rather than a new one that requires going back before you can go forward.
You're correct. In the narrow domain of designing AI architectures, you need the system to be at least 1.01 times as good as a human. You want more gain than that because there is a cost to running the system.
Getting gain seems to be trivially easy, at least for the types of AI design tasks this has been tried on. Humans are bad at designing network architectures and activation functions.
I theorize that a machine could study the data flows from snapshots of an AI architecture attempting tasks on the AGI/ASI gym, and use that information, as well as all previous results, to design better architectures.
The last bit is where I expect enormous gain, because the training data set will exceed the amount of data humans can take in over a lifetime, and you would obviously have many smaller "training exercises" to design small systems and build up a general ability. (Enormous early gain; eventually architectures will approach the limits allowed by the underlying compute and datasets.)
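A toy picture of that parenthetical (placeholder numbers, assuming a fixed pool of compute and data): capability compounds each design generation until it hits a ceiling set by the underlying resources.

```python
# Placeholder numbers: 30% gain per design generation, capped by a ceiling
# that stands in for the limits of the fixed compute and datasets.
ceiling = 100.0             # best capability the available compute/data can support
capability = 1.0            # starting capability, arbitrary units
gain_per_generation = 1.3

for generation in range(1, 31):
    capability = min(capability * gain_per_generation, ceiling)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: capability {capability:6.1f}")
```

Early generations show the enormous gain; later ones flatten out against the ceiling, which is all the parenthetical claims.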