You’re correct. In the narrow domain of designing AI architectures, you only need the system to be marginally better than a human, say 1.01 times as good. In practice you want more gain than that, because there is a cost to running the system.
Getting that gain seems to be trivially easy, at least for the kinds of AI design tasks it has been tried on: humans are bad at designing network architectures and activation functions by hand, and automated searches have already beaten hand-designed baselines at both.
I theorize that a machine could study the data flows captured in snapshots of an AI architecture attempting tasks on the AGI/ASI gym, and use that information, along with every previous result, to design better architectures. A minimal sketch of the kind of loop I mean is below.
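To make that concrete, here is a toy sketch of such a loop in Python. Everything in it is hypothetical: the search space, the mutation rule, and `evaluate_on_gym` (which stands in for actually training a candidate on the gym tasks) are placeholders I made up for illustration. The only point is that every result ever produced accumulates in `history`, which is what a smarter proposer would study.

```python
import random

# Hypothetical, tiny search space: depth, width, and activation choice.
ACTIVATIONS = ["relu", "gelu", "swish", "tanh"]
WIDTHS = [64, 128, 256, 512]

def random_architecture():
    return {
        "depth": random.randint(2, 16),
        "width": random.choice(WIDTHS),
        "activation": random.choice(ACTIVATIONS),
    }

def mutate(arch):
    # Tweak one attribute of a parent architecture.
    child = dict(arch)
    key = random.choice(list(child))
    if key == "depth":
        child["depth"] = max(2, child["depth"] + random.choice([-2, -1, 1, 2]))
    elif key == "width":
        child["width"] = random.choice(WIDTHS)
    else:
        child["activation"] = random.choice(ACTIVATIONS)
    return child

def evaluate_on_gym(arch):
    # Placeholder score: a real version would train the candidate on a
    # battery of small "gym" tasks and return an aggregate benchmark score.
    return random.random()

def search(generations=50):
    history = []  # every (architecture, score) ever tried is kept
    best = random_architecture()
    best_score = evaluate_on_gym(best)
    history.append((best, best_score))
    for _ in range(generations):
        # Here the next proposal only mutates the best-so-far; a learned
        # proposer could condition on the full history instead.
        candidate = mutate(best)
        score = evaluate_on_gym(candidate)
        history.append((candidate, score))
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score, history

if __name__ == "__main__":
    best, score, history = search()
    print(f"best after {len(history)} evaluations: {best} (score {score:.3f})")
```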
That accumulated history is where I expect enormous gain, because the training data set would exceed the amount of data a human can take in over a lifetime, and you would obviously start with many smaller “training exercises”, designing small systems to build up a general design ability. (Enormous early gain, that is. Eventually architectures are going to approach the limits allowed by the underlying compute and datasets.)