I mean, this sounds like a brute-force attack on the problem, something that ought not to be very efficient. If our AGI is roughly as smart as the 75th percentile of human engineers, it might still just bang its head against a sufficiently hard problem, even in parallel, and especially if we give it the wrong prompt by assuming that the solution is an extension of current approaches rather than a new one that requires going backward before you can go forward.
You’re correct. In the narrow domain of designing AI architectures, you need the system to be at least 1.01 times as good as a human. You want more gain than that, because there is a cost to running the system.
Getting gain seems to be trivially easy, at least for the types of AI design tasks this has been tried on. Humans are bad at designing network architectures and activation functions.
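A minimal sketch of the kind of search this refers to: score a few candidate activation functions on a toy task and keep the best. The task, candidate set, and the random-feature scoring shortcut are all illustrative assumptions, not anyone's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: fit y = sin(3x) from noisy samples.
X = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * X) + 0.05 * rng.normal(size=X.shape)

# Hypothetical candidate pool an automated search might draw from.
CANDIDATES = {
    "relu": lambda z: np.maximum(z, 0.0),
    "tanh": np.tanh,
    "swish": lambda z: z / (1.0 + np.exp(-z)),  # x * sigmoid(x)
}

def score_activation(act, hidden=64):
    """Random-feature proxy: fix a random hidden layer, solve the linear
    readout in closed form, and report training MSE. A cheap stand-in for
    fully training each candidate."""
    W1 = rng.normal(0, 2.0, size=(1, hidden))
    b1 = rng.normal(0, 1.0, size=hidden)
    H = act(X @ W1 + b1)                       # (256, hidden) random features
    w, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
    return float(np.mean((H @ w - y) ** 2))

scores = {name: score_activation(f) for name, f in CANDIDATES.items()}
print(sorted(scores.items(), key=lambda kv: kv[1]))  # best candidate first
```

Even this dumb loop illustrates the point: the machine can evaluate candidates far faster and more systematically than a human fiddling by intuition.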
I theorize that a machine could study the data flows in snapshots of an AI architecture attempting tasks in the AGI/ASI gym, and use that information, as well as all previous results, to design better architectures.
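A minimal sketch of the “use all previous results” part, under assumed details: architectures get encoded as feature vectors, every evaluated (encoding, score) pair goes into a shared history, and a simple surrogate model fit on that history ranks new candidates before we pay to train them. The encoding, the synthetic score function, and the quadratic surrogate are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
history_x, history_y = [], []  # encodings and measured scores so far

def encode(arch):
    """Hypothetical encoding: (depth, log2(width), skip-connection flag)."""
    return np.array([arch["depth"], np.log2(arch["width"]), arch["skip"]], float)

def evaluate(arch):
    """Stand-in for actually training the architecture on gym tasks.
    Here: a synthetic score that prefers moderate depth/width plus skips."""
    e = encode(arch)
    return -(e[0] - 6) ** 2 - (e[1] - 8) ** 2 + 2 * e[2] + rng.normal(0, 0.1)

def propose(n_candidates=200):
    """Fit a quadratic-feature surrogate on the full history, then return
    the random candidate the surrogate scores highest."""
    cands = [{"depth": int(rng.integers(2, 12)),
              "width": int(2 ** rng.integers(5, 11)),
              "skip": int(rng.integers(0, 2))} for _ in range(n_candidates)]
    if len(history_x) < 5:                  # too little history: just explore
        return cands[0]
    X = np.array(history_x)
    F = np.hstack([X, X ** 2, np.ones((len(X), 1))])  # quadratic features
    w, *_ = np.linalg.lstsq(F, np.array(history_y), rcond=None)

    def pred(a):
        x = encode(a)
        return np.hstack([x, x ** 2, 1.0]) @ w

    return max(cands, key=pred)

for step in range(30):                      # outer design loop
    arch = propose()
    history_x.append(encode(arch))
    history_y.append(evaluate(arch))        # experience accumulates

print(max(history_y))
```

The surrogate is deliberately crude; the structural point is only that every past evaluation informs the next proposal, which is exactly what a human designer cannot do at scale.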
The last bit is where I expect enormous gain, because the training data set will exceed the amount of data a human can take in over a lifetime, and you would obviously have many smaller “training exercises” to design small systems, building up a general ability. (Enormous early gain; eventually architectures are going to approach the limits allowed by the underlying compute and datasets.)
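A minimal sketch of the “many smaller training exercises” idea: the same search loop runs across a curriculum of small design tasks, all writing into one shared history, so later tasks start from accumulated experience. The task names, the single “knob” abstraction for an architecture, and the transfer mechanism are all illustrative assumptions.

```python
import random

random.seed(0)
shared_history = []  # (task, knob, score) tuples, shared across all tasks

def evaluate(task, knob):
    """Stand-in for a small-scale training run; each task has a different
    best 'knob' setting, but they are correlated, so experience transfers."""
    best = {"tiny-vision": 0.4, "tiny-language": 0.5, "tiny-control": 0.6}[task]
    return -(knob - best) ** 2 + random.gauss(0, 0.02)

def propose(task):
    """Start the search near the best knob seen on ANY previous task."""
    if shared_history:
        _, best_knob, _ = max(shared_history, key=lambda h: h[2])
        return min(1.0, max(0.0, best_knob + random.gauss(0, 0.1)))
    return random.random()

for task in ["tiny-vision", "tiny-language", "tiny-control"]:  # curriculum
    for _ in range(50):
        knob = propose(task)
        shared_history.append((task, knob, evaluate(task, knob)))
    print(task, round(max(s for t, k, s in shared_history if t == task), 3))
```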