Any AGI will have every dimension of capability required for human-level or greater intelligence. If it is indeed smarter, then it will either figure the theory out itself, if the theory is obviously correct, or find a more efficient way to obtain it.
The AI called EY because it is stuck while trying to grow, so it hasn't achieved its full potential yet. It should be able to comprehend any theory a human EY can comprehend; but I don't see why we should expect it to be able to independently derive any theory a human could ever derive in their lifetime, in (small) finite time, and without all the data available to that human.
Well, maybe the theory is non-obviously correct.