Solving problems in abstract mathematics can be immensely useful even by itself, I think.
Agreed. But the package of ideas entailed by AGI centers on systems that use human-level reasoning, understand natural language, and solve the set of AI-complete problems. The AI-complete problem set can be reduced to finding a compact generative model for natural-language knowledge, which ultimately amounts to finding a compact generative model for the universe we observe.
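To make “compact generative model” concrete, here is a minimal two-part MDL sketch (my own toy illustration; the models and data are hypothetical): a model is scored by the bits needed to describe the model plus the bits needed to encode the observations under it, so the winning model is the one that compresses the data best overall.

```python
import math

def nll_bits(model_probs, data):
    """Bits needed to encode `data` under a per-symbol probability model."""
    return sum(-math.log2(model_probs[symbol]) for symbol in data)

def mdl_score(model_bits, model_probs, data):
    """Two-part code length: bits to state the model + bits for data given it."""
    return model_bits + nll_bits(model_probs, data)

data = "aaab" * 15  # toy observations from a biased source: 45 'a's, 15 'b's

uniform = {"a": 0.5, "b": 0.5}    # trivial to state, but compresses poorly
biased  = {"a": 0.75, "b": 0.25}  # costs more bits to specify, fits better

print(mdl_score(1, uniform, data))  # ~61.0 bits
print(mdl_score(8, biased, data))   # ~56.7 bits: the more compact description wins
```

That trade-off is what “compact” is doing in the claim above: a generative model of the universe earns its description length by compressing what we observe.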
Note: at a low enough level, physics knowledge is indistinguishable from mathematics.
Not quite. Abstract mathematics is too general. Useful “physics knowledge” is a narrow subset of mathematics that compactly describes the particular universe we observe. This specificity is both crucial and potentially dangerous.
But the main use of the system would be to safely study the behavior of a (super-)intelligence, in preparation for a true FAI.
A super-intelligence (super-intelligent relative to us) will necessarily be AI-complete, and thus it must know of our universe. Any system that hopes to understand such a super-intelligence must likewise know of our universe, simply because “super-intelligent” really means “having super-optimization power over this universe”.
By (super-)intelligence I mean EY’s definition: a powerful general-purpose optimization process. It does not need to actually know about natural language or our universe to be AI-complete; the potential to learn them is sufficient. Abstract mathematics is arbitrarily complex, so a sufficiently powerful optimization process in that domain will have to be general enough for everything else.
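To illustrate what “general-purpose” means here, a toy sketch (mine, not anything EY specified): the hill-climber below is domain-blind and treats the objective as a black box, so all domain knowledge, whether abstract mathematics or physics, lives in the objective it is handed rather than in the optimization loop itself.

```python
import random

def hill_climb(objective, n_bits=32, steps=5000, seed=0):
    """Maximize a black-box objective over {0,1}^n by single-bit flips."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    best = objective(state)
    for _ in range(steps):
        i = rng.randrange(n_bits)
        state[i] ^= 1              # propose flipping one bit
        score = objective(state)
        if score >= best:
            best = score           # keep the improvement
        else:
            state[i] ^= 1          # revert the flip
    return state, best

# Domain knowledge lives entirely in the objective, not the optimizer:
ones = lambda bits: sum(bits)      # a toy "abstract math" objective
print(hill_climb(ones)[1])         # reaches 32 with high probability
```

The same loop would optimize a physics-flavored objective just as readily; the optimizer has only the potential to exploit that structure, which is the distinction being drawn.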
In theory we could all be living inside an infinite Turing simulation right now. In practice, any super-intelligence in our universe will need to know of our universe to be super-relevant to our universe.