the question isn’t what class of problems can be understood, it’s how efficiently you can jump to correct conclusions, check them, and build on them. any human can understand almost any topic, given enough interest, and enough willingness to admit error, that they actually try, fail, and self-correct enough times. but some fields might take an unreasonably long time to learn, and most people will get bored before perseverance compensates for low efficiency at jumping to correct conclusions.
in the same way, a sufficiently strong ai is likely to be able to find cleaner representations of the same part of the universe’s manifold of implications, and potentially render implications in parts of possibility space much further away than a human brain could reach, given the same context, actions, and outcomes.
as for why we expect it to be stronger: we expect someone to be able to find algorithms that model the same parts of the universe advanced physics folks study, with the same or better accuracy in-distribution and/or out-of-distribution, for the same order of magnitude of energy it takes to run a human brain. once the model is found, it may even be explainable to humans! the energy constraint seems to push it in that direction, though not perfectly. and the stuff too complex for humans to figure out at all is likely pretty rare: it would have to be pseudo-laws about a fairly large system, and would probably require a huge amount of training data to discover.
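to pin down what “same order of energy as a human brain” means, here’s a minimal back-of-envelope sketch. the ~20 W brain figure is a common estimate; the ~700 W gpu draw is an assumption standing in for one modern accelerator, not a measurement of any particular system:

```python
# back-of-envelope: is an ai's inference budget within an order of
# magnitude of a brain's? all numbers are illustrative assumptions.

BRAIN_WATTS = 20    # common estimate for human brain power draw
GPU_WATTS = 700     # assumed draw for one datacenter gpu (hypothetical)

hours_of_thought = 1.0
brain_kwh = BRAIN_WATTS * hours_of_thought / 1000  # ~0.02 kWh per hour
gpu_kwh = GPU_WATTS * hours_of_thought / 1000      # ~0.70 kWh per hour

# under these assumptions the gap is ~35x: more than one order of
# magnitude, but close enough that efficiency gains could plausibly
# close it, which is what the argument above leans on.
print(f"brain: {brain_kwh:.3f} kWh/h, gpu: {gpu_kwh:.3f} kWh/h, "
      f"ratio: {gpu_kwh / brain_kwh:.0f}x")
```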
semi-chaotic fluid systems will be the last thing intelligence finds exact equations for.