To be clear, I didn’t mean to say that I think AGI should be evolved. The analogy to breeding was merely to point out that you can notice a basically correct trick for manipulating a complex system without being able to prove that the trick works a priori and without understanding the mechanism by which it works. You notice the regularity on the level of pure conceptual thought, something closer to philosophy than math. Then you prove it afterward. As far as I’m aware, this is indeed how most truly novel discoveries are made.
You’ve forced me to consider, though, that if you know all the math, you’re probably going to be much better and faster at spotting the hidden flaws in such insights. It may not take great mathematical knowledge to come up with a new and useful insight, but it may well take that knowledge to prove the insight correct, to prove that it applies only in certain special cases, or to show that, hey, it wasn’t actually that great after all.